From Wikipedia, the free encyclopedia
Regression analysis is any statistical method in which the mean of one or more random variables is predicted conditional on other (measured) random variables. Particular methods include linear regression, logistic regression, Poisson regression, and unit-weighted regression; regression is also a central tool of supervised learning. Regression analysis is more than curve fitting (choosing a curve that best fits given data points); it involves fitting a model with both deterministic and stochastic components. The deterministic component is called the predictor and the stochastic component is called the error term.
The simplest form of a regression model contains a dependent variable (also called "outcome variable," "endogenous variable," or "Y-variable") and a single independent variable (also called "factor," "exogenous variable," or "X-variable").
Typical examples are the dependence of the blood pressure Y on the age X of a person, or the dependence of the weight Y of certain animals on their daily ration of food X. This dependence is called the regression of Y on X.
See also: multivariate normal distribution, important publications in regression analysis.
Regression is usually posed as an optimization problem as we are attempting to find a solution where the error is at a minimum. The most common error measure that is used is the least squares: this corresponds to a Gaussian likelihood of generating observed data given the (hidden) random variable. In a certain sense, least squares is an optimal estimator: see the Gauss-Markov theorem.
The optimization problem in regression is typically solved by algorithms such as the gradient descent algorithm, the Gauss-Newton algorithm, and the Levenberg-Marquardt algorithm. Probabilistic algorithms such as RANSAC can be used to find a good fit for a sample set, given a parametrized model of the curve function.
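As one concrete illustration (not from the original article), the sketch below fits a straight line by plain gradient descent on the sum of squared errors; the synthetic data, the learning rate and the iteration count are all arbitrary assumptions made for the example.

```python
import numpy as np

# Illustrative data: y is roughly 2*x + 1 plus Gaussian noise (assumed, not from the article).
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2.0 * x + 1.0 + rng.normal(scale=1.0, size=x.size)

# Parameters of the line y = theta0 + theta1 * x, initialised at zero.
theta = np.zeros(2)
learning_rate = 1e-3

for _ in range(20000):
    residuals = (theta[0] + theta[1] * x) - y                   # model minus data
    grad = np.array([residuals.sum(), (residuals * x).sum()])   # gradient of 0.5 * sum of squared errors
    theta -= learning_rate * grad                               # gradient descent step

print(theta)  # roughly (1, 2) for this synthetic data
```

For a linear model this iterative approach is only didactic; a direct least-squares solve (shown later) gives the same answer in one step.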
Regression can be expressed as a maximum likelihood method of estimating the parameters of a model. However, for small amounts of data, this estimate can have high variance. Bayesian methods can also be used to estimate regression models. A prior is placed over the parameters, which incorporates everything known about the parameters. (For example, if one parameter is known to be non-negative a non-negative distribution can be assigned to it.) A posterior distribution is then obtained for the parameter vector. Bayesian methods have the advantages that they use all the information that is available and they are exact, not asymptotic, and thus work well for small data sets. Some practitioners use maximum a posteriori (MAP) methods, a simpler method than full Bayesian analysis, in which the parameters are chosen that maximize the posterior. MAP methods are related to Occam's Razor: there is a preference for simplicity among a family of regression models (curves) just as there is a preference for simplicity among competing theories.
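As a hedged illustration of the MAP idea, the sketch below (my own, not from the article) places an independent zero-mean Gaussian prior on each coefficient of a linear model; maximizing the posterior is then equivalent to ridge regression, i.e. least squares plus an L2 penalty whose strength depends on the assumed prior and noise variances. The function name, the data and the value of `lam` are all illustrative assumptions.

```python
import numpy as np

def map_linear_regression(X, y, lam):
    """MAP estimate for a linear model with Gaussian noise and an independent
    zero-mean Gaussian prior on the coefficients.
    lam = noise_variance / prior_variance (an assumed ratio).
    This is ridge regression: solve (X^t X + lam * I) theta = X^t y."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

# Tiny synthetic check (values assumed for illustration only).
rng = np.random.default_rng(1)
X = np.column_stack([np.ones(30), rng.normal(size=30)])
y = X @ np.array([0.5, -1.2]) + rng.normal(scale=0.3, size=30)
print(map_linear_regression(X, y, lam=1.0))   # shrunk towards zero by the prior
print(np.linalg.lstsq(X, y, rcond=None)[0])   # ordinary least squares, for comparison
```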
Purpose and formulation
The goal of regression is to describe a set of data as accurately as possible. To do this, we set the following mathematical context:
$(\Omega, \mathcal{A}, P)$ will denote a probability space and $(\Gamma, S)$ will be a measure space. $\Theta \subseteq \mathbb{R}^k$ is a set of coefficients.
Very often, $\Gamma = \mathbb{R}$ and $S = \mathcal{B}(\mathbb{R})$, the Borel $\sigma$-algebra on $\mathbb{R}$.
The response variable (or vector of observations) Y is a random variable, i.e. a measurable function:
$Y : (\Omega, \mathcal{A}) \to (\Gamma, S)$.
This variable will be "explained" using other random variables called factors. Some people say Y is a dependent variable (because it depends on the factors) and call the factors independent variables. However, the factors can very well be statistically dependent on one another (for example if one takes $X$ and $X^2$), and the response variable can be statistically independent of some of the factors. Therefore, the terminology "dependent" and "independent" can be confusing and should be avoided.
Let $p \in \mathbb{N}^*$; p is called the number of factors.
The factors are random variables as well: for every $i \in \{1, \dots, p\}$, $X_i : (\Omega, \mathcal{A}) \to (\Gamma, S)$.
Let $\eta : \Gamma^p \times \Theta \to \Gamma$.
We finally define $\varepsilon = Y - \eta(X_1, \dots, X_p, \theta)$, which means that $Y = \eta(X_1, \dots, X_p, \theta) + \varepsilon$, or more concisely:
- $Y = \eta(X, \theta) + \varepsilon$   (E)
if we accept the convention that X is either a matrix with one factor per column or a single vector if p = 1. For example, Y could be the number of correct answers to a test and X could be the age of the person taking the test. The last term, $\varepsilon$, is a random variable called the error, which is supposed to model the variability in the experiment (i.e., under exactly the same conditions, the output Y of the experiment might differ slightly from experiment to experiment). This term represents the part of Y not explained by the model η.
The general form of the function η is known. In fact, the only element we don't know in the equation (E) is θ. The aim of regression is, given a set of data, to find an estimate of θ satisfying some criterion.
Doing a regression takes three steps: (1) deciding what kind of function η we will use, (2) choosing the criterion to optimize, (3) finding and computing an estimator for θ.
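To make equation (E) concrete before going through these steps, the following sketch (illustrative only; the function η, the "true" θ and the noise level are arbitrary assumptions) generates observations as a deterministic part η(X, θ) plus a stochastic error ε, which is exactly the decomposition that regression sets out to invert.

```python
import numpy as np

def eta(x, theta):
    # Assumed deterministic part of the model: a quadratic in x, linear in theta.
    return theta[0] + theta[1] * x + theta[2] * x**2

rng = np.random.default_rng(42)
theta_true = np.array([1.0, -0.5, 0.25])     # hypothetical "true" coefficients
x = rng.uniform(0, 4, size=100)              # one factor, 100 observations
epsilon = rng.normal(scale=0.2, size=100)    # stochastic error term
y = eta(x, theta_true) + epsilon             # equation (E): Y = eta(X, theta) + epsilon
```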
Choice of the regression function
Linear regression
For continuous variables, linear regression is the most common case in practice because it is the easiest to compute and gives good results. Note that by "linear" we mean "linear in θ", not "linear in X". When we do a linear regression, we are implicitly supposing that $\Gamma = \mathbb{R}$ and $\Theta = \mathbb{R}^{p+1}$, and that, given a set of factors $X_1, \dots, X_p$, the best approximation of the response variable Y we can find is a linear combination of these factors. The aim of linear regression is to find the right coefficients of this linear combination.
We choose η the following way:
$\eta(X, \theta) = \theta_0 + \theta_1 X_1 + \dots + \theta_p X_p$
Logistic regression
If the variable y takes only discrete values (for example, a yes/no variable), logistic regression is preferred. It is equivalent to performing a linear regression on the log-odds (the logit of the probability). The outcome of this type of regression is a function describing how the probability of a given event (e.g. the probability of getting "yes") varies with the factors.
Several methods exist to solve the regression problem efficiently. The most common one is Gauss-Markov least-squares estimation, described below, but it requires extra hypotheses.
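For the logistic model just described, a standard approach (not prescribed by the article) is to maximize the likelihood directly with a general-purpose optimizer. The sketch below does this on synthetic yes/no data; the data-generating coefficients and the choice of scipy's BFGS are my assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(theta, X, y):
    """Negative log-likelihood of a logistic model P(Y=1|x) = 1 / (1 + exp(-x.theta))."""
    z = X @ theta
    # log(1 + exp(z)) computed stably via logaddexp
    return np.sum(np.logaddexp(0.0, z) - y * z)

# Hypothetical yes/no data: the probability of "yes" increases with x (assumed).
rng = np.random.default_rng(3)
x = rng.uniform(-3, 3, size=200)
p = 1.0 / (1.0 + np.exp(-(0.5 + 2.0 * x)))
y = rng.binomial(1, p)

X = np.column_stack([np.ones_like(x), x])            # intercept + one factor
result = minimize(neg_log_likelihood, x0=np.zeros(2), args=(X, y), method="BFGS")
print(result.x)   # roughly (0.5, 2.0) for this synthetic data
```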
Criterion to optimize
The criterion usually used is to minimize the sum of squared errors, $\|Y - \eta(X, \theta)\|^2$. The motivation behind this criterion is that the Euclidean norm defines a metric; solving the regression problem in that case is therefore equivalent to finding the function that lies closest to Y. But why this particular metric? Simply because it lends itself very nicely to a geometrical interpretation, as we will see later.
Of course, this criterion is only one of many criteria we could use. For example, we could modify this metric slightly to downweight observations that are known to have a high variance because we consider them unreliable. This leads to weighted regression.
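As a sketch of the weighted-regression idea (the data and the known, unequal variances are invented for illustration), each squared error is multiplied by a weight inversely proportional to the assumed variance of that observation:

```python
import numpy as np

def weighted_least_squares(X, y, weights):
    """Minimize sum_i w_i * (y_i - x_i . theta)^2.
    Equivalent to rescaling each row by sqrt(w_i) and solving ordinary least squares."""
    sw = np.sqrt(weights)
    return np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]

# Illustration with made-up data where the later observations are noisier (assumed).
rng = np.random.default_rng(7)
x = np.linspace(0, 1, 40)
noise_sd = np.where(x < 0.5, 0.05, 0.5)          # known, unequal standard deviations
y = 3.0 * x + 1.0 + rng.normal(scale=noise_sd)
X = np.column_stack([np.ones_like(x), x])
weights = 1.0 / noise_sd**2                       # downweight the unreliable points
print(weighted_least_squares(X, y, weights))      # close to (1, 3)
```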
Choice of an estimator
Under assumptions which are met relatively often, there exists an optimal solution to the linear regression problem. These assumptions are called the Gauss-Markov hypotheses. See also the Gauss-Markov theorem.
The Gauss-Markov hypothesis
We suppose that $E[\varepsilon] = 0$ and that $\operatorname{Var}(\varepsilon) = E[\varepsilon \varepsilon^t] = \sigma^2 I$ (the errors are uncorrelated, but not necessarily independent), where $\sigma^2 < \infty$ and I is the identity matrix.
The linear regression problem is then equivalent to an orthogonal projection. It is as if we were considering a set of random variables containing Y and that we projected Y on a subspace of linear functions. This makes the computing of an estimator fairly straightforward. For a proof of this, please refer to least-squares estimation of linear regression coefficients.
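A quick numerical way to see the projection interpretation (synthetic data, not from the article): the least-squares fitted values are the orthogonal projection of y onto the column space of X, so the residual is orthogonal to every column of X.

```python
import numpy as np

# Numerical check of the projection view (synthetic data, assumed for illustration).
rng = np.random.default_rng(11)
n, p = 50, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p))])   # design matrix with intercept
y = rng.normal(size=n)

theta_hat = np.linalg.solve(X.T @ X, X.T @ y)   # least-squares coefficients
y_hat = X @ theta_hat                           # projection of y onto the column space of X
residual = y - y_hat

# The residual is orthogonal to every column of X (up to rounding error).
print(np.allclose(X.T @ residual, 0.0))         # True
```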
Gauss-Markov least-squares estimation of the coefficients
If we suppose all p factors are vectors of the same length n, we can build an $n \times (p+1)$ matrix X whose first column contains only ones (for the constant term) and whose remaining columns are the factors:
$X = \begin{pmatrix} 1 & x_{1,1} & \cdots & x_{1,p} \\ \vdots & \vdots & & \vdots \\ 1 & x_{n,1} & \cdots & x_{n,p} \end{pmatrix}$
Supposing this matrix is of full rank, it can be shown (for a proof of this, see least-squares estimation of linear regression coefficients) that a good estimator of the parameters is:
$\hat{\theta} = (X^t X)^{-1} X^t y$
where $X^t$ is the transpose of the matrix X, and y is a realization of Y (Y is a random variable, i.e. a function, and y is the value that Y takes for the experiment under consideration). Based on these data, an estimate of the function η we are looking for is:
$\hat{\eta}(x_1, \dots, x_p) = \hat{\theta}_0 + \hat{\theta}_1 x_1 + \dots + \hat{\theta}_p x_p$
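A minimal sketch of this estimator in code (the synthetic data and the use of numpy are my assumptions, not the article's):

```python
import numpy as np

# Synthetic data with known coefficients (assumed for illustration).
rng = np.random.default_rng(5)
n = 100
x = rng.uniform(0, 10, size=n)
X = np.column_stack([np.ones(n), x])                 # one factor plus the constant term
y = X @ np.array([2.0, 0.7]) + rng.normal(scale=0.5, size=n)

# Textbook formula theta_hat = (X^t X)^{-1} X^t y.
# In practice np.linalg.lstsq or a linear solver is preferred to forming the inverse explicitly.
theta_hat = np.linalg.inv(X.T @ X) @ X.T @ y
print(theta_hat)                                     # approximately (2.0, 0.7)

def eta_hat(x_new):
    """Estimated regression function eta_hat(x) = theta_hat[0] + theta_hat[1] * x."""
    return theta_hat[0] + theta_hat[1] * x_new

print(eta_hat(5.0))
```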
Alternatives to Gauss-Markov
The Gauss-Markov estimator is extremely efficient: in fact, the Gauss-Markov theorem states that, of all estimators of the linear regression coefficients which are unbiased and linear in Y, the least-squares estimator is the most efficient (it has the smallest variance). Unfortunately, the Gauss-Markov hypotheses are fairly stringent and are often not met in practice: departures from the assumptions can corrupt the results quite significantly.
A rather naïve example of this is given in the figure below:
[Figure: all points lie on a straight line except one; the fitted regression line is shown in red.]
However, robust estimators are a bit fiddly to use, and people tend to overlook the Gauss-Markov hypotheses and rely on the power of Gauss-Markov, justifying it with the central limit theorem (for large values of n, the Gauss-Markov assumptions are often approximately met).
If the Gauss-Markov hypotheses are not met, a variety of techniques are available.
- If the error term is not normal but belongs to an exponential family, one can use generalized linear models. Other techniques include the use of weighted least squares or transforming the dependent variable using the Box-Cox transformation.
- If outliers are present, the normal distribution can be replaced by a t-distribution or, alternatively, robust regression methods may be used (a sketch follows this list).
- If the predictor η is not linear, nonparametric regression, semiparametric regression, or nonlinear regression may be used.
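As a sketch of the robust-regression option in the list above (the Huber loss and the optimizer are my choices, not the article's), replacing the squared error by a loss that grows only linearly for large residuals keeps a single outlier from dragging the fit away:

```python
import numpy as np
from scipy.optimize import minimize

def huber_loss(r, delta=1.0):
    """Huber loss: quadratic for small residuals, linear for large ones."""
    a = np.abs(r)
    return np.where(a <= delta, 0.5 * r**2, delta * (a - 0.5 * delta))

def robust_fit(X, y, delta=1.0):
    """Robust linear fit by minimizing the summed Huber loss of the residuals."""
    objective = lambda theta: huber_loss(y - X @ theta, delta).sum()
    theta0 = np.linalg.lstsq(X, y, rcond=None)[0]     # start from ordinary least squares
    return minimize(objective, theta0, method="Nelder-Mead").x

# Data lying on a line, plus one gross outlier (mirrors the naive example above).
x = np.arange(10, dtype=float)
y = 2.0 * x + 1.0
y[7] = 40.0                                           # the outlier
X = np.column_stack([np.ones_like(x), x])
print(np.linalg.lstsq(X, y, rcond=None)[0])           # pulled towards the outlier
print(robust_fit(X, y))                               # close to (1, 2)
```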
Confidence interval for the estimate, assuming normality, homoscedasticity, and uncorrelatedness
How much confidence can we have in the values of θ we estimated from the data? To answer this, we unfortunately need to add hypotheses yet again. Suppose that:
$\varepsilon \sim \mathcal{N}(0, \sigma^2 I)$,
i.e. the errors are normally distributed, homoscedastic and uncorrelated.
Then we can derive the distribution of the least-squares estimate of the parameters.
If $\hat{\theta} = (X^t X)^{-1} X^t y$ and $\hat{\sigma}^2 = \|y - X\hat{\theta}\|^2 / (n - p - 1)$, and if we name $s_j$ the diagonal element of the matrix $(X^t X)^{-1}$ corresponding to $\theta_j$, then
$\dfrac{\hat{\theta}_j - \theta_j}{\hat{\sigma} \sqrt{s_j}} \sim t_{n-p-1}.$
A $1 - \alpha$ confidence interval for $\theta_j$ is therefore:
$\left[ \hat{\theta}_j - t_{n-p-1,\,1-\alpha/2}\, \hat{\sigma} \sqrt{s_j},\ \hat{\theta}_j + t_{n-p-1,\,1-\alpha/2}\, \hat{\sigma} \sqrt{s_j} \right].$
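A minimal sketch of these intervals in code, under the same hypotheses (the synthetic data and the use of numpy/scipy are my own assumptions):

```python
import numpy as np
from scipy import stats

def confidence_intervals(X, y, alpha=0.05):
    """1 - alpha confidence intervals for the coefficients, assuming normal,
    homoscedastic and uncorrelated errors (the hypotheses added above)."""
    n, k = X.shape                                   # k = number of estimated coefficients
    XtX_inv = np.linalg.inv(X.T @ X)
    theta_hat = XtX_inv @ X.T @ y
    residuals = y - X @ theta_hat
    sigma2_hat = residuals @ residuals / (n - k)     # unbiased estimate of the error variance
    half_width = stats.t.ppf(1 - alpha / 2, df=n - k) * np.sqrt(sigma2_hat * np.diag(XtX_inv))
    return np.column_stack([theta_hat - half_width, theta_hat + half_width])

# Synthetic example (values assumed for illustration).
rng = np.random.default_rng(2)
x = rng.uniform(0, 5, size=60)
X = np.column_stack([np.ones_like(x), x])
y = X @ np.array([1.0, 0.8]) + rng.normal(scale=0.4, size=60)
print(confidence_intervals(X, y))
```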
Examples
First example
The following data set gives the average heights and weights for American women aged 30-39 (source: The World Almanac and Book of Facts, 1975).
Height (in) | 58 | 59 | 60 | 61 | 62 | 63 | 64 | 65 | 66 | 67 | 68 | 69 | 70 | 71 | 72 |
Weight (lbs) | 115 | 117 | 120 | 123 | 126 | 129 | 132 | 135 | 139 | 142 | 146 | 150 | 154 | 159 | 164 |
We would like to see how the weight of these women depends on their height. We are therefore looking for a function f such that y = f(x), where y is the weight of the women and x their height. Intuitively, we can guess that if the women's proportions and density are constant, then their weight must depend on the cube of their height. A plot of the data set confirms this supposition.
We are therefore looking for coefficients θ0, θ1 and θ2 satisfying as well as possible (in the sense of the Gauss-Markov hypotheses) the equation:
$y = \theta_0 + \theta_1 x + \theta_2 x^3 + \varepsilon$
This means we want to project y on the subspace generated by the variables 1, x and x^3. The matrix X is constructed simply by putting a column of 1's (the constant term in the model), a column with the original values (the x in the model), and a column with these values cubed (x^3). It can be written:
1 | x | x^3 |
1 | 58 | 195112 |
1 | 59 | 205379 |
1 | 60 | 216000 |
1 | 61 | 226981 |
1 | 62 | 238328 |
1 | 63 | 250047 |
1 | 64 | 262144 |
1 | 65 | 274625 |
1 | 66 | 287496 |
1 | 67 | 300763 |
1 | 68 | 314432 |
1 | 69 | 328509 |
1 | 70 | 343000 |
1 | 71 | 357911 |
1 | 72 | 373248 |
The diagonal elements of the matrix $(X^t X)^{-1}$ (sometimes called the "dispersion matrix") are the quantities $s_j$ used in the confidence intervals below. The vector $\hat{\theta} = (X^t X)^{-1} X^t y$ is approximately $(147,\ -1.98,\ 4.27 \times 10^{-4})^t$. Therefore:
$\hat{\eta}(x) = 147 - 1.98\,x + 4.27 \times 10^{-4}\,x^3$
A plot of this function shows that it lies quite close to the data set.
The confidence intervals are computed using:
$\left[ \hat{\theta}_j - t_{n-3,\,1-\alpha/2}\, \hat{\sigma} \sqrt{s_j},\ \hat{\theta}_j + t_{n-3,\,1-\alpha/2}\, \hat{\sigma} \sqrt{s_j} \right]$
with:
- $\hat{\sigma}$ the estimated standard deviation of the error term
- $s_1 = 1927.3$, $s_2 = 1.033$, $s_3 = 6.37 \times 10^{-9}$
- $\alpha = 5\%$
Therefore, we can say that, with a probability of 0.95, each coefficient $\theta_j$ lies in the corresponding interval $\hat{\theta}_j \pm t_{12,\,0.975}\, \hat{\sigma} \sqrt{s_j}$ (here n = 15 and three coefficients are estimated, leaving 12 degrees of freedom).
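The computation in this example can be reproduced in a few lines (the use of numpy is my choice; the data are those in the table above, and the fitted coefficients should come out close to the values quoted):

```python
import numpy as np

# The height/weight data from the table above.
height = np.arange(58, 73, dtype=float)
weight = np.array([115, 117, 120, 123, 126, 129, 132, 135, 139,
                   142, 146, 150, 154, 159, 164], dtype=float)

# Design matrix with columns 1, x and x^3, as described in the example.
X = np.column_stack([np.ones_like(height), height, height**3])
theta_hat, *_ = np.linalg.lstsq(X, weight, rcond=None)
print(theta_hat)                           # expected to be close to 147, -1.98 and 4.27e-4
print(np.diag(np.linalg.inv(X.T @ X)))     # compare with the s_j values quoted above
```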
Second example
We are given a vector of x values and another vector of y values, and we are attempting to find a function f such that $f(x_i) = y_i$.
- let $x = (x_1, \dots, x_n)$ and $y = (y_1, \dots, y_n)$ be the given data vectors
Let's assume that our solution is in the family of functions defined by a 3rd degree Fourier expansion written in the form:
- $f(x) = a_0/2 + a_1 \cos(x) + b_1 \sin(x) + a_2 \cos(2x) + b_2 \sin(2x) + a_3 \cos(3x) + b_3 \sin(3x)$
where $a_i, b_i$ are real numbers. This problem can be represented in matrix notation as:
$\begin{pmatrix} 1/2 & \cos(x_1) & \sin(x_1) & \cos(2x_1) & \sin(2x_1) & \cos(3x_1) & \sin(3x_1) \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ 1/2 & \cos(x_n) & \sin(x_n) & \cos(2x_n) & \sin(2x_n) & \cos(3x_n) & \sin(3x_n) \end{pmatrix} \begin{pmatrix} a_0 \\ a_1 \\ b_1 \\ a_2 \\ b_2 \\ a_3 \\ b_3 \end{pmatrix} = \begin{pmatrix} y_1 \\ \vdots \\ y_n \end{pmatrix}$
Filling this form in with our given values yields a problem of the form Xw = y.
This problem can now be posed as an optimization problem to find the minimum sum of squared errors.
Solving this with least squares, $\hat{w} = (X^t X)^{-1} X^t y$, yields the estimated coefficients; thus the 3rd-degree Fourier function that fits the data best is given by:
- $f(x) = 4.25 \cos(x) - 6.13 \cos(2x) + 2.88 \cos(3x)$.
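The same procedure can be sketched in code. Since the original x and y vectors are not reproduced above, the data here are hypothetical, generated from a known Fourier series plus noise; only the construction of the design matrix and the least-squares solve mirror the example.

```python
import numpy as np

def fourier_design_matrix(x, degree=3):
    """Design matrix for f(x) = a0/2 + sum_k (a_k cos(kx) + b_k sin(kx))."""
    columns = [0.5 * np.ones_like(x)]
    for k in range(1, degree + 1):
        columns.append(np.cos(k * x))
        columns.append(np.sin(k * x))
    return np.column_stack(columns)

# Hypothetical data generated from a known 3rd-degree Fourier series (assumed, not the article's).
rng = np.random.default_rng(9)
x = np.linspace(0, 2 * np.pi, 50)
y = 4.0 * np.cos(x) - 6.0 * np.cos(2 * x) + 3.0 * np.cos(3 * x) + rng.normal(scale=0.1, size=x.size)

X = fourier_design_matrix(x)
w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)   # w = (a0, a1, b1, a2, b2, a3, b3)
print(w_hat)                                    # roughly (0, 4.0, 0, -6.0, 0, 3.0, 0) here
```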
See also
- Confidence interval
- Extrapolation
- Kriging
- Prediction
- Prediction interval
- Statistics
- Trend estimation
External links
- Curve Expert shareware
- Zunzun.com Online curve and surface fitting
- Curvefit online ten-point demo
- Curvefit online curve-fitting textbook
- The R Project free software
- SixSigmaFirst software
- TableCurve2D and TableCurve3D by Systat automated regression software
- A step-by-step example: finding the linear regression equation, variances, standard errors, coefficients of correlation and determination, and confidence intervals (Mazoo's Learning Blog)
- Regression of Weakly Correlated Data - a simulation of a common mistake