Violations of Classical Linear Regression Assumptions
Mis-Specification
Assumption 1. Y=Xβ+ε
a. What if the true specification is Y=Xβ+Zγ+ε but we leave out the relevant variable Z?
Then the error in the estimated equation is really the sum Zγ+ε. Multiply the true regression by X’ to get the mis-specified OLS:
X’Y = X’Xβ + X’Zγ + X’ε.
The OLS estimator is b = (X’X)⁻¹X’Y = (X’X)⁻¹X’Xβ + (X’X)⁻¹X’Zγ + (X’X)⁻¹X’ε. The last term vanishes on average, so E[b] = β + (X’X)⁻¹X’Zγ. Unless γ=0, or the regression of Z on X is zero in the data, the OLS b is biased.
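To see the bias mechanically, here is a minimal simulation sketch; the data-generating numbers and variable names are invented for illustration. The short regression that omits Z picks up part of Z’s effect through X:

import numpy as np

rng = np.random.default_rng(0)
n = 5000
z = rng.normal(size=n)
x = 0.8 * z + rng.normal(size=n)                  # X correlated with Z, so X'Z != 0
y = 1.0 + 2.0 * x + 1.5 * z + rng.normal(size=n)  # truth: beta = 2, gamma = 1.5

X_full = np.column_stack([np.ones(n), x, z])
X_short = np.column_stack([np.ones(n), x])        # Z omitted

b_full, *_ = np.linalg.lstsq(X_full, y, rcond=None)
b_short, *_ = np.linalg.lstsq(X_short, y, rcond=None)
print("with Z   :", b_full[1])   # close to 2
print("without Z:", b_short[1])  # shifted by (X'X)^{-1} X'Z gamma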
b. What if the true specification is Y=Xβ+ε but we include the irrelevant variable Z: Y=Xβ+Zγ+(ε−Zγ). The error is ε*=ε−Zγ. Var(ε*)=var(ε)+γ’var(Z)γ.
The estimator of [β γ]’ is
[b’ g’]’ = ([X Z]’[X Z])⁻¹ [X Z]’Y.
The expected value of this is [β’ 0’]’. Thus the OLS produces an unbiased estimate of the truth when irrelevant variables are added. However, the standard error of the estimate is in general enlarged by g’Z’Zg/(n−k) (since e*’e* = e’e − 2e’Zg + g’Z’Zg). This could easily lead to the conclusion that β=0 when in fact it is not.
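A small Monte Carlo sketch of this point, with made-up numbers: both fits estimate β without bias, but the regression that includes the irrelevant Z produces a noisier slope estimate.

import numpy as np

rng = np.random.default_rng(0)
n, reps = 50, 2000
slopes_short, slopes_long = [], []
for _ in range(reps):
    z = rng.normal(size=n)
    x = 0.8 * z + rng.normal(size=n)      # Z correlated with X sharpens the effect
    y = 1.0 + 2.0 * x + rng.normal(size=n)  # truth: gamma = 0
    Xs = np.column_stack([np.ones(n), x])
    Xl = np.column_stack([np.ones(n), x, z])
    slopes_short.append(np.linalg.lstsq(Xs, y, rcond=None)[0][1])
    slopes_long.append(np.linalg.lstsq(Xl, y, rcond=None)[0][1])
print("means (short, long):", np.mean(slopes_short), np.mean(slopes_long))  # both ~2
print("stdevs (short, long):", np.std(slopes_short), np.std(slopes_long))   # long is larger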
c. What if the coefficients change within the sample, so β is not a constant? Suppose that βi=β+Ziγ. Then the proper model is Y=X(β+Zγ)+ε=Xβ+XZγ+ε. Thus we need to include the interaction term XZ. If we do not, then we are in the situation (a) above, and the OLS estimates of the coefficients of X will be biased. On the other hand, if we include the interaction term when it is not really appropriate, the estimators are unbiased but not minimum variance. We can get fooled about the true value of β.
How do you test whether the interactions belong? Run an unconstrained regression (which includes the interactions) and then run a constrained regression (set the interaction coefficients equal to zero). Then [(SSEconst−SSEunconst)/q]/[SSEunconst/(n−k)] ~ F(q, n−k), where q = the number of interaction terms and k = the number of coefficients in the unconstrained model.
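The test is easy to compute from the two SSEs. Here is a sketch on simulated data; all names and numbers are ours, not part of the notes:

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)
z = rng.normal(size=n)
y = 1.0 + 2.0 * x + 0.5 * x * z + rng.normal(size=n)  # truth includes an interaction

def sse(X, y):
    """Sum of squared errors from an OLS fit of y on X."""
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ b
    return resid @ resid

X_con = np.column_stack([np.ones(n), x, z])         # constrained: no interaction
X_unc = np.column_stack([np.ones(n), x, z, x * z])  # unconstrained: with interaction
q, k = 1, X_unc.shape[1]
F = ((sse(X_con, y) - sse(X_unc, y)) / q) / (sse(X_unc, y) / (n - k))
p = stats.f.sf(F, q, n - k)
print(f"F = {F:.2f}, p-value = {p:.4f}")  # large F => the interaction belongs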
d. Many researchers do a “search” for the proper specification. This can lead to spurious results, and we will look at this in some detail in a lecture to follow.
Censored Data and Frontier Regression
Assumption 2. E[ε|X]=0.
Suppose that E[εi|X]=μ≠0. Note: this is the same for all i. b = (X’X)⁻¹X’Y = (X’X)⁻¹X’(Xβ+ε) = β + (X’X)⁻¹X’ε. Thus E[b] = β + μ(X’X)⁻¹X’1. The term (X’X)⁻¹X’1 is the regression of 1 on X, but the first column of X is 1, so the resulting regression coefficients must be [1 0 0 … 0]’. As a result, E[b] = β + [μ 0 0 … 0]’. Only the intercept is biased.
• Now suppose that E[εi|X]=μi and this varies with i; that is, μ ≠ μ·1. By reasoning like the above, E[b] = β + (X’X)⁻¹X’μ. The regression of μ on X will in general have non-zero coefficients everywhere, so the estimates of all the coefficients will be biased.
In particular, what if the data were censored in the sense that only observations of Y that are neither too small nor too large are included in the sample: MIN ≤ Yi ≤ MAX. Hence for values of Xi such that Xiβ is very small or very large, only errors that are high or low, respectively, will lead to observations in the dataset. This makes E[εi|X] vary with i and can produce the type of bias discussed above for all the coefficients, not just the intercept. See the graph below, where the slope is also biased.
[Figure: regression fit on a censored sample, illustrating that the slope estimate is also biased.]
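A quick censoring sketch in code; the thresholds, coefficients, and sample size are invented for illustration. Dropping observations with extreme Y attenuates the slope:

import numpy as np

rng = np.random.default_rng(0)
n = 20000
x = rng.uniform(0, 10, size=n)
y = 1.0 + 1.0 * x + rng.normal(scale=2.0, size=n)  # true slope = 1

keep = (y > 3.0) & (y < 9.0)                       # MIN <= Y <= MAX censoring
X = np.column_stack([np.ones(keep.sum()), x[keep]])
b, *_ = np.linalg.lstsq(X, y[keep], rcond=None)
print("slope on censored sample:", b[1])           # noticeably below 1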
Frontier Regression: Stochastic Frontier Analysis[1]
Cost Regression: Ci=a + bQi + εi + φi
The term a+bQ+ε represents the minimum cost measured with a slight measurement error ε. Given this, the actual costs must be above the minimum so the inefficiency term φ must be positive. Suppose that φ has an exponential distribution:
f(φ) = (1/λ)e^(−φ/λ) for φ ≥ 0.
[Note: E[φ]=λ and Var[φ]=λ².] Suppose that the measurement error ε ~ N(0,σ²) and is independent of the inefficiency φ. The joint density of ε and φ is
f(ε,φ) = (1/(λσ√(2π))) e^(−ε²/(2σ²)) e^(−φ/λ).
Let the total error be denoted θ=ε+φ. [Note: E[θ]=λ and Var[θ]=σ²+λ².] Then the joint density of the inefficiency and the total error is
f(θ,φ) = (1/(λσ√(2π))) e^(−(θ−φ)²/(2σ²)) e^(−φ/λ).
The marginal distribution of the total error is found by integrating f(θ,φ) with respect to φ over the range [0,∞). Using “complete-the-square” this can be seen to equal
f(θ) = (1/λ) e^(σ²/(2λ²) − θ/λ) Φ(θ/σ − σ/λ),
where Φ is the cumulative standard normal.
To fit the model to n data points, we would select a, b, λ and σ to maximize the log-likelihood. Writing θi = Ci − a − bQi,
ln L = −n ln λ + nσ²/(2λ²) − (1/λ)Σi θi + Σi ln Φ(θi/σ − σ/λ),
where the sums run over i = 1, …, n.
Once we have estimated the parameters, we can measure the amount of inefficiency for each observation, φi. The conditional pdf f(φi|θi) is computed for θi=Ci-a-bQi:
f(φi|θi) = f(θi,φi)/f(θi) ∝ exp(−(φi − (θi − σ²/λ))²/(2σ²)) for φi ≥ 0. This is a normal distribution truncated at zero, with mode θi−σ²/λ, assuming this is positive. The degree of cost inefficiency is defined as IEi = e^(φi); this is a number greater than 1, and the bigger it is, the more inefficiently large the cost. Of course, we do not know φi, but if we evaluate IEi at the posterior mode θi−σ²/λ it equals IEi ≈ e^(θi − σ²/λ). Note that the term σ²/λ captures the idea that we do not know precisely what the minimum cost equals, so we slightly discount the measured cost to account for our uncertainty about the frontier.
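Here is a sketch of this maximum-likelihood fit under the normal-exponential assumptions above. The simulated data, starting values, and all variable names are our own illustration:

import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 500
Q = rng.uniform(1, 10, size=n)
# Simulated costs: frontier a + bQ, noise N(0, 0.3^2), inefficiency Exp(mean 0.8)
C = 2.0 + 1.5 * Q + rng.normal(scale=0.3, size=n) + rng.exponential(scale=0.8, size=n)

def neg_loglik(params):
    a, b, lam, sigma = params
    theta = C - a - b * Q
    ll = (-n * np.log(lam) + n * sigma**2 / (2 * lam**2)
          - theta.sum() / lam
          + norm.logcdf(theta / sigma - sigma / lam).sum())
    return -ll

res = minimize(neg_loglik, x0=[1.0, 1.0, 1.0, 1.0],
               bounds=[(None, None), (None, None), (1e-6, None), (1e-6, None)])
a, b, lam, sigma = res.x
theta = C - a - b * Q
IE = np.exp(np.maximum(theta - sigma**2 / lam, 0))  # inefficiency at the posterior mode
print("estimates (a, b, lambda, sigma):", res.x)
print("first few IE values:", IE[:5])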
Non-Spherical Errors
Assumption 3. var(Y|X) = var(ε|X) = σ²I
Suppose that var(ε|X) = σ²W, where W is a symmetric, positive definite matrix but W≠I. What are the consequences for OLS?
a. E[b] = E[(X’X)⁻¹X’(Xβ+ε)] = β + (X’X)⁻¹X’E[ε] = β, so OLS is still unbiased even if W≠I.
b. Var[b] = E[(b−β)(b−β)’] = (X’X)⁻¹X’E[εε’]X(X’X)⁻¹ = σ²(X’X)⁻¹X’WX(X’X)⁻¹ ≠ σ²(X’X)⁻¹
Hence, the OLS computed standard errors and t-stats are wrong. The OLS estimator will not be BLUE.
Generalized Least-Squares
Suppose we find a matrix P (n×n) such that PWP’=I, or equivalently W=P⁻¹P’⁻¹ or W⁻¹=P’P (use the spectral decomposition). Multiply the regression model (Y=Xβ+ε) on the left by P: PY=PXβ+Pε. Write PY=Y*, PX=X* and Pε=ε*, so in the transformed variables Y*=X*β+ε*. Why do this? Look at the variance of ε*: Var(ε*) = E[ε*ε*’] = E[Pεε’P’] = PE[εε’]P’ = σ²PWP’ = σ²I. The error ε* is spherical; that’s why.
GLS estimator: b* = (X*’X*)⁻¹X*’Y* = (X’P’PX)⁻¹X’P’PY = (X’W⁻¹X)⁻¹X’W⁻¹Y.
Analysis of the transformed-data equation shows that the GLS b* is BLUE, so it has lower variance than the OLS b.
Var[b*] = σ²(X*’X*)⁻¹ = σ²(X’W⁻¹X)⁻¹
How do we estimate σ²? [Note: from OLS, E[e’e]/(n−k) = E[ε’Mε]/(n−k) = E[tr(ε’Mε)]/(n−k) = E[tr(Mεε’)]/(n−k) = tr(ME[εε’])/(n−k) = σ²tr(MW)/(n−k). Since W≠I, tr(MW)≠n−k, so E[e’e]/(n−k) ≠ σ².] Hence, to estimate σ² we need to use the errors from the transformed equation Y* = X*b* + e*.
s*² = (e*’e*)/(n−k)
E[s*²] = tr(M*E[ε*ε*’])/(n−k) = σ²tr(M*PWP’)/(n−k) = σ²tr(M*)/(n−k) = σ². Hence s*² is an unbiased estimator of σ².
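A compact numeric sketch, assuming W is known; the heteroskedastic W and all numbers below are invented. We factor W⁻¹ = P’P with a Cholesky decomposition, transform the data, and recover b* and s*²:

import numpy as np

rng = np.random.default_rng(0)
n, k = 100, 2
X = np.column_stack([np.ones(n), rng.normal(size=n)])
W = np.diag(rng.uniform(0.5, 2.0, size=n))   # a known non-spherical W
L = np.linalg.cholesky(np.linalg.inv(W))     # W^{-1} = L L' = P'P, so P = L'
P = L.T
beta = np.array([1.0, 2.0])
y = X @ beta + rng.multivariate_normal(np.zeros(n), W)  # sigma^2 = 1 here

Xs, ys = P @ X, P @ y                        # transformed data: Y* = X* beta + eps*
b_star, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
e_star = ys - Xs @ b_star
s2_star = e_star @ e_star / (n - k)          # unbiased estimate of sigma^2
print("b* =", b_star, " s*^2 =", s2_star)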
Important Note: all of the above assumes that W is known and that it can be factored into P⁻¹P’⁻¹. How do we know W? Two special cases are autocorrelation and heteroskedasticity.
Autocorrelated Errors
Suppose that Yt=Xtβ+ut (notice the subscript t denotes time since this problem occurs most frequently with time-series data). Instead of assuming that the errors ut are iid, let us assume they are autocorrelated (also called serially correlated errors) according to the lagged formula
ut=ρut-1+εt,
where εt is iid. Successively lagging and substituting for ut gives the equivalent formula
ut = εt + ρεt-1 + ρ²εt-2 + …
Using this, we can see that E[ut²] = σ²(1+ρ²+ρ⁴+…) = σ²/(1−ρ²), E[utut-1] = ρσ²/(1−ρ²),
E[utut-2] = ρ²σ²/(1−ρ²), …, E[utut-m] = ρᵐσ²/(1−ρ²). Therefore, the variance matrix of u is
var(u) = E[uu’] = σ²W,
where W is the n×n matrix whose (t,s) element is ρ^|t−s|/(1−ρ²):
W = (1/(1−ρ²)) ·
⎡ 1      ρ      ρ²     …  ρⁿ⁻¹ ⎤
⎢ ρ      1      ρ      …  ρⁿ⁻² ⎥
⎢ ρ²     ρ      1      …  ρⁿ⁻³ ⎥
⎢ …      …      …      …  …    ⎥
⎣ ρⁿ⁻¹   ρⁿ⁻²   ρⁿ⁻³   …  1    ⎦
It is possible to show that W⁻¹ can be factored into P’P where
P = ⎡ √(1−ρ²)   0    0   …   0   0 ⎤
    ⎢ −ρ        1    0   …   0   0 ⎥
    ⎢ 0        −ρ    1   …   0   0 ⎥
    ⎢ …         …    …   …   …   … ⎥
    ⎣ 0         0    0   …  −ρ   1 ⎦
Given this P, the transformed data for GLS is
Y* = PY = (√(1−ρ²)Y1, Y2−ρY1, Y3−ρY2, …, Yn−ρYn-1)’,
and likewise X* = PX: the first row of X* is √(1−ρ²)X1, and row t is Xt − ρXt-1 for t ≥ 2.
Notice that only the first element is unique. The rest just involve subtracting a fraction ρ of the lagged value from the current value. Many modelers drop the first observation and use only the last n−1 because it is easier, but this throws away information and I would not recommend doing it unless you had a very large n. The Cochrane-Orcutt technique alternates between estimating ρ from the errors and re-estimating β from the newly transformed data (Y*, X*); a code sketch follows the steps below.
1. Guess a starting ρ0.
2. At stage m, estimate β in model Yt-ρmYt-1=(Xt-ρmXt-1)β+εt using OLS. If the estimate bm is not different from the previous bm-1, then stop. Otherwise, compute error vector em=(Y*-X*bm).
3. Estimate ρ in emt=ρem,t-1+εt via OLS. This estimate becomes the new ρm+1. Go back to 2.
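A minimal sketch of the iteration on simulated AR(1) errors; the names and numbers are ours, and for simplicity it drops the first observation, which the notes caution against for small n:

import numpy as np

rng = np.random.default_rng(0)
n, rho_true = 300, 0.7
x = rng.normal(size=n)
u = np.zeros(n)
for t in range(1, n):
    u[t] = rho_true * u[t - 1] + rng.normal()  # AR(1) errors
y = 1.0 + 2.0 * x + u

X = np.column_stack([np.ones(n), x])
rho, b = 0.0, np.zeros(2)
for _ in range(50):
    # Step 2: OLS on quasi-differenced data (first observation dropped here)
    Xs = X[1:] - rho * X[:-1]
    ys = y[1:] - rho * y[:-1]
    b_new, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
    # Step 3: re-estimate rho from the regression errors
    e = y - X @ b_new
    rho = (e[1:] @ e[:-1]) / (e[:-1] @ e[:-1])
    if np.allclose(b_new, b, atol=1e-8):       # stop when b settles down
        break
    b = b_new
print("rho =", rho, " b =", b)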
Durbin-Watson test for ρ≠0 in ut=ρut-1+εt.
1. Compute OLS errors e.
2. Calculate d = Σt=2..n (et − et-1)² / Σt=1..n et².
3. Since d ≈ 2(1−ρ̂): d < 2 suggests ρ > 0, d > 2 suggests ρ < 0, and d near 2 is consistent with ρ = 0.
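The statistic is one line of code; here is a hypothetical helper continuing the earlier sketches:

import numpy as np

def durbin_watson(e: np.ndarray) -> float:
    """Durbin-Watson statistic d from a vector of OLS residuals e."""
    return np.sum(np.diff(e) ** 2) / np.sum(e ** 2)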