ECONOMICS 351* -- NOTE 4
M.G. Abbott

Statistical Properties of the OLS Coefficient Estimators
1. Introduction
We derived in Note 2 the OLS (Ordinary Least Squares) estimators \hat{\beta}_j (j = 0, 1) of the regression coefficients \beta_j (j = 0, 1) in the simple linear regression model given by the population regression equation, or PRE,

    Y_i = \beta_0 + \beta_1 X_i + u_i    (i = 1, ..., N)    ... (1)
where u_i is an iid random error term. The OLS sample regression equation (SRE) corresponding to PRE (1) is

    Y_i = \hat{\beta}_0 + \hat{\beta}_1 X_i + \hat{u}_i    (i = 1, ..., N)    ... (2)

where \hat{\beta}_0 and \hat{\beta}_1 are the OLS coefficient estimators given by the formulas
    \hat{\beta}_1 = \frac{\sum_i x_i y_i}{\sum_i x_i^2}    ... (3)

    \hat{\beta}_0 = \bar{Y} - \hat{\beta}_1 \bar{X}    ... (4)

where x_i \equiv X_i - \bar{X}, y_i \equiv Y_i - \bar{Y}, \bar{X} = \sum_i X_i / N, and \bar{Y} = \sum_i Y_i / N.
Why Use the OLS Coefficient Estimators?
The reason we use these OLS coefficient estimators is that, under assumptions A1-A8 of the classical linear regression model, they have several desirable statistical properties. This note examines these desirable statistical properties primarily in terms of the OLS slope coefficient estimator \hat{\beta}_1; the same properties apply to the intercept coefficient estimator \hat{\beta}_0.
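Formulas (3) and (4) are easy to verify numerically. The following sketch is not part of the original note; the data values are hypothetical, and the result is cross-checked against NumPy's least-squares fit.

```python
import numpy as np

# Hypothetical sample data (N = 5 observations)
X = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
Y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

# Deviations from sample means: x_i = X_i - Xbar, y_i = Y_i - Ybar
x = X - X.mean()
y = Y - Y.mean()

# Formula (3): slope estimator = sum(x_i * y_i) / sum(x_i^2)
beta1_hat = np.sum(x * y) / np.sum(x**2)

# Formula (4): intercept estimator = Ybar - beta1_hat * Xbar
beta0_hat = Y.mean() - beta1_hat * X.mean()

# Cross-check against NumPy's least-squares fit (degree-1 polynomial)
b1_np, b0_np = np.polyfit(X, Y, 1)
assert np.isclose(beta1_hat, b1_np) and np.isclose(beta0_hat, b0_np)
```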
ECON 351* -- Note 4: Statistical Properties of OLS Estimators
... Page 1 of 12 pages
2. Statistical Properties of the OLS Slope Coefficient Estimator
PROPERTY 1: Linearity of \hat{\beta}_1

The OLS coefficient estimator \hat{\beta}_1 can be written as a linear function of the sample values of Y, the Y_i (i = 1, ..., N).

Proof: Start with formula (3) for \hat{\beta}_1:
    \hat{\beta}_1 = \frac{\sum_i x_i y_i}{\sum_i x_i^2}
                  = \frac{\sum_i x_i (Y_i - \bar{Y})}{\sum_i x_i^2}
                  = \frac{\sum_i x_i Y_i - \bar{Y} \sum_i x_i}{\sum_i x_i^2}
                  = \frac{\sum_i x_i Y_i}{\sum_i x_i^2},

because \sum_i x_i = 0.
Defining the observation weights k_i = \frac{x_i}{\sum_i x_i^2} for i = 1, ..., N, we can rewrite the last expression above for \hat{\beta}_1 as

    \hat{\beta}_1 = \sum_i k_i Y_i, where k_i \equiv \frac{x_i}{\sum_i x_i^2} (i = 1, ..., N).    ... (P1)
Note that formula (3) and the definition of the weights k_i imply that \hat{\beta}_1 is also a linear function of the y_i, such that

    \hat{\beta}_1 = \sum_i k_i y_i.

Result: The OLS slope coefficient estimator \hat{\beta}_1 is a linear function of the sample values Y_i or y_i (i = 1, ..., N), where the coefficient of Y_i or y_i is k_i.
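The linearity result can be checked numerically. A minimal sketch, assuming NumPy and hypothetical simulated data, confirms that formula (3) and the weighted sums \sum_i k_i Y_i and \sum_i k_i y_i all coincide:

```python
import numpy as np

# Hypothetical sample; any data with nonconstant X works
rng = np.random.default_rng(0)
X = rng.normal(size=50)
Y = 1.0 + 2.0 * X + rng.normal(size=50)

x = X - X.mean()          # x_i = X_i - Xbar
y = Y - Y.mean()          # y_i = Y_i - Ybar
k = x / np.sum(x**2)      # observation weights k_i from (P1)

beta1_hat = np.sum(x * y) / np.sum(x**2)   # formula (3)

# Property 1: beta1_hat is the same linear combination of the Y_i and of the y_i
assert np.isclose(beta1_hat, np.sum(k * Y))
assert np.isclose(beta1_hat, np.sum(k * y))
```

The two weighted sums agree because \sum_i k_i = 0, so centering Y changes nothing.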
Properties of the Weights k_i

In order to establish the remaining properties of \hat{\beta}_1, it is necessary to know the arithmetic properties of the weights k_i.
[K1] \sum_i k_i = 0; i.e., the weights k_i sum to zero.

    \sum_i k_i = \sum_i \frac{x_i}{\sum_i x_i^2} = \frac{1}{\sum_i x_i^2} \sum_i x_i = 0, because \sum_i x_i = 0.
[K2] \sum_i k_i^2 = \frac{1}{\sum_i x_i^2}.

    \sum_i k_i^2 = \sum_i \left( \frac{x_i}{\sum_i x_i^2} \right)^2 = \frac{\sum_i x_i^2}{\left( \sum_i x_i^2 \right)^2} = \frac{1}{\sum_i x_i^2}.
[K3] \sum_i k_i x_i = \sum_i k_i X_i.

    \sum_i k_i x_i = \sum_i k_i (X_i - \bar{X}) = \sum_i k_i X_i - \bar{X} \sum_i k_i = \sum_i k_i X_i,

since \sum_i k_i = 0 by [K1] above.
[K4] \sum_i k_i x_i = 1.

    \sum_i k_i x_i = \sum_i \left( \frac{x_i}{\sum_i x_i^2} \right) x_i = \frac{\sum_i x_i^2}{\sum_i x_i^2} = 1.

Implication: \sum_i k_i X_i = 1, by [K3].
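All four weight properties can be verified on any sample of regressor values. A short sketch (hypothetical data, assuming NumPy):

```python
import numpy as np

# Hypothetical regressor values; the weights depend only on the X_i
X = np.array([3.0, 7.0, 1.0, 9.0, 5.0])
x = X - X.mean()            # x_i = X_i - Xbar
k = x / np.sum(x**2)        # k_i = x_i / sum(x_i^2)

assert np.isclose(np.sum(k), 0.0)                      # [K1]: weights sum to zero
assert np.isclose(np.sum(k**2), 1.0 / np.sum(x**2))    # [K2]
assert np.isclose(np.sum(k * x), np.sum(k * X))        # [K3]
assert np.isclose(np.sum(k * x), 1.0)                  # [K4]
```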
PROPERTY 2: Unbiasedness of \hat{\beta}_1 and \hat{\beta}_0

The OLS coefficient estimator \hat{\beta}_1 is unbiased, meaning that E(\hat{\beta}_1) = \beta_1. The OLS coefficient estimator \hat{\beta}_0 is unbiased, meaning that E(\hat{\beta}_0) = \beta_0.

Definition of unbiasedness: The coefficient estimator \hat{\beta}_1 is unbiased if and only if E(\hat{\beta}_1) = \beta_1; i.e., its mean or expectation is equal to the true coefficient \beta_1.
Proof of unbiasedness of \hat{\beta}_1: Start with the formula \hat{\beta}_1 = \sum_i k_i Y_i.

1. Since assumption A1 states that the PRE is Y_i = \beta_0 + \beta_1 X_i + u_i,

    \hat{\beta}_1 = \sum_i k_i Y_i = \sum_i k_i (\beta_0 + \beta_1 X_i + u_i) = \beta_0 \sum_i k_i + \beta_1 \sum_i k_i X_i + \sum_i k_i u_i = \beta_1 + \sum_i k_i u_i,

since \sum_i k_i = 0 by [K1] and \sum_i k_i X_i = 1 by [K4].
2. Now take expectations of the above expression for \hat{\beta}_1, conditional on the sample values {X_i: i = 1, ..., N} of the regressor X. Conditioning on the sample values of the regressor X means that the k_i are treated as nonrandom, since the k_i are functions only of the X_i:

    E(\hat{\beta}_1) = E(\beta_1) + E\left[ \sum_i k_i u_i \right] = \beta_1 + \sum_i k_i E(u_i \mid X_i) = \beta_1 + \sum_i k_i \cdot 0 = \beta_1,

since \beta_1 is a constant, the k_i are nonrandom, and E(u_i \mid X_i) = 0 by assumption A2.
Result: The OLS slope coefficient estimator \hat{\beta}_1 is an unbiased estimator of the slope coefficient \beta_1; that is,

    E(\hat{\beta}_1) = \beta_1.    ... (P2)
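Unbiasedness is a statement about the sampling distribution of \hat{\beta}_1, so it can be illustrated by Monte Carlo: holding the X_i fixed and redrawing the errors many times, the average of the \hat{\beta}_1 estimates should be close to \beta_1. A sketch with hypothetical parameter values:

```python
import numpy as np

# Hypothetical true parameters for the PRE Y_i = beta0 + beta1*X_i + u_i
rng = np.random.default_rng(42)
beta0, beta1, sigma, N = 2.0, 0.5, 1.0, 30
X = rng.uniform(0, 10, size=N)   # regressor values, held fixed across replications

estimates = []
for _ in range(20_000):
    u = rng.normal(0.0, sigma, size=N)   # iid errors with E(u_i | X_i) = 0 (A2)
    Y = beta0 + beta1 * X + u            # PRE (1)
    x = X - X.mean()
    estimates.append(np.sum(x * (Y - Y.mean())) / np.sum(x**2))  # formula (3)

print(np.mean(estimates))   # should be close to beta1 = 0.5
```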
Proof of unbiasedness of \hat{\beta}_0: Start with the formula \hat{\beta}_0 = \bar{Y} - \hat{\beta}_1 \bar{X}.

1. Average the PRE Y_i = \beta_0 + \beta_1 X_i + u_i across i:

    \sum_{i=1}^N Y_i = N \beta_0 + \beta_1 \sum_{i=1}^N X_i + \sum_{i=1}^N u_i    (sum the PRE over the N observations)

    \frac{\sum_{i=1}^N Y_i}{N} = \frac{N \beta_0}{N} + \beta_1 \frac{\sum_{i=1}^N X_i}{N} + \frac{\sum_{i=1}^N u_i}{N}    (divide by N)

    \bar{Y} = \beta_0 + \beta_1 \bar{X} + \bar{u}, where \bar{Y} = \sum_i Y_i / N, \bar{X} = \sum_i X_i / N, and \bar{u} = \sum_i u_i / N.
2. Substitute the above expression for \bar{Y} into the formula \hat{\beta}_0 = \bar{Y} - \hat{\beta}_1 \bar{X}:

    \hat{\beta}_0 = \bar{Y} - \hat{\beta}_1 \bar{X} = \beta_0 + \beta_1 \bar{X} + \bar{u} - \hat{\beta}_1 \bar{X} = \beta_0 + (\beta_1 - \hat{\beta}_1) \bar{X} + \bar{u},

since \bar{Y} = \beta_0 + \beta_1 \bar{X} + \bar{u}.
3. Now take the expectation of \hat{\beta}_0 conditional on the sample values {X_i: i = 1, ..., N} of the regressor X. Conditioning on the X_i means that \bar{X} is treated as nonrandom in taking expectations, since \bar{X} is a function only of the X_i:

    E(\hat{\beta}_0) = E(\beta_0) + E[(\beta_1 - \hat{\beta}_1) \bar{X}] + E(\bar{u})
                     = \beta_0 + \bar{X} E(\beta_1 - \hat{\beta}_1) + E(\bar{u})    since \beta_0 is a constant
                     = \beta_0 + \bar{X} E(\beta_1 - \hat{\beta}_1)    since E(\bar{u}) = 0 by assumptions A2 and A5
                     = \beta_0 + \bar{X} [E(\beta_1) - E(\hat{\beta}_1)]
                     = \beta_0 + \bar{X} (\beta_1 - \beta_1)    since E(\beta_1) = \beta_1 and E(\hat{\beta}_1) = \beta_1
                     = \beta_0.
Result: The OLS intercept coefficient estimator \hat{\beta}_0 is an unbiased estimator of the intercept coefficient \beta_0; that is,

    E(\hat{\beta}_0) = \beta_0.    ... (P2)
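The decomposition in step 2 holds exactly in every sample, not just in expectation; unbiasedness then follows by taking expectations. A sketch with hypothetical parameter values checks the identity in one simulated sample:

```python
import numpy as np

# One hypothetical sample generated from the PRE with known beta0, beta1
rng = np.random.default_rng(7)
beta0, beta1, N = 2.0, 0.5, 25
X = rng.uniform(0, 10, size=N)
u = rng.normal(size=N)
Y = beta0 + beta1 * X + u

x = X - X.mean()
beta1_hat = np.sum(x * (Y - Y.mean())) / np.sum(x**2)   # formula (3)
beta0_hat = Y.mean() - beta1_hat * X.mean()             # formula (4)

# Step 2 identity: beta0_hat = beta0 + (beta1 - beta1_hat)*Xbar + ubar,
# which holds exactly in every sample, error draw included.
rhs = beta0 + (beta1 - beta1_hat) * X.mean() + u.mean()
assert np.isclose(beta0_hat, rhs)
```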
PROPERTY 3: The Variance of \hat{\beta}_1

Definition: The variance of the OLS slope coefficient estimator \hat{\beta}_1 is defined as

    Var(\hat{\beta}_1) \equiv E\{ [\hat{\beta}_1 - E(\hat{\beta}_1)]^2 \}.
Derivation of the expression for Var(\hat{\beta}_1):

1. Since \hat{\beta}_1 is an unbiased estimator of \beta_1, E(\hat{\beta}_1) = \beta_1. The variance of \hat{\beta}_1 can therefore be written as

    Var(\hat{\beta}_1) = E\{ [\hat{\beta}_1 - \beta_1]^2 \}.
2. From part (1) of the unbiasedness proofs above, the term [\hat{\beta}_1 - \beta_1], which is called the sampling error of \hat{\beta}_1, is given by

    \hat{\beta}_1 - \beta_1 = \sum_i k_i u_i.
3. The square of the sampling error is therefore

    [\hat{\beta}_1 - \beta_1]^2 = \left( \sum_i k_i u_i \right)^2.

4. Since the square of a sum is equal to the sum of the squares plus twice the sum of the cross products,

    [\hat{\beta}_1 - \beta_1]^2 = \left( \sum_i k_i u_i \right)^2 = \sum_{i=1}^N k_i^2 u_i^2 + 2 \sum_{i<s} \sum k_i k_s u_i u_s.

5. Recall the error-variance assumption A3 and the zero-covariance assumption A4:

    (A3) Var(u_i \mid X_i) = E(u_i^2 \mid X_i) = \sigma^2 > 0 for all i = 1, ..., N;
    (A4) Cov(u_i, u_s \mid X_i, X_s) = E(u_i u_s \mid X_i, X_s) = 0 for all i \neq s.
6. We take expectations conditional on the sample values of the regressor X:

    E\{ [\hat{\beta}_1 - \beta_1]^2 \} = \sum_{i=1}^N k_i^2 E(u_i^2 \mid X_i) + 2 \sum_{i<s} \sum k_i k_s E(u_i u_s \mid X_i, X_s)
                                       = \sigma^2 \sum_i k_i^2 + 0    by A3 and A4
                                       = \frac{\sigma^2}{\sum_i x_i^2}    by [K2].

Result:

    Var(\hat{\beta}_1) = \frac{\sigma^2}{\sum_i x_i^2}.
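Combining A3, A4, and [K2] in the expectation above gives the standard result Var(\hat{\beta}_1) = \sigma^2 / \sum_i x_i^2. A simulation sketch with hypothetical parameter values can check this, comparing the variance of \hat{\beta}_1 across replications (with the X_i held fixed) against the formula:

```python
import numpy as np

# Hypothetical true parameters; X is drawn once and held fixed
rng = np.random.default_rng(1)
beta0, beta1, sigma, N = 1.0, 2.0, 1.5, 40
X = rng.uniform(0, 5, size=N)
x = X - X.mean()
theoretical_var = sigma**2 / np.sum(x**2)   # Var(beta1_hat) = sigma^2 / sum(x_i^2)

draws = []
for _ in range(20_000):
    u = rng.normal(0.0, sigma, size=N)      # iid errors satisfying A3 and A4
    Y = beta0 + beta1 * X + u
    draws.append(np.sum(x * (Y - Y.mean())) / np.sum(x**2))  # formula (3)

print(np.var(draws), theoretical_var)   # the two should be close
```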