


Regression with auto-correlated disturbances

1) Introduction

2) Properties of least squares estimators when the disturbances are auto-correlated

3) The Durbin-Watson test for auto-correlation

4) Generalized least squares

1. Introduction

In the last section we investigated the consequences for the least squares estimator of the disturbances $u_t$ having varying variances, but otherwise satisfying the classical standard conditions. Now we consider a problem we often encounter when analysing time series data, namely that the random disturbances are correlated. We already know that the disturbances take care of the numerous factors which influence the endogenous variable, but which for various reasons have not been explicitly specified in the regression equation. It is evident that some of these factors may show a definite temporal pattern, that is to say, they are correlated over time. Since the disturbances summarize these factors, we intuitively realize that in many situations the disturbances in the regression equation will be correlated. Since this is a breach of one of the classical assumptions of least squares regression, we expect that some of the properties we have learned the least squares estimator to have will not hold in this situation.

2. Properties of least squares estimators when the disturbances are auto-correlated

Since our concern is the consequences of auto-correlated disturbances, we may just as well consider a regression with only one independent variable. That is to say, we specify the regression equation:

(8.2.1) $y_t = \beta_1 + \beta_2 x_t + u_t, \qquad t = 1, 2, \dots, T$

where $T$ denotes the size of the sample.

Regarding the auto-correlated disturbances $u_t$ there are two popular specifications in econometrics:

(8.2.2) $u_t = \rho\,u_{t-1} + \varepsilon_t, \qquad |\rho| < 1$

(8.2.3) $u_t = \varepsilon_t + \theta\,\varepsilon_{t-1}$

In case (8.2.2) the random process $u_t$ is said to be a first-order auto-regressive process, and is usually denoted by the obvious symbol AR(1). The second process (8.2.3) is called a moving average process, and (8.2.3) specifies an MA(1) process. The number 1 appearing in these designations tells us that the processes $u_t$ are specified using one lag only ($u_{t-1}$ or $\varepsilon_{t-1}$). We also note that the process $\varepsilon_t$ appearing in these specifications is supposed to be a purely random (white noise) process with mean zero and variance $\sigma_\varepsilon^2$. In the present course we always assume that the disturbances $u_t$ are AR(1) processes. We also note that the condition $|\rho| < 1$ is important. It means that the process specified by (8.2.2) is a stationary process, which implies that the mean and variance of $u_t$ are constants independent of time $t$, and that the covariance of $u_t$ and $u_{t-s}$ depends only on the time-lag $s$ and not on the time $t$. These properties are easily derived by using the recursion (8.2.2) to solve for $u_t$ in terms of the white noise process $\varepsilon_t$. Alternatively, the mean and variance of $u_t$ and the various covariances of $u_t$ can be derived directly from the specification (8.2.2). Using the standard formula for calculating the mean, we obtain from (8.2.2) that

(8.2.4) $E(u_t) = \rho\,E(u_{t-1}) + E(\varepsilon_t)$

Since $u_t$ describes a stationary process it follows from what we have said above that $E(u_t) = E(u_{t-1}) = \mu$ (say), so that (8.2.4) reduces to

(8.2.5) $\mu = \rho\,\mu$, since $E(\varepsilon_t) = 0$

From (8.2.5) it follows that $\mu = E(u_t) = 0$, since $\rho \neq 1$.
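As an aside, the derivation route mentioned above is also straightforward: solving the recursion (8.2.2) by repeated substitution gives the representation

$u_t = \varepsilon_t + \rho\,\varepsilon_{t-1} + \rho^2\varepsilon_{t-2} + \dots = \sum_{j=0}^{\infty}\rho^{j}\varepsilon_{t-j}$,

which converges because $|\rho| < 1$. The zero mean follows immediately from this representation, and the variance and auto-covariances derived below can be read off from it as well.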

The variance of $u_t$ is derived in a similar way. From the specification (8.2.2) we obtain

(8.2.6) $\operatorname{var}(u_t) = E(u_t^2) = E\big[(\rho u_{t-1} + \varepsilon_t)^2\big] = \rho^2 E(u_{t-1}^2) + E(\varepsilon_t^2) + 2\rho\,E(u_{t-1}\varepsilon_t)$

Since $u_{t-1}$ and $\varepsilon_t$ are uncorrelated (why?) the last term of (8.2.6) will vanish. Hence, (8.2.6) reduces to

(8.2.7) $\sigma_u^2 = \rho^2\sigma_u^2 + \sigma_\varepsilon^2$

(8.2.8) $\sigma_u^2 = \operatorname{var}(u_t) = \dfrac{\sigma_\varepsilon^2}{1 - \rho^2}$

Calculating the covariance between $u_t$ and $u_{t-1}$, and generally between $u_t$ and $u_{t-s}$, we again use the recursion (8.2.2). Obviously we have

(8.2.9) $\operatorname{cov}(u_t, u_{t-1}) = E(u_t u_{t-1}) = E\big[(\rho u_{t-1} + \varepsilon_t)u_{t-1}\big] = \rho\,\sigma_u^2$

(8.2.10) $\operatorname{cov}(u_t, u_{t-2}) = E(u_t u_{t-2}) = E\big[(\rho u_{t-1} + \varepsilon_t)u_{t-2}\big] = \rho\,E(u_{t-1}u_{t-2}) = \rho^2\sigma_u^2$

Proceeding in this way we will find that generally

(8.2.11) $\operatorname{cov}(u_t, u_{t-s}) = E(u_t u_{t-s}) = \rho^{s}\sigma_u^2, \qquad s = 1, 2, \dots$

In time-series analysis the covariances calculated above will often be called auto-covariances. If we normalize the auto-covariances by the variance we obtain the auto-correlations $\rho_s = \operatorname{cov}(u_t, u_{t-s})/\operatorname{var}(u_t)$, so we realize that

(8.2.12) $\rho_s = \rho^{s}, \qquad s = 1, 2, \dots$
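For readers who want a numerical check, the following short simulation (not part of the original notes; it assumes Python with numpy, and all variable names are illustrative) generates a long AR(1) series and compares the empirical variance and auto-correlations with the theoretical values in (8.2.8) and (8.2.12).

    import numpy as np

    rng = np.random.default_rng(0)
    rho, sigma_eps, T = 0.7, 1.0, 200_000

    # simulate u_t = rho*u_{t-1} + eps_t, starting from the stationary distribution
    eps = rng.normal(0.0, sigma_eps, size=T)
    u = np.empty(T)
    u[0] = rng.normal(0.0, sigma_eps / np.sqrt(1.0 - rho**2))
    for t in range(1, T):
        u[t] = rho * u[t - 1] + eps[t]

    print("variance:", u.var(), "vs theoretical", sigma_eps**2 / (1 - rho**2))   # (8.2.8)
    for s in range(1, 5):
        r_s = np.corrcoef(u[s:], u[:-s])[0, 1]
        print(f"lag {s}: empirical {r_s:.3f}  theoretical {rho**s:.3f}")          # (8.2.12)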

The interesting question is now whether the auto-correlated disturbances $u_t$ have any consequences for the least squares estimators of the regression parameters $\beta_1$ and $\beta_2$ appearing in the regression (8.2.1). We know already that the OLS estimators of these parameters are given by

(8.2.13) $\hat{\beta}_1 = \bar{y} - \hat{\beta}_2\,\bar{x}$

(8.2.14) $\hat{\beta}_2 = \dfrac{\sum_{t=1}^{T}(x_t - \bar{x})(y_t - \bar{y})}{\sum_{t=1}^{T}(x_t - \bar{x})^2}$

Using the independence of the disturbances $u_t$ of the exogenous variable $x_t$, we readily derive that

(8.2.15) $E(\hat{\beta}_1) = \beta_1, \qquad E(\hat{\beta}_2) = \beta_2$

From (8.2.14) we also find that

(8.2.16) $\operatorname{var}(\hat{\beta}_2) = \dfrac{\sigma_u^2}{S_{xx}}\left[1 + \dfrac{2}{S_{xx}}\sum_{s=1}^{T-1}\rho^{s}\sum_{t=1}^{T-s}(x_t - \bar{x})(x_{t+s} - \bar{x})\right]$

where $S_{xx} = \sum_{t=1}^{T}(x_t - \bar{x})^2$.

When the number of observations $T$ grows, the sum $S_{xx}$ will become infinitely large, which implies that the right-hand side of (8.2.16) will tend to zero. Thus, the estimator $\hat{\beta}_2$ will converge to $\beta_2$ in probability, and we say that $\hat{\beta}_2$ is a consistent estimator. By similar reasoning we can show that $\hat{\beta}_1$ is also a consistent estimator.

Combining these facts with (8.2.15), we conclude that the OLS estimators $\hat{\beta}_1$ and $\hat{\beta}_2$ are unbiased and consistent even though the disturbance process $u_t$ is auto-correlated. So what goes wrong in this situation? Well, we observe from (8.2.16) that auto-correlation changes the expression for the variance of $\hat{\beta}_2$. Note that when the error process $u_t$ is purely random, $\rho = 0$, and (8.2.16) reduces to the standard expression for the variance of $\hat{\beta}_2$. From this we understand that the standard t and F tests and the standard procedures for calculating confidence intervals are not valid when the disturbances are auto-correlated.
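To make the last point concrete, here is a small Monte Carlo sketch (again an illustration added to these notes, assuming numpy; the design of $x_t$ and all parameter values are arbitrary). It simulates data from (8.2.1)-(8.2.2) with a slowly changing regressor and compares the actual spread of the OLS slope estimates with the classical standard error that ignores the auto-correlation; the slope estimates centre on the true $\beta_2$, but the classical formula typically understates their true variability.

    import numpy as np

    rng = np.random.default_rng(1)
    T, beta1, beta2, rho = 50, 1.0, 2.0, 0.8
    x = np.linspace(0.0, 10.0, T)                 # a slowly changing regressor

    slopes, classical_se = [], []
    for _ in range(2000):
        u = np.empty(T)
        u[0] = rng.normal(0.0, 1.0 / np.sqrt(1.0 - rho**2))
        for t in range(1, T):
            u[t] = rho * u[t - 1] + rng.normal()
        y = beta1 + beta2 * x + u

        xd, yd = x - x.mean(), y - y.mean()
        b2 = (xd @ yd) / (xd @ xd)                # OLS slope, as in (8.2.14)
        resid = yd - b2 * xd
        s2 = (resid @ resid) / (T - 2)            # classical estimate of the error variance
        slopes.append(b2)
        classical_se.append(np.sqrt(s2 / (xd @ xd)))

    print("mean of slope estimates     :", np.mean(slopes))        # close to beta2 = 2
    print("true spread of the estimates:", np.std(slopes))
    print("average classical std. error:", np.mean(classical_se))  # typically much smaller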

3. The Durbin-Watson test for auto-correlation

We noted above that auto-correlation in the disturbance process can take several patterns. The auto-regressive and moving average patterns are perhaps the most common, but the error process can also be a combination of these two forms. Since the presence of auto-correlation can cause serious problems for applied statistical analyses, we would like to have reliable tests designed to expose this problem. A finite sample test derived for this purpose is the so-called Durbin-Watson test. The original D-W test is constructed to disclose the existence of a simple auto-regressive disturbance process, i.e. to disclose whether the error process $u_t$ has an AR(1) form. The specific model under consideration is

(8.3.1) $y_t = \beta_1 + \beta_2 x_t + u_t, \qquad t = 1, 2, \dots, T$

(8.3.2) $u_t = \rho\,u_{t-1} + \varepsilon_t, \qquad |\rho| < 1$

As the null hypothesis the D-W test uses the hypothesis of no auto-correlation, so that $H_0\colon \rho = 0$, and in addition that the purely random error term is normally distributed, $\varepsilon_t \sim N(0, \sigma_\varepsilon^2)$.

This hypothesis can be tested against the alternatives $H_1\colon \rho > 0$, $H_1\colon \rho < 0$, or $H_1\colon \rho \neq 0$.

The construction of the D-W test is simple and very intuitive. One starts by regressing $y_t$ on $x_t$ as indicated by (8.3.1). Having obtained the estimates $\hat{\beta}_1$ and $\hat{\beta}_2$, we calculate the residuals

(8.3.3) $\hat{u}_t = y_t - \hat{\beta}_1 - \hat{\beta}_2 x_t, \qquad t = 1, 2, \dots, T$

Then the D-W test is based on the test statistic

(8.3.4) $d = \dfrac{\sum_{t=2}^{T}(\hat{u}_t - \hat{u}_{t-1})^2}{\sum_{t=1}^{T}\hat{u}_t^2}$

We readily see that we have approximately

(8.3.5) $d \approx 2(1 - \hat{\rho})$

where $\hat{\rho}$ is the OLS estimate of $\rho$ obtained from the 'regression' (8.3.2), or

(8.3.6) $\hat{\rho} = \dfrac{\sum_{t=2}^{T}\hat{u}_t\hat{u}_{t-1}}{\sum_{t=2}^{T}\hat{u}_{t-1}^2}$

We observe that $\hat{\rho}$ is almost equal to the empirical correlation coefficient $r$ between $\hat{u}_t$ and $\hat{u}_{t-1}$, since

$r = \dfrac{\sum_{t=2}^{T}\hat{u}_t\hat{u}_{t-1}}{\sqrt{\sum_{t=2}^{T}\hat{u}_t^2\;\sum_{t=2}^{T}\hat{u}_{t-1}^2}} \approx \hat{\rho}$

Hence, we can also write

(8.3.7) $d \approx 2(1 - r)$

The value of the test statistic $d$ depends on the observations of the exogenous variable $x_t$ and on the values of the disturbances $u_t$. However, Durbin and Watson showed that, for given values of $u_t$, $d$ is necessarily contained between two limits $d_L$ and $d_U$ which are independent of the values of $x_t$ and are functions only of the number of observations $T$ and the number of exogenous variables $K$, so that

(8.3.8) $d_L \leq d \leq d_U$

The limits $d_L$ and $d_U$ are random variables whose distributions can be determined for each pair $(T, K)$ under given assumptions on the distribution of $\varepsilon_t$. We noted above that under the null hypothesis $\varepsilon_t \sim N(0, \sigma_\varepsilon^2)$. The distributions of $d_L$ and $d_U$ under the null hypothesis have been tabulated by Durbin and Watson.

Since the correlation coefficient $r$ is restricted to the interval $[-1, 1]$, we observe from (8.3.7) that $d$ is restricted (approximately) to the interval $[0, 4]$. This fact provides us with useful guidelines for when we should reject the null hypothesis when testing against the various alternative hypotheses.

Suppose we wish to test the null hypothesis $H_0\colon \rho = 0$ against the alternative $H_1\colon \rho > 0$.

Since $\hat{\rho}$ is approximately equal to $r$, it follows from (8.3.7) that we have every reason to be doubtful of the null hypothesis if the calculated value of the test statistic $d$ is in the neighbourhood of zero. But whether or not to reject the null hypothesis has to be decided on the basis of the distributions of the two bounds $d_L$ and $d_U$. The usual test procedure recommends that we first choose the level of significance $\alpha$; for this level of significance we determine the relevant fractile values $d_{L,\alpha}$ and $d_{U,\alpha}$ from the distributions of the two bounds, so that $P(d_L < d_{L,\alpha}) = \alpha$ and $P(d_U < d_{U,\alpha}) = \alpha$. The decision process is now:

If $d < d_{L,\alpha}$, we reject the null hypothesis.

If $d_{L,\alpha} \leq d \leq d_{U,\alpha}$, the statistical material is indeterminate (the test is inconclusive).

If $d > d_{U,\alpha}$, we do not reject the null hypothesis.

In a similar way we can test $H_0\colon \rho = 0$ against the alternative $H_1\colon \rho < 0$; in that case it is values of $d$ in the neighbourhood of 4 that speak against the null hypothesis.

Most textbooks give tables of the distributions of the two bounds, see for example Hill et al. Table 5.
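As a practical illustration (added here as a sketch; it assumes numpy, and the data series are placeholders), the Durbin-Watson statistic (8.3.4) is easily computed from the OLS residuals, and the printout also shows the approximation $d \approx 2(1-\hat{\rho})$ from (8.3.5)-(8.3.6). The computed value of $d$ is then compared with the tabulated bounds $d_{L,\alpha}$ and $d_{U,\alpha}$.

    import numpy as np

    def durbin_watson(y, x):
        xd, yd = x - x.mean(), y - y.mean()
        b2 = (xd @ yd) / (xd @ xd)            # OLS slope
        b1 = y.mean() - b2 * x.mean()         # OLS intercept
        u_hat = y - b1 - b2 * x               # residuals, as in (8.3.3)
        d = np.sum(np.diff(u_hat) ** 2) / np.sum(u_hat ** 2)              # (8.3.4)
        rho_hat = (u_hat[1:] @ u_hat[:-1]) / (u_hat[:-1] @ u_hat[:-1])    # (8.3.6)
        return d, rho_hat

    # placeholder data with positively auto-correlated errors
    rng = np.random.default_rng(2)
    x = np.arange(30.0)
    y = 1.0 + 2.0 * x + np.cumsum(rng.normal(size=30)) * 0.5
    d, rho_hat = durbin_watson(y, x)
    print(f"d = {d:.3f},  2*(1 - rho_hat) = {2 * (1 - rho_hat):.3f}")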

4. Generalized least squares

When the random disturbances $u_t$ follow an AR(1) process, we have seen that the disturbances are correlated ((8.2.9) - (8.2.11)). This is a breach of the classical conditions underpinning the ordinary least squares method. Although OLS gives us unbiased and consistent estimators when the disturbances are auto-correlated, it is evidently possible to find better and more convenient estimation methods. Generalized least squares is such a method, which we will illustrate in this section by applying it to the regression model specified by (8.2.1) and (8.2.2). If we shift the time index one period backwards and multiply by $\rho$, the regression (8.2.1) is transformed to

(8.4.1) $\rho\,y_{t-1} = \rho\beta_1 + \rho\beta_2 x_{t-1} + \rho\,u_{t-1}$

Subtracting (8.4.1) from (8.2.1) we obtain

(8.4.2) $y_t - \rho y_{t-1} = \beta_1(1-\rho) + \beta_2(x_t - \rho x_{t-1}) + (u_t - \rho u_{t-1}) = \beta_1(1-\rho) + \beta_2(x_t - \rho x_{t-1}) + \varepsilon_t$

If we define the variables

(8.4.3) $y_t^{*} = y_t - \rho y_{t-1}, \qquad x_t^{*} = x_t - \rho x_{t-1}, \qquad t = 2, 3, \dots, T$

we can write (8.4.2) as

(8.4.4) $y_t^{*} = \beta_1(1-\rho) + \beta_2 x_t^{*} + \varepsilon_t, \qquad t = 2, 3, \dots, T$

If we know $\rho$, the variables $y_t^{*}$ and $x_t^{*}$ are observable and (8.4.4) turns out to be an ordinary linear regression with a disturbance $\varepsilon_t$ satisfying all the classical conditions. However, it follows from the definition of $y_t^{*}$ and $x_t^{*}$ that the time index has to run from $t = 2$, so by restricting our analysis to (8.4.4) we in effect lose one observation and hence some efficiency in the estimation. But the 'good' situation is easily recovered. For the first observation the regression satisfies

(8.4.5) $y_1 = \beta_1 + \beta_2 x_1 + u_1$

The variance of the disturbance $u_1$ is $\sigma_u^2 = \sigma_\varepsilon^2/(1-\rho^2)$, where, of course, $\sigma_\varepsilon^2$ is the variance of $\varepsilon_t$. Hence, if we multiply the first observation by $\sqrt{1-\rho^2}$, that is

(8.4.6) $\sqrt{1-\rho^2}\,y_1 = \beta_1\sqrt{1-\rho^2} + \beta_2\sqrt{1-\rho^2}\,x_1 + \sqrt{1-\rho^2}\,u_1$

we observe that the variance of the random error appearing in (8.4.6) is simply $\sigma_\varepsilon^2$. So if we supplement regression (8.4.4) with (8.4.6) as the first observation, we get an extended regression which uses all $T$ observations and has independent and homoskedastic disturbances. We observe that this extended regression now has two explanatory variables: the variable attached to $\beta_1$, whose vector of values is, of course, $(\sqrt{1-\rho^2},\, 1-\rho,\, \dots,\, 1-\rho)'$, and the transformed regressor with vector $(\sqrt{1-\rho^2}\,x_1,\, x_2 - \rho x_1,\, \dots,\, x_T - \rho x_{T-1})'$.

The method we have described above is in effect an application of generalized least squares. Generalized least squares will always involve some kind of transformation of the observable variables. Above we have tacitly assumed that the parameter $\rho$ is known; usually it is not. When $\rho$ is not known it has to be estimated in some way in order to be able to use the transformations above. An approach often used is to start by running the regression (8.2.1) and then calculate the residuals

(8.4.7) $\hat{u}_t = y_t - \hat{\beta}_1 - \hat{\beta}_2 x_t, \qquad t = 1, 2, \dots, T$

Then one can estimate $\rho$ by running the regression of $\hat{u}_t$ on $\hat{u}_{t-1}$ (of course without an intercept term). Having obtained an estimate $\hat{\rho}$, one proceeds as above.
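A compact sketch of this feasible GLS procedure (an illustration added to these notes, assuming numpy; the function name and all variable names are ours) might look as follows: step 1 runs OLS on (8.2.1) and forms the residuals (8.4.7), step 2 estimates $\rho$ by regressing $\hat{u}_t$ on $\hat{u}_{t-1}$ without an intercept, and step 3 transforms the data as in (8.4.3) and (8.4.6) and runs OLS on the transformed observations.

    import numpy as np

    def feasible_gls(y, x):
        T = len(y)
        # step 1: ordinary least squares on (8.2.1) and the residuals (8.4.7)
        X = np.column_stack([np.ones(T), x])
        b_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
        u_hat = y - X @ b_ols

        # step 2: regress u_hat_t on u_hat_{t-1} without an intercept
        rho_hat = (u_hat[1:] @ u_hat[:-1]) / (u_hat[:-1] @ u_hat[:-1])

        # step 3: transform the data, keeping the first observation as in (8.4.6)
        w = np.sqrt(1.0 - rho_hat**2)
        y_star = np.concatenate([[w * y[0]], y[1:] - rho_hat * y[:-1]])
        x_star = np.concatenate([[w * x[0]], x[1:] - rho_hat * x[:-1]])
        z = np.concatenate([[w], np.full(T - 1, 1.0 - rho_hat)])  # variable attached to beta1

        X_star = np.column_stack([z, x_star])
        b_gls, *_ = np.linalg.lstsq(X_star, y_star, rcond=None)
        return b_gls, rho_hat          # b_gls = (estimate of beta1, estimate of beta2)

In practice the steps are sometimes iterated by re-estimating $\rho$ from the new residuals, but a single pass corresponds to the procedure described in the text.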
