


4 Regressions with two explanatory variables.

This chapter contains the following sections:

(4.1) Introduction.

(4.2) The gross versus the partial effect of an explanatory variable.

(4.3) $R^2$ and adjusted $R^2$.

(4.4) Hypothesis testing in multiple regression.

4.1 Introduction

In the preceding sections we have studied in some detail regressions containing only one explanatory variable. In econometrics, with its multitude of dependencies, the simple regression can only be a showcase. As such it is important and powerful, since the methods applied to this case carry over to multiple regression with only small and obvious modifications. However, some important new problems arise, which we point out in this chapter.

As usual we start with the linear form:

(4.1.1) $Y_i = \beta_0 + \beta_1 X_{1i} + \beta_2 X_{2i} + u_i, \qquad i = 1, 2, \dots, N,$

where we again assume that the explanatory variables $X_{1i}$ and $X_{2i}$ are deterministic and also that the random disturbances $u_i$ have the standard properties:

(4.1.2) $E(u_i) = 0$ for all $i$

(4.1.3) $\operatorname{Var}(u_i) = \sigma^2$ for all $i$

(4.1.4) $\operatorname{Cov}(u_i, u_j) = 0$ for all $i \neq j$

The coefficient $\beta_0$ is the intercept coefficient, $\beta_1$ is the slope coefficient of $X_1$, showing the effect on $Y$ of a unit change in $X_1$, holding $X_2$ constant or controlling for $X_2$. Another phrase frequently used is that $\beta_1$ is the partial effect on $Y$ of $X_1$, holding $X_2$ fixed. The interpretation of $\beta_2$ is similar, except that $X_1$ and $X_2$ change roles.

Since the random disturbances are homoskedastic (i.e. they have the same variance), the scene is set for least squares regression. Hence, for arbitrary values of the structural parameters $\beta_0$, $\beta_1$ and $\beta_2$ the sum of squared residuals is given by:

(4.1.5) $S(\beta_0, \beta_1, \beta_2) = \sum_{i=1}^{N} (Y_i - \beta_0 - \beta_1 X_{1i} - \beta_2 X_{2i})^2$

Minimizing $S(\beta_0, \beta_1, \beta_2)$ gives the OLS estimators of $\beta_0$, $\beta_1$ and $\beta_2$. This is a simple optimization problem and we can state the OLS estimators directly:

(4.1.6) $\hat\beta_0 = \bar Y - \hat\beta_1 \bar X_1 - \hat\beta_2 \bar X_2$

(4.1.7) $\hat\beta_1 = \dfrac{S_{1Y} S_{22} - S_{2Y} S_{12}}{S_{11} S_{22} - S_{12}^2}$

(4.1.8) $\hat\beta_2 = \dfrac{S_{2Y} S_{11} - S_{1Y} S_{12}}{S_{11} S_{22} - S_{12}^2}$

(4.1.9) $\hat\sigma^2 = \dfrac{1}{N-3} \sum_{i=1}^{N} \hat u_i^2, \qquad \hat u_i = Y_i - \hat\beta_0 - \hat\beta_1 X_{1i} - \hat\beta_2 X_{2i}$

In these formulas we have used the common notation:

(4.1.10) $\bar Y = \dfrac{1}{N}\sum_{i=1}^{N} Y_i$ , $\bar X_j = \dfrac{1}{N}\sum_{i=1}^{N} X_{ji}, \quad j = 1, 2$

We have also used the useful formulas:

(4.1.11) $S_{jk} = \sum_{i=1}^{N} (X_{ji} - \bar X_j)(X_{ki} - \bar X_k) = \sum_{i=1}^{N} X_{ji} X_{ki} - N \bar X_j \bar X_k$

(4.1.12) $S_{jY} = \sum_{i=1}^{N} (X_{ji} - \bar X_j)(Y_i - \bar Y) = \sum_{i=1}^{N} X_{ji} Y_i - N \bar X_j \bar Y$

where $j, k = 1, 2$.

You should check for yourself that these formulas hold.
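To make these formulas concrete, here is a small Python sketch on simulated data (the numbers and variable names are purely illustrative and not taken from the text) that computes $\hat\beta_0$, $\hat\beta_1$ and $\hat\beta_2$ from the deviation sums $S_{11}$, $S_{22}$, $S_{12}$, $S_{1Y}$, $S_{2Y}$ and checks the result against a generic least-squares routine:

import numpy as np

rng = np.random.default_rng(0)
N = 200
X1 = rng.normal(size=N)
X2 = 0.5 * X1 + rng.normal(size=N)        # X1 and X2 are correlated
u = rng.normal(scale=0.7, size=N)
Y = 1.0 + 2.0 * X1 - 1.5 * X2 + u         # "true" beta0, beta1, beta2

x1, x2, y = X1 - X1.mean(), X2 - X2.mean(), Y - Y.mean()
S11, S22, S12 = (x1 * x1).sum(), (x2 * x2).sum(), (x1 * x2).sum()
S1Y, S2Y = (x1 * y).sum(), (x2 * y).sum()
D = S11 * S22 - S12**2

b1 = (S1Y * S22 - S2Y * S12) / D          # eq. (4.1.7)
b2 = (S2Y * S11 - S1Y * S12) / D          # eq. (4.1.8)
b0 = Y.mean() - b1 * X1.mean() - b2 * X2.mean()   # eq. (4.1.6)

# Cross-check with a generic least-squares solver
Xmat = np.column_stack([np.ones(N), X1, X2])
beta_ls, *_ = np.linalg.lstsq(Xmat, Y, rcond=None)
print(b0, b1, b2)
print(beta_ls)   # agrees with (b0, b1, b2) up to rounding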

You should also check for yourself that the OLS estimators $\hat\beta_0$, $\hat\beta_1$ and $\hat\beta_2$ are unbiased, i.e.

(4.1.13) $E(\hat\beta_0) = \beta_0$

(4.1.14) $E(\hat\beta_1) = \beta_1$

(4.1.15) $E(\hat\beta_2) = \beta_2$

The variances and covariances of these OLS estimators are readily deduced:

(4.1.16) $\operatorname{Var}(\hat\beta_1) = \dfrac{\sigma^2 S_{22}}{S_{11} S_{22} - S_{12}^2}$

(4.1.17) $\operatorname{Var}(\hat\beta_2) = \dfrac{\sigma^2 S_{11}}{S_{11} S_{22} - S_{12}^2}$

(4.1.18) $\operatorname{Cov}(\hat\beta_1, \hat\beta_2) = \dfrac{-\sigma^2 S_{12}}{S_{11} S_{22} - S_{12}^2}$

Using:

(4.1.19) $r_{12} = \dfrac{S_{12}}{\sqrt{S_{11} S_{22}}}$ (the sample correlation between $X_1$ and $X_2$),

the formulas for $\operatorname{Var}(\hat\beta_1)$ and $\operatorname{Var}(\hat\beta_2)$ can be rewritten:

(4.1.20) $\operatorname{Var}(\hat\beta_1) = \dfrac{\sigma^2}{S_{11}(1 - r_{12}^2)}$

(4.1.21) $\operatorname{Var}(\hat\beta_2) = \dfrac{\sigma^2}{S_{22}(1 - r_{12}^2)}$

Similar calculations show that:

(4.1.22) $\operatorname{Cov}(\hat\beta_1, \hat\beta_2) = \dfrac{-\sigma^2 r_{12}}{(1 - r_{12}^2)\sqrt{S_{11} S_{22}}}$

Evidently, many results derived in the simple regression have immediate extension to multiple regression. But some new points enter of which we have to be aware.
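As a further check, the following sketch (again on simulated data, with an assumed known $\sigma^2$) compares the variance and covariance formulas (4.1.20)-(4.1.22) with the corresponding entries of the matrix $\sigma^2 (X'X)^{-1}$; the matrix expression is the standard compact form of the same result and is used here only as a cross-check:

import numpy as np

rng = np.random.default_rng(1)
N, sigma2 = 500, 0.8**2
X1 = rng.normal(size=N)
X2 = 0.6 * X1 + rng.normal(size=N)

x1, x2 = X1 - X1.mean(), X2 - X2.mean()
S11, S22, S12 = (x1 * x1).sum(), (x2 * x2).sum(), (x1 * x2).sum()
r12 = S12 / np.sqrt(S11 * S22)            # eq. (4.1.19)

var_b1 = sigma2 / (S11 * (1 - r12**2))    # eq. (4.1.20)
var_b2 = sigma2 / (S22 * (1 - r12**2))    # eq. (4.1.21)
cov_b1b2 = -sigma2 * r12 / ((1 - r12**2) * np.sqrt(S11 * S22))  # eq. (4.1.22)

# Matrix form of the same variances and covariances: sigma^2 (X'X)^{-1}
X = np.column_stack([np.ones(N), X1, X2])
V = sigma2 * np.linalg.inv(X.T @ X)
print(var_b1, V[1, 1])      # these pairs agree
print(var_b2, V[2, 2])
print(cov_b1b2, V[1, 2])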

4.2 The gross versus the partial effect of an explanatory variable

In order to get insight into this problem, it is enough to start with the multiple regression (4.1.1). Hence, we specify:

(4.2.1) $Y_i = \beta_0 + \beta_1 X_{1i} + \beta_2 X_{2i} + u_i$

To facilitate the discussion we assume, for the time being, that also the explanatory variables $X_{1i}$ and $X_{2i}$ are random variables.

In this situation the above regression is by no means the only regression we might think of. For instance, we might also consider:

(4.2.2) $Y_i = \alpha_0 + \alpha_1 X_{1i} + v_i$

or

(4.2.3) $Y_i = \gamma_0 + \gamma_2 X_{2i} + w_i$

where $v_i$ and $w_i$ are the random disturbances in the regressions of $Y$ on $X_1$ and on $X_2$, respectively.

In the regressions (4.2.1) and (4.2.2), $\beta_1$ and $\alpha_1$ both show the impact of $X_1$ on $Y$, but obviously $\beta_1$ and $\alpha_1$ can be quite different. In (4.2.1), $\beta_1$ shows the effect of $X_1$ when we control for $X_2$. Hence, $\beta_1$ shows the partial or net effect of $X_1$ on $Y$. In contrast, $\alpha_1$ shows the effect of $X_1$ when we do not control for $X_2$. Intuitively, when $X_2$ is excluded from the regression, $X_1$ will absorb some of the effect of $X_2$ on $Y$, since there will normally be a correlation between $X_1$ and $X_2$. Therefore, we call $\alpha_1$ the gross effect of $X_1$. Of course, similar arguments apply to the comparison of $\beta_2$ and $\gamma_2$ in the regressions (4.2.1) and (4.2.3). As econometricians we often wish to evaluate the effect of an explanatory variable on the dependent variable. Since the gross and partial effects can be quite different, we understand that we have to be cautious and not too categorical when we interpret the regression parameters. At the same time it is evident that this is a serious problem in any statistical application in the social sciences.

A comprehensive Danish investigation studied the relation between mortality and jogging. It compared the mortality of two groups: one consisted of regular joggers, the other of people who did not jog. The researchers found that the mortality rate in the group of regular joggers was considerably lower than in the group of non-joggers. This result may be reasonable and expected, but was the picture that simple? A closer study showed that the joggers were better educated, smoked less, and almost nobody in this group had weight problems. Could these factors help to explain the lower mortality rate of the joggers? A further study of this sample showed that although these factors had systematic influences on the mortality rate, the jogging activity still reduced the mortality rate.

In econometrics, or in the social sciences in general, similar considerations relate to almost any applied work. Therefore, we should like to shed some specific light on the relations between the gross and partial effects. Intuitively, we understand that the root of this problem is the correlation between the explanatory variables, in our model the correlation between $X_1$ and $X_2$. So when $X_2$ is excluded from the regression, some of the influence of $X_2$ on $Y$ is captured by $X_1$.

In order to be concise, let us assume that

(4.2.4) $X_{2i} = \delta_0 + \delta_1 X_{1i} + e_i$

where $e_i$ denotes the disturbance term in this regression. ((4.2.4) shows why it is convenient to assume that $X_{1i}$ and $X_{2i}$ are random variables in this illustration.)

Using (4.2.4) to substitute for $X_{2i}$ in (4.2.1) we obtain:

(4.2.5) $Y_i = (\beta_0 + \beta_2 \delta_0) + (\beta_1 + \beta_2 \delta_1) X_{1i} + (u_i + \beta_2 e_i)$

Comparing this with equation (4.2.2) we obtain:

(4.2.6) $v_i = u_i + \beta_2 e_i$

(4.2.7) $\alpha_0 = \beta_0 + \beta_2 \delta_0$

(4.2.8) $\alpha_1 = \beta_1 + \beta_2 \delta_1$

We observe immediately that if there is no linear relation between $X_1$ and $X_2$, then the gross effect $\alpha_1$ coincides with the partial (net) effect $\beta_1$, since in this case $\delta_1 = 0$ (see (4.2.8)).

The intercept term $\alpha_0$ will be a mixture of the intercepts $\beta_0$ and $\delta_0$ and the partial effect of $X_2$, namely $\beta_2$.

Equations (4.2.7)-(4.2.8) show the relations between the structural parameters; we still have to show that the OLS estimators confirm these relations. However, they do!
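Before the algebraic verification, a small simulated illustration may be helpful (a sketch only; the data-generating numbers are made up). It shows that in any sample the OLS estimates satisfy $\hat\alpha_1 = \hat\beta_1 + \hat\beta_2 \hat\delta_1$ and $\hat\alpha_0 = \hat\beta_0 + \hat\beta_2 \hat\delta_0$ exactly:

import numpy as np

rng = np.random.default_rng(2)
N = 300
X1 = rng.normal(size=N)
X2 = 1.0 + 0.8 * X1 + rng.normal(size=N)   # X2 depends on X1, as in (4.2.4)
Y = 2.0 + 1.5 * X1 - 1.0 * X2 + rng.normal(size=N)

def ols(y, *regressors):
    """Return the OLS coefficients (intercept first)."""
    X = np.column_stack([np.ones(len(y)), *regressors])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

b0, b1, b2 = ols(Y, X1, X2)        # multiple regression (4.2.1): partial effects
a0, a1 = ols(Y, X1)                # short regression (4.2.2): gross effect of X1
d0, d1 = ols(X2, X1)               # auxiliary regression (4.2.4)

print(a1, b1 + b2 * d1)            # identical up to rounding, cf. (4.2.16)
print(a0, b0 + b2 * d0)            # identical up to rounding, cf. (4.2.17)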

Let us verify this fact for $\alpha_1$ (see (4.2.8)). We know from (4.1.7) and (4.1.8) that:

(4.2.9) $\hat\beta_1 = \dfrac{S_{1Y} S_{22} - S_{2Y} S_{12}}{S_{11} S_{22} - S_{12}^2}$

(4.2.10) $\hat\beta_2 = \dfrac{S_{2Y} S_{11} - S_{1Y} S_{12}}{S_{11} S_{22} - S_{12}^2}$

where, as before, $S_{jk} = \sum_i x_{ji} x_{ki}$ and $S_{jY} = \sum_i x_{ji} y_i$, with $x_{ji} = X_{ji} - \bar X_j$ and $y_i = Y_i - \bar Y$.

From the results for the simple regression in chapter 2 we realize that:

(4.2.11) $\hat\alpha_0 = \bar Y - \hat\alpha_1 \bar X_1$

(4.2.12) $\hat\alpha_1 = \dfrac{S_{1Y}}{S_{11}}$

where $\hat\alpha_0$ and $\hat\alpha_1$ are obtained by regressing $Y$ on $X_1$ as in (4.2.2).

Similarly, by regressing $X_2$ on $X_1$ as in (4.2.4) we obtain:

(4.2.13) $\hat\delta_0 = \bar X_2 - \hat\delta_1 \bar X_1$

(4.2.14) $\hat\delta_1 = \dfrac{S_{12}}{S_{11}}$

Piecing the various equations together we obtain:

(4.2.15) $\hat\beta_1 + \hat\beta_2 \hat\delta_1 = \dfrac{S_{1Y} S_{22} - S_{2Y} S_{12}}{S_{11} S_{22} - S_{12}^2} + \dfrac{S_{2Y} S_{11} - S_{1Y} S_{12}}{S_{11} S_{22} - S_{12}^2} \cdot \dfrac{S_{12}}{S_{11}}$

$= \dfrac{S_{1Y}(S_{11} S_{22} - S_{12}^2)}{S_{11}(S_{11} S_{22} - S_{12}^2)} = \dfrac{S_{1Y}}{S_{11}} = \hat\alpha_1$

Thus, we have confirmed that:

(4.2.16) $\hat\alpha_1 = \hat\beta_1 + \hat\beta_2 \hat\delta_1$

so that the OLS estimators satisfy (4.2.8). In a similar way we can show that:

(4.2.17) $\hat\alpha_0 = \hat\beta_0 + \hat\beta_2 \hat\delta_0$

verifying (4.2.7).

We also observe that $S_{12} = 0$ implies that $\hat\delta_1 = 0$ (see (4.2.14)). By (4.2.16), in this case we have that $\hat\alpha_1 = \hat\beta_1$.

Therefore, if $X_1$ and $X_2$ are uncorrelated, then the gross and partial effects of $X_1$ coincide. The specification issue treated in this section is important and interesting, but at the same time challenging, capable of eroding any econometric specification. Many textbooks treat it under the heading "omitted variable bias". In my opinion, treating this as a bias problem is not the proper approach. In order to substantiate this view, we take (4.1.1) as the starting point, but now assume that $X_{1i}$ and $X_{2i}$ are random. Excluding details, we simply assume that the conditional expectation of $Y_i$ can be written:

(4.2.18) $E(Y_i \mid X_{1i}, X_{2i}) = \beta_0 + \beta_1 X_{1i} + \beta_2 X_{2i}$

If our model is incomplete in that $X_2$ has not been included in the specification, then evidently the conditional expectation $E(Y_i \mid X_{1i})$ can be written:

(4.2.19) $E(Y_i \mid X_{1i}) = \beta_0 + \beta_1 X_{1i} + \beta_2 E(X_{2i} \mid X_{1i})$

In general, $E(X_{2i} \mid X_{1i})$ can be an arbitrary function of $X_{1i}$. However, if we stick to the linearity assumption by assuming:

(4.2.20) $E(X_{2i} \mid X_{1i}) = \delta_0 + \delta_1 X_{1i}$

equation (4.2.19) will lead to the regression function:

(4.2.21) $E(Y_i \mid X_{1i}) = \alpha_0 + \alpha_1 X_{1i}$

where $\alpha_0$ and $\alpha_1$ are expressed by (4.2.7) and (4.2.8).

The point of this lesson is that (4.2.18) and (4.2.21) are simply two different regression functions, each perfectly legitimate in its own right. To say that $\alpha_1$ is in any respect biased is simply a misuse of language.

4.3 $R^2$ and the adjusted $R^2$

In section (2.3) we defined the coefficient of determination $R^2$. We remember:

(4.3.1) $R^2 = \dfrac{ESS}{TSS} = 1 - \dfrac{SSR}{TSS}$

where the explained sum of squares is $ESS = \sum_{i=1}^{N} (\hat Y_i - \bar Y)^2$, the total sum of squares is $TSS = \sum_{i=1}^{N} (Y_i - \bar Y)^2$, and the sum of squared residuals is $SSR = \sum_{i=1}^{N} \hat u_i^2 = \sum_{i=1}^{N} (Y_i - \hat Y_i)^2$.

Since $R^2$ never decreases when a new variable is added to a regression, an increase in $R^2$ does not imply that adding a new variable actually improves the fit of the model. In this sense $R^2$ gives an inflated estimate of how well the regression fits the data. One way to correct for this is to deflate $R^2$ by a certain factor. The outcome is the so-called adjusted $R^2$, denoted $\bar R^2$.

The $\bar R^2$ is a modified version of $R^2$ that does not necessarily increase when a new variable is added to the regression equation. $\bar R^2$ is defined by:

(4.3.2) $\bar R^2 = 1 - \dfrac{N-1}{N-k-1} \cdot \dfrac{SSR}{TSS}$

N – the number of observations

k – the number of explanatory variables

There are a few things to be noted about $\bar R^2$. First, the ratio $(N-1)/(N-k-1)$ is always larger than 1, so that $\bar R^2$ is always less than $R^2$. Secondly, adding a new variable has two opposite effects on $\bar R^2$: on the one hand the SSR falls, which increases $\bar R^2$; on the other hand the factor $(N-1)/(N-k-1)$ increases. Whether $\bar R^2$ increases or decreases depends on which of the two effects is stronger. Thirdly, an increase in $\bar R^2$ does not necessarily mean that the coefficient of the added variable is statistically significant. To find out whether an added variable is statistically significant, we have to perform a statistical test, for example a t-test. Finally, a high $\bar R^2$ does not necessarily mean that we have specified the most appropriate set of explanatory variables. Specifying econometric models is difficult. We face observability and data problems at every turn, but, in general, we ought to remember that the specified model should have a sound basis in economic theory.
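As a hedged numerical illustration of (4.3.1) and (4.3.2), the following sketch computes $R^2$ and $\bar R^2$ from the residuals of an OLS fit on simulated data (all names and numbers are illustrative only):

import numpy as np

rng = np.random.default_rng(3)
N, k = 100, 2
X1 = rng.normal(size=N)
X2 = rng.normal(size=N)
Y = 0.5 + 1.0 * X1 + 0.3 * X2 + rng.normal(size=N)

X = np.column_stack([np.ones(N), X1, X2])
beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
resid = Y - X @ beta

SSR = (resid**2).sum()                           # sum of squared residuals
TSS = ((Y - Y.mean())**2).sum()                  # total sum of squares
R2 = 1 - SSR / TSS                               # eq. (4.3.1)
R2_adj = 1 - (N - 1) / (N - k - 1) * SSR / TSS   # eq. (4.3.2)
print(R2, R2_adj)                                # R2_adj is always below R2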

4.4 Hypothesis testing in multiple regression

We have seen above that adding a second explanatory variable to a regression did not demand any new principles as regards estimation. OLS estimators could be derived by an immediate extension of the "one explanatory variable" case. Much the same can be said about hypothesis testing. We can therefore just as well start with a multiple regression containing k explanatory variables. Hence we specify:

(4.4.1) $Y_i = \beta_0 + \beta_1 X_{1i} + \beta_2 X_{2i} + \dots + \beta_k X_{ki} + u_i, \qquad i = 1, 2, \dots, N$

where, as usual, $u_i$ denotes the random disturbances.

Suppose we wish to test a simple hypothesis on one of the slope coefficients, for example $\beta_1$. Hence, suppose we wish to test:

(4.4.2) $H_0: \beta_1 = \beta_1^0$ against $H_1: \beta_1 \neq \beta_1^0$ (or one of the one-sided alternatives $H_1: \beta_1 > \beta_1^0$ or $H_1: \beta_1 < \beta_1^0$)

In chapter 3.1 we showed in detail the relevant procedures for testing $H_0$ against these alternatives in the simple regression. Similar procedures can be applied in this case. We start with the test statistic:

(4.4.3) $T = \dfrac{\hat\beta_1 - \beta_1^0}{SE(\hat\beta_1)}$, which is t-distributed with $(N-k-1)$ degrees of freedom when $H_0$ is true.

In (4.4.3), $\hat\beta_1$ denotes the OLS estimator and $SE(\hat\beta_1)$ is an estimator of the standard deviation of $\hat\beta_1$.

In the general case (4.4.1) we only have to remember that in order to get an unbiased estimator of $\sigma^2$ (the variance of the disturbances) we have to divide by $(N-k-1)$. Note that in the simple regression $k = 1$, so that $(N-k-1)$ reduces to $(N-2)$. By similar arguments we deduce that the test statistic T given by (4.4.3) has a t-distribution with $(N-k-1)$ degrees of freedom when the null hypothesis is true. With this modification we can follow the procedures described in chapter (3.1). Hence, we can apply the simple t-tests, but we have to choose the appropriate number of degrees of freedom in the t-distribution; remember that fact!
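A minimal sketch of the t-test (4.4.3), assuming simulated data and the null hypothesis value $\beta_1^0 = 0$; the standard errors are taken from the estimated covariance matrix $\hat\sigma^2 (X'X)^{-1}$ with $\hat\sigma^2 = SSR/(N-k-1)$:

import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
N, k = 120, 2
X1 = rng.normal(size=N)
X2 = rng.normal(size=N)
Y = 1.0 + 0.8 * X1 + 0.5 * X2 + rng.normal(size=N)

X = np.column_stack([np.ones(N), X1, X2])
beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
resid = Y - X @ beta
sigma2_hat = (resid**2).sum() / (N - k - 1)        # unbiased estimator of sigma^2
cov_beta = sigma2_hat * np.linalg.inv(X.T @ X)     # estimated covariance matrix
se = np.sqrt(np.diag(cov_beta))

T = (beta[1] - 0.0) / se[1]                        # test statistic (4.4.3), beta1^0 = 0
p_value = 2 * stats.t.sf(abs(T), df=N - k - 1)     # two-sided P-value
print(T, p_value)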

The t-tests are not restricted to testing simple hypotheses on the intercept or the various slope parameters; t-tests can also be used to test hypotheses involving linear combinations of the regression coefficients.

For instance, if we wish to test the null hypothesis:

(4.4.4) $H_0: \beta_1 = \beta_2$ against $H_1: \beta_1 \neq \beta_2$

We realize that this can be done with an ordinary t-test. The point is that these hypotheses are equivalent to the hypotheses:

(4.4.5) $H_0': \beta_1 - \beta_2 = 0$ against $H_1': \beta_1 - \beta_2 \neq 0$

So that if we reject $H_0'$ we should also reject $H_0$, etc.

In order to test $H_0$ against $H_1$, we use the test statistic:

(4.4.6) $T = \dfrac{\hat\beta_1 - \hat\beta_2}{SE(\hat\beta_1 - \hat\beta_2)}$

When $H_0$ is true, T will have a t-distribution with $(N-k-1)$ degrees of freedom. Note that $SE(\hat\beta_1 - \hat\beta_2)$ can be estimated by the formulas:

(4.4.7) $SE(\hat\beta_1 - \hat\beta_2) = \sqrt{\widehat{\operatorname{Var}}(\hat\beta_1 - \hat\beta_2)}$

where

(4.4.8) $\widehat{\operatorname{Var}}(\hat\beta_1 - \hat\beta_2) = \widehat{\operatorname{Var}}(\hat\beta_1) + \widehat{\operatorname{Var}}(\hat\beta_2) - 2\,\widehat{\operatorname{Cov}}(\hat\beta_1, \hat\beta_2)$

When estimates of $\operatorname{Var}(\hat\beta_1)$, $\operatorname{Var}(\hat\beta_2)$ and $\operatorname{Cov}(\hat\beta_1, \hat\beta_2)$ are available, we can easily compute the value of the test statistic T. After that we continue as with the usual t-tests.
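Continuing in the same spirit, here is a sketch of the test of $H_0: \beta_1 = \beta_2$ based on (4.4.6)-(4.4.8); the required variances and the covariance are read off the estimated covariance matrix of the OLS estimators (simulated data, purely illustrative):

import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
N, k = 150, 2
X1 = rng.normal(size=N)
X2 = rng.normal(size=N)
Y = 0.5 + 1.2 * X1 + 1.0 * X2 + rng.normal(size=N)

X = np.column_stack([np.ones(N), X1, X2])
beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
resid = Y - X @ beta
cov_beta = (resid**2).sum() / (N - k - 1) * np.linalg.inv(X.T @ X)

# Var(b1 - b2) = Var(b1) + Var(b2) - 2 Cov(b1, b2), cf. (4.4.8)
var_diff = cov_beta[1, 1] + cov_beta[2, 2] - 2 * cov_beta[1, 2]
T = (beta[1] - beta[2]) / np.sqrt(var_diff)        # test statistic (4.4.6)
p_value = 2 * stats.t.sf(abs(T), df=N - k - 1)     # two-sided P-value
print(T, p_value)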

Although the t-tests are not solely restricted to simple situations, we will quickly face test situations that these tests cannot handle. As an example we consider a model from labour market economics: suppose that wages $W$ depend on workers' education $EDUC$ and experience $EXPER$. In order to investigate the dependency of $W$ on $EDUC$ and $EXPER$, we specify the regression:

(4.4.9) $W_i = \beta_0 + \beta_1 EDUC_i + \beta_2 EXPER_i + \beta_3 EXPER_i^2 + u_i$

where $u_i$ denotes the usual disturbance term.

Note that the presence of the quadratic term $EXPER_i^2$ does not create problems for estimating the regression coefficients. It is hardly even a minor hitch. We only have to define the new variable:

(4.4.10) $X_{3i} = EXPER_i^2$

The regression (4.4.9) becomes:

(4.4.11) $W_i = \beta_0 + \beta_1 EDUC_i + \beta_2 EXPER_i + \beta_3 X_{3i} + u_i$

Suppose now that we are uncertain whether workers' experience $EXPER$ has any effect on the wages $W$. In order to settle this issue we have to test a joint null hypothesis, namely:

(4.4.12) $H_0: \beta_2 = 0$ and $\beta_3 = 0$ against $H_1$: at least one of $\beta_2$, $\beta_3$ is different from zero

In this case the null hypothesis restricts the values of two of the coefficients, so as a matter of terminology we can say that the null hypothesis in (4.4.12) imposes two restrictions on the multiple regression model, namely $\beta_2 = 0$ and $\beta_3 = 0$. In general, a joint hypothesis is a hypothesis which imposes two or more restrictions on the regression coefficients.

It might be tempting to think that we could test the joint hypothesis (4.4.12) by using the usual t-statistics to test the restrictions one at a time. But this testing procedure would be very unreliable. Luckily, there exist test procedures which can handle joint hypotheses on the regression coefficients.

So, how can we proceed to test the joint hypothesis (4.4.12)? If the null hypothesis is true, the regression (4.4.11) becomes:

(4.4.13) $W_i = \beta_0 + \beta_1 EDUC_i + u_i$

Obviously, we have to investigate two regressions, the one given by (4.4.11) and the other given by (4.4.13). Since there are no restrictions on (4.4.11), it is called the unrestricted form, while (4.4.13) is called the restricted form of the regression. It is very natural to base a test of the joint null hypothesis (4.4.12) on the sums of squared residuals resulting from these two regressions. If $SSR_R$ denotes the sum of squared residuals obtained from (4.4.13) and $SSR_U$ denotes that obtained from (4.4.11), we will be doubtful about the truth of $H_0$ if $SSR_R$ is considerably greater than $SSR_U$. If $SSR_R$ is only slightly larger than $SSR_U$, there is no reason to be doubtful about $H_0$.

Since $SSR_R$ stems from the restricted regression (4.4.13), we obviously have:

(4.4.14) $SSR_R \geq SSR_U$

In order to test joint hypotheses on the regression coefficients the standard approach is to use a so-called F-test. In our present example this test is very intuitive. In the general case it is based on the test-statistic:

(4.4.15) $F = \dfrac{(SSR_R - SSR_U)/r}{SSR_U/(N-k-1)}$

where r denotes the number of restrictions and k denotes the number of explanatory variables.

In our example above: r =2 and k =3.

If $H_0$ is true, then F has a so-called Fisher distribution with $(r, N-k-1)$ degrees of freedom. The numerator has r degrees of freedom, and the denominator $(N-k-1)$.

From (4.4.14) it is obvious that the test statistic F is concentrated on the positive axis. Small values of F indicate that $H_0$ is compatible with the sample data.
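The following sketch carries out the F-test (4.4.15) for a joint hypothesis of the type (4.4.12) on simulated data; the wage-style variable names are my own illustration and have nothing to do with the numerical results reported further below:

import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
N = 200
educ = rng.normal(12, 2, size=N)
exper = rng.uniform(0, 30, size=N)
wage = 5.0 + 0.6 * educ + 0.05 * exper - 0.001 * exper**2 + rng.normal(size=N)

def ssr(y, X):
    """Sum of squared residuals of an OLS fit (intercept included in X)."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return (resid**2).sum()

X_u = np.column_stack([np.ones(N), educ, exper, exper**2])   # unrestricted (4.4.11), k = 3
X_r = np.column_stack([np.ones(N), educ])                    # restricted (4.4.13)
SSR_U, SSR_R = ssr(wage, X_u), ssr(wage, X_r)

r, k = 2, 3
F = ((SSR_R - SSR_U) / r) / (SSR_U / (N - k - 1))            # eq. (4.4.15)
p_value = stats.f.sf(F, r, N - k - 1)                        # P(F >= F_obs) under H0
print(F, p_value)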

The principal approach to testing a joint null hypothesis $H_0$ against an alternative $H_1$ proceeds as usual:

i) Choose a suitable test statistic.

ii) Choose a level of significance $\alpha$.

iii) When $H_0$ is true, the test statistic F will have a Fisher distribution with r and $(N-k-1)$ degrees of freedom.

iv) The critical value $c_\alpha$ in this distribution is determined from the equation:

(4.4.16) $P(F \geq c_\alpha) = \alpha$

v) When the regressions have been performed, we compute the observed value $F_{obs}$ of the test statistic F.

vi) Decision rule: (a) Reject $H_0$ if $F_{obs} > c_\alpha$. (b) Do not reject $H_0$ if $F_{obs} \leq c_\alpha$.

Figure (4.4.1)

(4.4.17) $F_{obs} > c_\alpha$: reject $H_0$

(4.4.18) $F_{obs} \leq c_\alpha$: do not reject $H_0$

Applying this test to our null hypothesis (4.4.12) gave the following results:

(4.4.19) [pic]

[pic]

(4.4.20) [pic]

[pic]

[pic]

(4.4.21) [pic]

[pic]

The critical value $c_\alpha$ is determined from:

(4.4.22) [pic]

Tables of the F-distribution show that [pic]

Since $F_{obs} \leq c_\alpha$, there is no reason to reject $H_0$. Workers' experience does not seem to have an impact on workers' wages in this sample. We can also compute the P-value for this test in the same way as we learned above. We observe:

(4.4.23) [pic]

A joint hypothesis which might at times interest us all is to find out whether the explanatory variables have any impact at all on the dependent variable. Thus, referring to (4.4.1), we wish to test the null hypothesis:

(4.4.24) $H_0: \beta_1 = \beta_2 = \dots = \beta_k = 0$

against $H_1$: at least one $\beta_j \neq 0$, $j = 1, \dots, k$.

$SSR_U$ is computed from the unrestricted regression (4.4.1), while $SSR_R$ is computed from the restricted regression:

(4.4.25) $Y_i = \beta_0 + u_i$

We observe immediately that in the restricted model $\beta_0$ is estimated by:

(4.4.26) $\hat\beta_0 = \bar Y$ (implying $\hat Y_i = \bar Y$)

So that:

(4.4.27) $SSR_R = \sum_{i=1}^{N} (Y_i - \bar Y)^2 = TSS$

The numerator in the test statistic F becomes (note that the number of restrictions here is $r = k$):

(4.4.28) $\dfrac{SSR_R - SSR_U}{r} = \dfrac{TSS - SSR_U}{k} = \dfrac{ESS_U}{k}$

Hence, the test statistic F reduces to:

(4.4.29) $F = \dfrac{ESS_U/k}{SSR_U/(N-k-1)}$

$= \dfrac{R^2/k}{(1-R^2)/(N-k-1)}$

(Using the deduction in section (4.3)).
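A short sketch confirming numerically that the $R^2$-form in (4.4.29) coincides with the general formula (4.4.15) when the null hypothesis sets all slope coefficients to zero (simulated data, illustrative only):

import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
N, k = 80, 3
X = rng.normal(size=(N, k))
Y = 1.0 + X @ np.array([0.4, 0.0, -0.3]) + rng.normal(size=N)

Xmat = np.column_stack([np.ones(N), X])
beta, *_ = np.linalg.lstsq(Xmat, Y, rcond=None)
resid = Y - Xmat @ beta

SSR_U = (resid**2).sum()
TSS = ((Y - Y.mean())**2).sum()          # equals SSR_R, see (4.4.27)
R2 = 1 - SSR_U / TSS

F_ssr = ((TSS - SSR_U) / k) / (SSR_U / (N - k - 1))      # eq. (4.4.15) with r = k
F_r2 = (R2 / k) / ((1 - R2) / (N - k - 1))               # eq. (4.4.29)
print(F_ssr, F_r2)                                       # identical up to rounding
print(stats.f.sf(F_r2, k, N - k - 1))                    # P-value of the overall test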
