Dummy Dependent Variables Models



Adnan Kasman

Dept. of Economics,

Faculty of Business,

Dokuz Eylul University

Course: Econometrics II

Dummy Dependent Variables Models

In this chapter we introduce models that are designed to deal with situations in which our dependent variable is a dummy variable. That is, it assumes either the value 0 or the value 1. Such models are very useful in that they allow us to address questions for which there is a “yes or no” answer.

1. Linear Probability Model

In the case of the dummy dependent variable model we have:

$$Y_i = \beta_1 + \beta_2 X_i + u_i$$

where $Y_i = 0$ or $1$ and $E(u_i) = 0$.

What would happen if we simply estimated the slope coefficients of this model using OLS? What would the coefficients mean? Would they be unbiased? Are they efficient?

A regression model in the situation where the dependent variable takes on the two values 0 or 1 is called a linear probability model. To see its properties note the following.

a) Since the mean error is zero, we know that $E(Y_i) = \beta_1 + \beta_2 X_i$.

b) Now, if we define $P_i = \Pr(Y_i = 1)$ and $1 - P_i = \Pr(Y_i = 0)$, then $E(Y_i) = 1 \cdot P_i + 0 \cdot (1 - P_i) = P_i$. Therefore, our model is $P_i = \beta_1 + \beta_2 X_i$, and the estimated slope coefficient tells us the impact of a unit change in that explanatory variable on the probability that $Y_i = 1$.

c) The predicted values from the regression model, $\hat{Y}_i = \hat{\beta}_1 + \hat{\beta}_2 X_i$, provide predictions, based on chosen values of the explanatory variables, of the probability that $Y_i = 1$. There is, however, nothing in the estimation strategy that constrains the resulting predictions to lie between 0 and 1, clearly an unfortunate characteristic of the approach.

d) Since $E(u_i) = 0$ and the error is uncorrelated with the explanatory variables (by assumption), it is easy to show that the OLS estimators are unbiased. The errors, however, are heteroscedastic: because $Y_i$ is binary, $\operatorname{var}(u_i) = P_i(1 - P_i)$, which varies with $X_i$. A simple way to see this is to consider an example. Suppose that the dependent variable takes the value 1 if the individual buys a Rolex watch and 0 otherwise, and suppose the explanatory variable is income. For low levels of income it is likely that all of the observations are zeros; in this case, there is no scatter around the line. For higher levels of income there are some zeros and some ones, that is, some scatter around the line. Thus, the errors are heteroscedastic. This suggests two empirical strategies. First, since the OLS estimators are unbiased but yield incorrect standard errors, we might simply use OLS and then apply the White correction to produce correct standard errors. Second, we might use weighted least squares, weighting each observation by $1/\sqrt{\hat{P}_i(1 - \hat{P}_i)}$.
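The first strategy can be sketched with simulated data. The sample size, income range, and coefficient below are illustrative assumptions, not values from the text; the block estimates the linear probability model by OLS and computes White (HC0) robust standard errors directly from the formula:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
income = rng.uniform(10, 100, n)                     # simulated income, illustration only
y = rng.binomial(1, np.clip(0.01 * income, 0, 1))    # binary dependent variable

# OLS on the linear probability model y = b1 + b2*income + u
X = np.column_stack([np.ones(n), income])
b = np.linalg.solve(X.T @ X, X.T @ y)                # unbiased OLS estimates
fitted = X @ b                                       # predicted "probabilities"
resid = y - fitted

# White (HC0) heteroscedasticity-robust covariance:
# (X'X)^{-1} X' diag(e_i^2) X (X'X)^{-1}
XtX_inv = np.linalg.inv(X.T @ X)
meat = X.T @ (X * resid[:, None] ** 2)
robust_cov = XtX_inv @ meat @ XtX_inv
robust_se = np.sqrt(np.diag(robust_cov))             # corrected standard errors
```

Note that `fitted` is not constrained to lie in [0, 1], which is exactly the defect of the linear probability model discussed above.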

2. Logit and Probit Models

One potential criticism of the linear probability model (beyond those mentioned above) is that the model assumes that the probability that [pic] is linearly related to the explanatory variable(s). We might, however, expect the relation to be nonlinear. For example, increasing the income of the very poor or the very rich will probably have little effect on whether they buy an automobile. It could, however, have a nonzero effect on other income groups.

Two models that are nonlinear, yet provide predicted probabilities between 0 and 1, are the logit and probit models. The difference between the linear probability model and the nonlinear logit and probit models can be explained using an example. To motivate these models, suppose that our underlying dummy dependent variable depends on an unobserved ("latent") utility index $y_i^*$. For example, if the variable $y_i$ is discrete, taking the value 1 if someone buys a car and 0 otherwise, then we can imagine a continuous variable $y_i^*$ that reflects the person's desire to buy the car. It seems reasonable that $y_i^*$ would vary continuously with some explanatory variable like income. More formally, suppose

$$y_i^* = \beta_1 + \beta_2 X_i + u_i$$

and

$y_i = 1$ if $y_i^* > 0$ (i.e., the utility index is "high enough")

$y_i = 0$ if $y_i^* \le 0$ (i.e., the utility index is not "high enough")

Then:

$$\Pr(y_i = 1) = \Pr(y_i^* > 0) = \Pr(u_i > -\beta_1 - \beta_2 X_i) = 1 - F(-\beta_1 - \beta_2 X_i) = F(\beta_1 + \beta_2 X_i)$$

where the last equality uses the symmetry of the distribution of $u_i$.

Given this, our basic problem is selecting F, the cumulative distribution function of the error term. It is here that the logit and probit models differ. As a practical matter, we are likely interested in estimating the $\beta$'s in the model. This is typically done using a Maximum Likelihood Estimator (MLE). To outline the MLE in this context, recognize that each outcome $y_i$ has the density function $f(y_i) = P_i^{y_i}(1 - P_i)^{1 - y_i}$. That is, each $y_i$ takes the value 1 with probability $P_i$ and the value 0 with probability $1 - P_i$. Then the likelihood function is:

$$L = \prod_{i=1}^{n} P_i^{y_i}(1 - P_i)^{1 - y_i}$$

and

$$\ln L = \sum_{i=1}^{n}\left[y_i \ln P_i + (1 - y_i)\ln(1 - P_i)\right]$$

which, given $P_i = F(\beta_1 + \beta_2 X_i)$, becomes

$$\ln L = \sum_{i=1}^{n}\left\{y_i \ln F(\beta_1 + \beta_2 X_i) + (1 - y_i)\ln\left[1 - F(\beta_1 + \beta_2 X_i)\right]\right\}$$

Analytically, the next step would be to take the partial derivatives of the log-likelihood function with respect to the $\beta$'s, set them equal to zero, and solve for the MLEs. This can be a very messy calculation, depending on the functional form of F. In practice, the computer solves this problem for us.
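As a sketch of what the computer actually does, the log-likelihood above can be maximized by Newton-Raphson iteration. The data below are simulated and the "true" coefficients are chosen only for illustration; F is taken to be the logistic c.d.f.:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
x = rng.normal(0.0, 1.0, n)
beta_true = np.array([0.5, 1.2])                 # chosen for illustration
p = 1 / (1 + np.exp(-(beta_true[0] + beta_true[1] * x)))
y = rng.binomial(1, p)                           # simulated binary outcomes

X = np.column_stack([np.ones(n), x])
beta = np.zeros(2)
# Newton-Raphson: beta <- beta + (X'WX)^{-1} X'(y - P),
# which sets the score d lnL / d beta to zero iteratively
for _ in range(25):
    P = 1 / (1 + np.exp(-(X @ beta)))            # fitted probabilities F(z)
    grad = X.T @ (y - P)                         # score vector
    W = P * (1 - P)                              # logit weights
    hess = X.T @ (X * W[:, None])                # negative Hessian of lnL
    beta = beta + np.linalg.solve(hess, grad)
beta_hat = beta                                  # maximum likelihood estimates
```

With enough observations, `beta_hat` should lie close to `beta_true`, reflecting the consistency of the MLE.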

2.1. Logit Model

For the logit model we specify

$$P_i = F(\beta_1 + \beta_2 X_i) = \frac{1}{1 + e^{-(\beta_1 + \beta_2 X_i)}}$$

It can be seen that $P_i \rightarrow 1$ as $\beta_1 + \beta_2 X_i \rightarrow \infty$, and $P_i \rightarrow 0$ as $\beta_1 + \beta_2 X_i \rightarrow -\infty$. Thus, unlike the linear probability model, probabilities from the logit will lie between 0 and 1. A complication arises in interpreting the estimated $\beta$'s. In the case of a linear probability model, a coefficient measures the ceteris paribus effect of a change in the explanatory variable on the probability that $y$ equals 1. In the logit model we can see that

$$\frac{\partial P_i}{\partial X_i} = \frac{\beta_2\, e^{-(\beta_1 + \beta_2 X_i)}}{\left[1 + e^{-(\beta_1 + \beta_2 X_i)}\right]^2} = \beta_2 P_i (1 - P_i)$$

Notice that the derivative is nonlinear and depends on the value of x. It is common to evaluate the derivative at the mean of x so that a single derivative can be presented.
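A minimal numerical sketch of this derivative follows; the coefficient values and the mean of $x$ are hypothetical, chosen only to illustrate the computation:

```python
import math

# hypothetical logit estimates, for illustration only
b1, b2 = -3.0, 0.002
x_bar = 1000.0                                   # assumed mean of the explanatory variable

p_bar = 1 / (1 + math.exp(-(b1 + b2 * x_bar)))   # fitted probability at the mean
marginal_effect = b2 * p_bar * (1 - p_bar)       # dP/dx = beta2 * P * (1 - P)
```

Because the derivative is scaled by $P(1-P)$, the same coefficient implies a smaller probability response when the fitted probability is near 0 or 1 than when it is near 0.5.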

Odds Ratio

$$P_i = \frac{e^{\beta_1 + \beta_2 X_i}}{1 + e^{\beta_1 + \beta_2 X_i}}$$

For ease of exposition, we write the above equation as $P_i = \frac{e^{Z_i}}{1 + e^{Z_i}} = \frac{1}{1 + e^{-Z_i}}$, where $Z_i = \beta_1 + \beta_2 X_i$. To avoid the possibility that the predicted values might fall outside the probability interval of 0 to 1, we model the ratio $P_i/(1 - P_i)$. This ratio is the likelihood, or odds, of obtaining a successful outcome (the ratio of the probability that a family will own a car to the probability that it will not own a car)[1].

$$\frac{P_i}{1 - P_i} = \frac{e^{Z_i}/(1 + e^{Z_i})}{1/(1 + e^{Z_i})} = e^{Z_i}$$

If we take the natural log of the above equation, we obtain

$$L_i = \ln\!\left(\frac{P_i}{1 - P_i}\right) = Z_i = \beta_1 + \beta_2 X_i$$

that is, $L_i$, the log of the odds ratio, is not only linear in $x$ but also linear in the parameters. $L_i$ is called the logit, and hence the name logit model.
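The linearity of the logit in $x$ can be checked numerically. The coefficients below are illustrative assumptions; the point is that a one-unit increase in $x$ raises the log-odds by exactly $\beta_2$, i.e. multiplies the odds by $e^{\beta_2}$:

```python
import math

b1, b2 = -3.0, 0.5          # illustrative logit coefficients

def prob(x):
    # P = 1 / (1 + e^{-(b1 + b2 x)})
    return 1 / (1 + math.exp(-(b1 + b2 * x)))

def odds(x):
    p = prob(x)
    return p / (1 - p)      # odds of a successful outcome

# the logit L = ln(odds) is linear in x, so L(3) - L(2) = b2 exactly
L2, L3 = math.log(odds(2.0)), math.log(odds(3.0))
```

This multiplicative-odds interpretation is often more natural to report than the raw coefficient.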

The logit model cannot be estimated using OLS. Instead, we use the MLE discussed in the previous section, an iterative estimation technique that is especially useful for equations that are nonlinear in the coefficients. MLE is inherently different from least squares in that it chooses the coefficient estimates that maximize the likelihood of the sample data set being observed. Interestingly, OLS and MLE need not differ: for a linear equation that meets the classical assumptions (including the normality assumption), the MLEs are identical to the OLS estimates.

Once the logit has been estimated, hypothesis testing and econometric analysis can be undertaken in much the same way as for linear equations. When interpreting the coefficients, however, be careful to recall that they represent the impact of a one-unit increase in the independent variable in question, holding the other explanatory variables constant, on the log of the odds of a given choice, not on the probability itself. But we can always compute the probability at a given level of the variable in question.

2.2. Probit Model

In the case of the probit model, we assume that $u_i \sim N(0, 1)$. That is, we assume the error in the utility index model is standard normally distributed. In this case,

$$\Pr(y_i = 1) = F(\beta_1 + \beta_2 X_i)$$

where F is the standard normal cumulative distribution function. That is,

$$F(\beta_1 + \beta_2 X_i) = \int_{-\infty}^{\beta_1 + \beta_2 X_i} \frac{1}{\sqrt{2\pi}}\, e^{-t^2/2}\, dt$$

In practice, the c.d.f.s of the logit and the probit look quite similar to one another. Once again, calculating the derivative is moderately complicated. In this case,

$$\frac{\partial P_i}{\partial X_i} = f(\beta_1 + \beta_2 X_i)\,\beta_2$$

where f is the density function of the standard normal distribution. As in the logit case, the derivative is nonlinear and is often evaluated at the mean of the explanatory variables. In the case of dummy explanatory variables, it is common to estimate the derivative as the probability that $y_i = 1$ when the dummy variable is 1 (other variables set to their means) minus the probability that $y_i = 1$ when the dummy variable is 0 (other variables set to their means). That is, you simply calculate how the predicted probability changes when the dummy variable of interest switches from 0 to 1.
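The discrete-change calculation for a dummy regressor can be sketched as follows; all coefficient values and the income mean are hypothetical, and the normal c.d.f. is built from the standard error function:

```python
import math

def norm_cdf(z):
    # standard normal c.d.f. via the error function
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# hypothetical probit estimates: intercept, income slope, and a dummy
b0, b_inc, b_dummy = -1.5, 0.0008, 0.6
inc_mean = 1200.0                     # assumed sample mean of income

# predicted probability with the dummy off and on, income at its mean
p0 = norm_cdf(b0 + b_inc * inc_mean)
p1 = norm_cdf(b0 + b_inc * inc_mean + b_dummy)
effect = p1 - p0                      # discrete-change "marginal effect" of the dummy
```

The discrete change is preferred to the analytical derivative here because a dummy cannot change by an infinitesimal amount.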

Which Is Better: Logit or Probit?

Fortunately, from an empirical standpoint, logits and probits typically yield very similar estimates of the relevant derivatives. This is because the cumulative distribution functions of the logit and probit are similar, differing slightly only in the tails. Thus, the derivatives differ appreciably only if there are many observations in the tails of the distribution. While the derivatives are usually similar, it is important to remember that the parameter estimates associated with logit and probit models are not. A simple approximation suggests that multiplying the logit estimates by 0.625 makes them comparable to the probit estimates.
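Using the slope estimates reported in the example below (probit 0.001003, logit 0.001796), the 0.625 rescaling can be checked directly:

```python
# slope estimates from the openness example below
logit_slope = 0.001796
probit_slope = 0.001003

approx_probit = 0.625 * logit_slope   # rescaled logit estimate
# 0.625 * 0.001796 = 0.0011225, in the same ballpark as the
# probit estimate 0.001003, as the approximation suggests
```

The match is rough rather than exact, which is typical: the rescaling only adjusts for the different variances of the logistic and standard normal distributions.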

Example: We estimate the relationship between the openness of a country (Y) and the country's per capita income in dollars (X) in 1992. We hypothesize that higher per capita income should be associated with free trade, and test this at the 5% significance level. The variable Y takes the value 1 for free trade and 0 otherwise.

Since the dependent variable is a binary variable, we set up the index function

$$y_i^* = \beta_1 + \beta_2 X_i + u_i$$

If $y_i^* > 0$, $Y_i = 1$ (open); if $y_i^* \le 0$, $Y_i = 0$ (not open).

Probit estimation gives the following results:

Dependent Variable: Y
Method: ML - Binary Probit (Quadratic hill climbing)
Date: 05/27/04   Time: 13:54
Sample (adjusted): 1 20
Included observations: 20 after adjusting endpoints
Convergence achieved after 7 iterations
Covariance matrix computed using second derivatives

Variable    Coefficient    Std. Error    z-Statistic    Prob.
C           -1.994184      0.824708      -2.418048      0.0156
X            0.001003      0.000471       2.129488      0.0332

Mean dependent var       0.500000    S.D. dependent var       0.512989
S.E. of regression       0.337280    Akaike info criterion    0.886471
Sum squared resid        2.047636    Schwarz criterion        0.986045
Log likelihood          -6.864713    Hannan-Quinn criter.     0.905909
Restr. log likelihood  -13.86294     Avg. log likelihood     -0.343236
LR statistic (1 df)     13.99646     McFadden R-squared       0.504816
Probability(LR stat)     0.000183

The slope is significant at the 5% level.

The interpretation of the $\beta$'s changes in a probit model: $\beta_2$ is the effect of X on the latent index $y^*$, not on the probability itself. The marginal effect of X on $\Pr(Y = 1)$ is easier to interpret and is given by

$$\frac{\partial \Pr(Y_i = 1)}{\partial X_i} = f(\beta_1 + \beta_2 X_i)\,\beta_2$$

where f is the standard normal density, typically evaluated at the mean of X.
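This marginal effect can be computed from the estimates in the output above. The sample mean of X is not reported, so the evaluation point below (X = 2000 dollars per capita) is a hypothetical choice for illustration:

```python
import math

# probit estimates from the output above
b1, b2 = -1.994184, 0.001003

def norm_pdf(z):
    # standard normal density f(z)
    return math.exp(-z * z / 2) / math.sqrt(2 * math.pi)

# marginal effect dPr(Y=1)/dX = f(b1 + b2*X) * b2, evaluated at a
# hypothetical per capita income of X = 2000 dollars
x0 = 2000.0
me = norm_pdf(b1 + b2 * x0) * b2
```

So at this income level, an extra dollar of per capita income raises the estimated probability of openness by roughly 0.0004.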

To assess the fit of the model (analogous to R-squared), the maximized log-likelihood value ($\ln L$) can be compared to the maximized log-likelihood of a model containing only a constant ($\ln L_0$) in the likelihood ratio index

$$\text{LRI} = 1 - \frac{\ln L}{\ln L_0}$$

reported in the output above as the McFadden R-squared.
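The reported fit statistics can be reproduced directly from the two log-likelihood values in the probit output above:

```python
# log-likelihood values from the probit output above
lnL = -6.864713            # unrestricted model
lnL0 = -13.86294           # constant-only (restricted) model

mcfadden_r2 = 1 - lnL / lnL0     # likelihood ratio index
lr_stat = 2 * (lnL - lnL0)       # LR statistic, chi-squared with 1 df
# these reproduce the reported McFadden R-squared (0.504816)
# and LR statistic (13.99646)
```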

Logit estimation gives the following results:

Dependent Variable: Y
Method: ML - Binary Logit (Quadratic hill climbing)
Date: 05/27/04   Time: 14:12
Sample (adjusted): 1 20
Included observations: 20 after adjusting endpoints
Convergence achieved after 7 iterations
Covariance matrix computed using second derivatives

Variable    Coefficient    Std. Error    z-Statistic    Prob.
C           -3.604995      1.681068      -2.144467      0.0320
X            0.001796      0.000900       1.995415      0.0460

Mean dependent var       0.500000    S.D. dependent var       0.512989
S.E. of regression       0.333745    Akaike info criterion    0.876647
Sum squared resid        2.004939    Schwarz criterion        0.976220
Log likelihood          -6.766465    Hannan-Quinn criter.     0.896084
Restr. log likelihood  -13.86294     Avg. log likelihood     -0.338323
LR statistic (1 df)     14.19296     McFadden R-squared       0.511903
Probability(LR stat)     0.000165

As you can see from the output, the slope coefficient is significant at the 5% level.

The logit coefficients are proportionally larger in absolute value than the probit coefficients, but the marginal effects and significance are similar. For the logit, the marginal effect is

$$\frac{\partial \Pr(Y_i = 1)}{\partial X_i} = \hat{\beta}_2 \hat{P}_i (1 - \hat{P}_i)$$

evaluated at the mean of X. This can be interpreted as the marginal effect of GDP per capita on the expected value of Y.

Example:

From the 1980 household budget survey of the Dutch Central Bureau of Statistics, J.S. Cramer obtained the following logit model based on a sample of 2,820 households. (The results given here are based on the method of maximum likelihood, after the third iteration.) The purpose of the logit model was to determine car ownership as a function of the (logarithm of) income. Car ownership was a binary variable: Y = 1 if a household owns a car, 0 otherwise.

[pic]

t = (-3.35) (4.05)

$\chi^2$ (1 df) = 16.681 (p value = 0.0000)

where $\hat{L}$ is the estimated logit and Ln Income is the logarithm of income. The $\chi^2$ statistic measures the goodness of fit of the model.

a ) Interpret the estimated logit model.

b) From the estimated logit model, how would you obtain the expression for the probability of car ownership?

c) What is the probability that a household with an income of 20,000 will own a car? And at an income level of 25,000? What is the rate of change of probability at the income level of 20,000?

d) Comment on the statistical significance of the estimated logit model.

-----------------------

[1] Odds refer to the ratio of the number of times a choice will be made to the number of times it will not. In today's world, odds are used most frequently with respect to sporting events, such as horse races, on which bets are made.
