R Critical Values

Printing Critical Values Using R: Normal, t- and F-Distributions

D Prescott

These notes are derived from information available at:



Normal and t-distributions

In the context of regression analysis, single-restriction hypothesis tests are conducted using the t-distribution. Here, it is assumed that hypotheses are tested using two-tailed tests, so the rejection region for a test at the 5% level of significance is defined by:

t > t(0.975, df)

and

t < -t(0.975, df)

where t(0.975, df) denotes the 0.975 quantile of the t-distribution with df degrees of freedom.

The degrees of freedom df are equal to the number of observations less the number of coefficients in the linear model. The significance level is also known as the size of the test, since the rejection region (the t-values for which the hypothesis is rejected) is defined by the size of the probability of a Type I error (say 0.05). Let S be the size of the test, say S = 0.05. For a two-tailed test, the positive critical value is the point on the t-axis for which the area under the curve to its left is 0.975 = 1 - (S/2) and the area under the curve to its right is (S/2) = 0.025. This particular point is defined by the quantile function:

qt(1 - S/2, df)

For example, if the size of the test is S = 0.05 (the significance level is 5%) then 1 - (S/2) = 0.975. Further, if the degrees of freedom are 100, the critical t-value is reported using:

> qt(0.975, 100)
[1] 1.983972

In this case the hypothesis is rejected if the absolute value of the calculated t-statistic is greater than 1.984.
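The decision rule above can be sketched in R as follows; the t-statistic here is a hypothetical illustrative value, not one computed from real data:

```r
# Two-tailed t-test decision rule at size S = 0.05 with df = 100.
S  <- 0.05
df <- 100
t_crit <- qt(1 - S/2, df)        # positive critical value, about 1.984

t_stat <- 2.31                   # hypothetical computed t-statistic
reject <- abs(t_stat) > t_crit   # TRUE here: reject the null hypothesis
reject
```

Because the t-distribution is symmetric, comparing the absolute value of the statistic to the single positive critical value covers both tails of the rejection region.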

As the degrees of freedom increase, the t-distribution converges on the normal distribution so the critical values also converge. For example, suppose df = 1000. In this case the critical t-value for a test at the 1% level of significance (S/2 = 0.005) is

> qt(.995, 1000)
[1] 2.580755

The corresponding critical value for the normal distribution is given by:

> qnorm(.995)
[1] 2.575829

These critical values get even closer if df = 10000:

> qt(.995, 10000)
[1] 2.576321
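The convergence can be seen in one step because qt accepts a vector of degrees of freedom; this sketch tabulates the critical values for several df alongside the normal limit:

```r
# Critical t-values at the 1% level (two-tailed quantile 0.995)
# for increasing degrees of freedom; qt is vectorized over df.
dfs    <- c(30, 100, 1000, 10000)
t_crit <- qt(0.995, dfs)
names(t_crit) <- dfs
t_crit          # values shrink toward the normal critical value
qnorm(0.995)    # limiting value, about 2.5758
```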

F-Tests

In regression analysis, F-tests are used when hypotheses imply two or more linear restrictions on the population parameters. The F-statistic has two degrees of freedom. The first is the number of linear restrictions in the null hypothesis (count the number of "=" signs), say r. The second is the same as in the t-test, i.e. the number of observations less the number of parameters in the unrestricted model (the model that does not impose the restrictions in the null hypothesis). The F-statistic has the form F(r, df). The F-statistic cannot be negative, so the rejection region is confined to the right tail of the distribution. The appropriate quantile function is therefore:

qf(1 - S, r, df)

Here, S is the size or significance level of the test, r is the number of restrictions in the null hypothesis and df is as described in the previous paragraph.

For example, an F-test of 3 linear restrictions with df = 200 at the 5% level of significance has a critical value of

> qf(0.95, 3, 200)
[1] 2.649752
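Equivalently, the decision can be stated through a p-value using pf, the cumulative distribution function of the F-distribution; the F-statistic below is a hypothetical illustrative value:

```r
# Same F-test expressed via a p-value: r = 3 restrictions, df = 200.
r  <- 3
df <- 200
F_stat  <- 3.1                     # hypothetical computed F-statistic
p_value <- 1 - pf(F_stat, r, df)   # area in the right tail beyond F_stat
p_value < 0.05                     # TRUE here: reject at the 5% level
```

Rejecting when the p-value is below S is the same decision as rejecting when the statistic exceeds qf(1 - S, r, df).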

F-tests and t-tests

F-tests are applied when there are multiple linear restrictions in the null hypothesis; t-tests cannot be applied in this case. However, F-tests can also be applied when there is a single restriction. Consequently, a single-restriction hypothesis can be tested by either an F-test or a t-test. In such cases, the two tests are equivalent. The square of the computed t-statistic will equal the computed F-statistic (both positive), and the same relationship will hold for the critical values. As a result, the two tests can never give conflicting results.

Return to the case in which S = 0.05 and df = 100:

> qt(.975, 100)
[1] 1.983972
> qf(.95, 1, 100)
[1] 3.936143
> (qt(.975, 100))^2
[1] 3.936143

This example illustrates that the square of the critical value of the t-statistic is the same as the critical value of the F-test for the same degrees of freedom (given that the null hypothesis has a single restriction).
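The same identity can be checked for several degrees of freedom at once; this sketch confirms that qf(1 - S, 1, df) equals qt(1 - S/2, df)^2 within numerical tolerance:

```r
# Verify qf(1 - S, 1, df) == qt(1 - S/2, df)^2 for a range of df;
# both qf and qt are vectorized over the degrees of freedom.
S   <- 0.05
dfs <- c(10, 50, 100, 500)
all.equal(qf(1 - S, 1, dfs), qt(1 - S/2, dfs)^2)   # TRUE
```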
