Stat 101 Exam 3 - University of Illinois at Chicago

Stat 101 Exam 3: Important Formulas and Concepts [1]

1 Sampling Distributions and Confidence Intervals

1.1 Definitions

1. Sampling Distribution Different random samples give different values of a statistic. The distribution of the statistic over all possible samples is called the sampling distribution. The sampling distribution model shows the behavior of the statistic over all possible samples of the same size n.

2. Sampling Distribution Model Because we can never see all possible samples, we often use a model as a practical way of describing the theoretical sampling distribution.

3. Sampling Distribution Model for a Proportion If the assumptions of independence and random sampling are met, and we expect at least 10 successes and 10 failures, then the sampling distribution of a proportion is modeled by a Normal model with mean equal to the true proportion p and standard deviation equal to √(p(1 - p)/n) (see the simulation sketch at the end of this section).

p̂ ~ N(p, √(p(1 - p)/n))

4. Sampling Error Sample-to-sample variation

5. Central Limit Theorem (CLT) The sampling distribution model of the sample mean (and proportion) is approximately Normal for large n, regardless of the distribution of the population as long as the observations are independent. The larger the sample, the better the approximation will be.

6. Sampling Distribution Model for a Mean If the assumptions of independence and random sampling are met, and the sample size is large enough, the sampling distribution of the sample mean is modeled by a Normal model with mean equal to the population mean μ and standard deviation equal to σ/√n.

X̄ ~ N(μ, σ/√n)

[1] This version: April 27, 2020, by Dale Embers. May not include all things that could possibly be tested on. To be used as an additional reference for studying.

7. Confidence Interval (CI) A level C confidence interval for a model parameter is an interval of values, usually of the form Estimate ± Margin of Error, found from data in such a way that C% of all random samples will yield intervals that capture the true parameter value.

8. Margin of Error (MOE) In a confidence interval, the extent of the interval on either side of the observed statistic value. It is typically the product of a critical value from the sampling distribution and a standard error from the data. A small MOE corresponds to a confidence interval that pins down the parameter precisely; a large MOE corresponds to a confidence interval that gives relatively little information about the estimated parameter.

9. Critical Value The number of standard errors to move away from the mean of the sampling distribution to correspond to the specified level of confidence. For a Normal sampling distribution the critical value, denoted z*, is usually found from a table or technology. For a t-distribution the critical value, denoted t*, is also found from a table or technology.

10. Some z* Values (Critical Values) for Confidence Intervals

CI:  90%    95%    98%    99%
z*:  1.645  1.96   2.326  2.576
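
A quick Python sketch of where this section's numbers come from (numpy and scipy are assumed to be installed; the values of p, n, and sigma below are made up for illustration):

import numpy as np
from scipy.stats import norm

# 1. The z* values in the table above are Normal quantiles that leave (1 - C)/2 in each tail.
for C in (0.90, 0.95, 0.98, 0.99):
    print(f"{C:.0%} CI: z* = {norm.ppf(1 - (1 - C) / 2):.3f}")   # 1.645, 1.960, 2.326, 2.576

rng = np.random.default_rng(1)
reps = 10_000                                      # number of simulated samples

# 2. Simulation check of SD(p-hat) = sqrt(p(1 - p)/n) for a made-up p and n.
p, n = 0.30, 100
p_hats = rng.binomial(n, p, size=reps) / n         # one p-hat per simulated sample
print("SD of simulated p-hats:", p_hats.std(), "vs formula:", np.sqrt(p * (1 - p) / n))

# 3. CLT check: means of samples from a skewed population still have SD close to sigma/sqrt(n).
sigma, n2 = 5.0, 50                                # exponential population with mean = SD = 5
x_bars = rng.exponential(scale=sigma, size=(reps, n2)).mean(axis=1)
print("SD of simulated x-bars:", x_bars.std(), "vs formula:", sigma / np.sqrt(n2))

The simulated standard deviations should agree with the formulas to about two decimal places.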

2 Chapter 16

1. One-sample z-interval for the mean This is the confidence interval for the mean. It is given by x̄ ± z* · SD(x̄), where SD(x̄) = σ/√n. The critical value z* depends on the particular confidence level that you specify.

2. The margin of error is z* · σ/√n.

3. Given a desired margin of error m, solve m = z* · σ/√n for n to get the desired sample size. This will result in n = (z* · σ/m)².
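
A short Python sketch of these Chapter 16 formulas (scipy is assumed to be installed, σ is treated as known, and the values of x̄, σ, n, and m are hypothetical):

from math import sqrt, ceil
from scipy.stats import norm

x_bar, sigma, n = 24.5, 6.0, 64          # hypothetical sample mean, known SD, sample size
C = 0.95
z_star = norm.ppf(1 - (1 - C) / 2)       # critical value, 1.96 for 95% confidence

moe = z_star * sigma / sqrt(n)           # margin of error = z* * sigma / sqrt(n)
print("95% z-interval:", (x_bar - moe, x_bar + moe))

m = 1.0                                  # desired margin of error
n_needed = ceil((z_star * sigma / m) ** 2)   # n = (z* sigma / m)^2, rounded up
print("sample size needed for MOE", m, ":", n_needed)   # 139 here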

3 Chapter 17

1. Hypothesis A model or proposition that we adopt in order to test it.

2. Null Hypothesis (H0) The claim being assessed in a hypothesis test that states "no change from the traditional value," "no effect", "no difference", or "no relationship". For a claim to be a testable null hypothesis, it must specify a value for some population parameter that can form the basis for assuming a sampling distribution for a test statistic.

3. Alternative Hypothesis (HA) The alternative hypothesis proposes what we should conclude if we reject the null hypothesis.

4. P-value The probability of observing a value for a test statistic at least as far from the hypothesized value as the statistic value actually observed if the null hypothesis is true. A small p-value indicates either that the observation is improbable or that the probability calculation was based on incorrect assumptions. The assumed truth of the null hypothesis is the assumption under suspicion.

5. One-sample z-test for the mean

This is the hypothesis test. It tests the hypothesis H0 : μ = μ0 using the statistic z = (x̄ - μ0)/(σ/√n) (see the sketch at the end of this section).

6. One-sided (Tailed) Alternative An alternative hypothesis is one-sided (for example HA : μ > μ0 or HA : μ < μ0) when we are interested in deviations in only one direction away from the hypothesized parameter value.

7. Two-sided (Tailed) Alternative An alternative hypothesis is two-sided (for example HA : μ ≠ μ0) when we are interested in deviations in either direction away from the hypothesized parameter value.

8. One rejects H0 and accepts HA, and calls the results statistically significant, if the P-value is sufficiently small (less than α).
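
A minimal Python sketch of the one-sample z-test and its one- and two-sided P-values (σ is treated as known, scipy is assumed to be installed, and the numbers are hypothetical):

from math import sqrt
from scipy.stats import norm

mu0, sigma = 100.0, 15.0                  # hypothesized mean and known population SD
x_bar, n = 104.2, 40                      # hypothetical sample mean and sample size

z = (x_bar - mu0) / (sigma / sqrt(n))     # z = (x-bar - mu0) / (sigma / sqrt(n))
p_upper = norm.sf(z)                      # one-sided P-value for HA: mu > mu0
p_two = 2 * norm.sf(abs(z))               # two-sided P-value for HA: mu != mu0

print(f"z = {z:.2f}, one-sided P = {p_upper:.4f}, two-sided P = {p_two:.4f}")
# Reject H0 at alpha = 0.05 if the relevant P-value falls below 0.05.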

4 Chapter 18

1. Statistically significant When the p-value falls below the alpha level, we say that the test is "statistically significant" at that alpha level.

2. Alpha level The threshold p-value that determines when we reject a null hypothesis. If we observe a statistic whose p-value based on the null hypothesis is less than α, we reject that null hypothesis.

3. Significance level The alpha level is also called the significance level, most often used in a phrase such as the conclusion that a particular test is "significant at the 5% significance level."

4. Type I Error The error of rejecting a null hypothesis when in fact it is true (also called a false positive). The probability of a Type I Error is α.

5. Type II Error The error of failing to reject a null hypothesis when in fact it is false (also called a false negative). The probability of a Type II Error is β.

6. The probability of a Type II Error is commonly denoted β and depends on the effect size.

7. Power The probability that a hypothesis test will correctly reject a false null hypothesis is the power of the test. For any specific value in the alternative, the power is 1 - β.
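
An illustrative simulation of α, β, and power for an upper-tailed one-sample z-test (numpy and scipy are assumed to be installed; μ0, μA, σ, and n are made-up values, not from the guide). It counts how often the test rejects H0 when H0 is true and when one specific alternative is true:

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
mu0, muA = 100.0, 105.0                   # null value and one specific alternative value
sigma, n, alpha, reps = 15.0, 40, 0.05, 20_000

def reject_rate(true_mu):
    """Fraction of simulated samples for which the upper-tailed z-test rejects H0: mu = mu0."""
    samples = rng.normal(true_mu, sigma, size=(reps, n))
    z = (samples.mean(axis=1) - mu0) / (sigma / np.sqrt(n))
    return np.mean(norm.sf(z) < alpha)    # norm.sf(z) is the one-sided P-value

print("Type I error rate (true mu = mu0):", reject_rate(mu0))   # close to alpha = 0.05
power = reject_rate(muA)
print("Power when mu = muA:", power)                            # roughly 0.68 here
print("Type II error rate beta = 1 - power:", 1 - power)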

5 Chapter 20

1. Student's t distribution A family of distributions indexed by its degrees of freedom. The t-models are unimodal, symmetric, and bell-shaped, but have fatter tails and a narrower center than the Normal model. As the degrees of freedom increase, t-distributions approach the Normal distribution.

2. Degrees of Freedom for Student's t distribution (df) For the t-distribution, the degrees of freedom are equal to n - 1, where n is the sample size.

3. One-sample t-interval for the mean

This is the confidence interval for the mean. It is given by ȳ ± t*_{n-1} · SE(ȳ), where SE(ȳ) = s/√n. The critical value t*_{n-1} depends on the particular confidence level that you specify and on the number of degrees of freedom n - 1 (see the sketch at the end of this section).

4. One-sample t-test for the mean

This is the hypothesis test. It tests the hypothesis H0 : μ = μ0 using the statistic t = (x̄ - μ0)/(s/√n), which has a t-distribution with n - 1 degrees of freedom.

5. One-sided (Tailed) Alternative An alternative hypothesis is one-sided (for example HA : μ > μ0 or HA : μ < μ0) when we are interested in deviations in only one direction away from the hypothesized parameter value.

6. Two-sided (Tailed) Alternative An alternative hypothesis is two-sided (for example HA : μ ≠ μ0) when we are interested in deviations in either direction away from the hypothesized parameter value.
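
A minimal Python sketch of the one-sample t-interval and t-test (the data values and μ0 are hypothetical; scipy is assumed to be installed):

from math import sqrt
from statistics import mean, stdev
from scipy.stats import t

data = [9.8, 10.4, 10.1, 9.6, 10.9, 10.2, 9.9, 10.5]   # hypothetical sample (n = 8)
n = len(data)
y_bar, s = mean(data), stdev(data)        # sample mean and sample standard deviation
se = s / sqrt(n)                          # SE(y-bar) = s / sqrt(n)

# Confidence interval: y-bar +/- t* SE(y-bar), with n - 1 degrees of freedom
C = 0.95
t_star = t.ppf(1 - (1 - C) / 2, df=n - 1)
print("95% t-interval:", (y_bar - t_star * se, y_bar + t_star * se))

# Hypothesis test of H0: mu = mu0 against the two-sided alternative HA: mu != mu0
mu0 = 10.0
t_stat = (y_bar - mu0) / se               # t = (y-bar - mu0) / (s / sqrt(n))
p_two = 2 * t.sf(abs(t_stat), df=n - 1)   # two-sided P-value
print(f"t = {t_stat:.3f}, two-sided P = {p_two:.4f}")
# scipy.stats.ttest_1samp(data, mu0) gives the same t and P-value in one call.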

6 Chapter 22

1. One-sample z-interval for the population proportion

This is the confidence interval for the proportion. It is given by p̂ ± z* · √(p̂(1 - p̂)/n). The critical value z* depends on the particular confidence level that you specify (see the sketch at the end of this section).

2. The margin of error is z* · √(p̂(1 - p̂)/n).

3. Given a desired margin of error m, solve m = z* · √(p(1 - p)/n) for n to get the desired sample size. This will result in n = (z*/m)² · p(1 - p). Use an estimate for p, or p = 0.5 if no estimate exists.

4. One-proportion z-test

A test of the null hypothesis that the proportion of a single sample equals a specified value, H0 : p = p0, by computing the test statistic z = (p̂ - p0)/√(p0(1 - p0)/n).

5. One-sided (Tailed) Alternative An alternative hypothesis is one-sided (for example HA : p > p0 or HA : p < p0) when we are interested in deviations in only one direction away from the hypothesized parameter value.

6. Two-sided (Tailed) Alternative An alternative hypothesis is two-sided (for example HA : p ≠ p0) when we are interested in deviations in either direction away from the hypothesized parameter value.
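
A short Python sketch of the Chapter 22 formulas (the counts, m, and p0 are hypothetical; scipy is assumed to be installed):

from math import sqrt, ceil
from scipy.stats import norm

successes, n = 132, 240                   # hypothetical counts
p_hat = successes / n
z_star = norm.ppf(0.975)                  # 1.96 for a 95% interval

# Confidence interval: p-hat +/- z* sqrt(p-hat(1 - p-hat)/n)
moe = z_star * sqrt(p_hat * (1 - p_hat) / n)
print("95% CI for p:", (p_hat - moe, p_hat + moe))

# Sample size for a desired margin of error m, using the conservative guess p = 0.5
m = 0.03
n_needed = ceil((z_star / m) ** 2 * 0.5 * (1 - 0.5))
print("sample size needed for MOE", m, ":", n_needed)      # 1068 here

# One-proportion z-test of H0: p = p0 vs HA: p != p0 (the SD under H0 uses p0, not p-hat)
p0 = 0.50
z = (p_hat - p0) / sqrt(p0 * (1 - p0) / n)
p_two = 2 * norm.sf(abs(z))
print(f"z = {z:.2f}, two-sided P = {p_two:.4f}")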

7 Confidence Interval Creation and Hypothesis Testing Summary

7.1 One-Proportion

Proportion - always use p

• Confidence Interval Creation

CI: p̂ ± z* · √(p̂(1 - p̂)/n), where the margin of error (MOE) is z* · √(p̂(1 - p̂)/n) and z* is the critical value.

Table of critical values z* for confidence intervals:

CI:  90%    95%    96%    98%    99%
z*:  1.645  1.96   2.054  2.326  2.576

• Hypothesis Testing

Step 1: Write down your hypotheses: H0 : p = p0 and HA : p > p0, p < p0, or p ≠ p0.

Google Online Preview   Download