INTRODUCTION TO T-TESTS



Introduction to the t-Distribution

A. What we have done so far:

• Compared one sample mean to a known population μ

• Assumed σ² (and σ_M) were known exactly

• In reality, σ² is rarely known!

• Must use s², our unbiased estimate of σ²

• Can’t use the z-statistic; need to use the t-statistic

Research Problem:

Does time pass more quickly when we’re having fun?

Perform a pleasant task for 12 minutes.

Judge how much time has passed.

Will judgments differ significantly from 12 minutes, on average?

B. t-test for One Sample Mean

t = (M − μ) / s_M

where: s_M = s / √n  or equivalently  s_M = √(s² / n)

s_M is the replacement for σ_M

• t is a substitute for z whenever σ is unknown

• s² estimates σ² in the formula for the standard error

• s_M serves as the estimate of σ_M

• df = n − 1
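The one-sample formulas above translate directly into code; the following is a minimal Python sketch (the function names are illustrative, not part of the course materials):

```python
import math

def one_sample_t(M, mu_hyp, s2, n):
    """Observed t for one sample mean: t = (M - mu_hyp) / s_M."""
    s_M = math.sqrt(s2 / n)   # estimated standard error, s_M = sqrt(s^2 / n)
    return (M - mu_hyp) / s_M

def one_sample_df(n):
    """Degrees of freedom for the one-sample t-test: df = n - 1."""
    return n - 1
```

With the time-judgment data used later in these notes (M = 10.5, s² = 2.25, n = 10), `one_sample_t(10.5, 12, 2.25, 10)` gives roughly −3.16 with df = 9.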

C. Sampling Distribution of t

• a family of distributions, one for every sample size n

• Not actually n, but degrees of freedom

df = n − 1

larger df → t gets closer to the normal curve

larger df → s² is a better estimate of σ²

• A sampling distribution of a statistic

• All possible random samples of size n

• symmetric, unimodal, bell-shaped

• ( = 0

• Major difference between t and z:

tails of t are more plump

center is more flat

Table of the t-distribution:

Table B.2

• Values represent critical values (CV)

• Values mark proportions in the tails

• t is symmetric, so negative values not tabled

Need three things:

(1) one-tailed or two-tailed hypothesis

(2) alpha (α) level of significance

(3) degrees of freedom (df)

• Caution: not all values of df appear in the table

If your df is not tabled, use the CV for the next smaller df

example: df = 43, use CV for df = 40

df = 49, use CV for df = 40
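The “use the CV for the next smaller df” rule can be sketched as a table lookup; the hard-coded entries below are standard two-tailed α = .05 critical values of the kind tabled in Table B.2 (a real table has many more rows):

```python
# Two-tailed critical values at alpha = .05 for a few values of df
# (standard t-table entries; illustrative subset only).
CRIT_TWO_TAILED_05 = {1: 12.706, 3: 3.182, 9: 2.262, 30: 2.042, 40: 2.021, 60: 2.000}

def critical_value(df, table=CRIT_TWO_TAILED_05):
    """If df is not tabled, fall back to the largest tabled df at or below it."""
    tabled = [d for d in table if d <= df]
    return table[max(tabled)]
```

`critical_value(43)` and `critical_value(49)` both return 2.021, the df = 40 entry, matching the examples above.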

D. Step-by-Step Example

(1) Does a pleasant task affect judgments of time passed?

Hypothesized: μ_hyp = 12

Sample data: M = 10.5   s² = 2.25   n = 10

(2) Statistical Hypotheses:

assume two-tailed

H0: μ = 12

H1: μ ≠ 12

(3) Decision Rule:

α = .05

df = n − 1 = 10 − 1 = 9

Critical value from Table B.2 = ±2.262

(4) Compute observed t for one sample mean:

t = (M − μ_hyp) / s_M  where  s_M = √(s² / n)

s_M = √(2.25 / 10) = 0.474   t = (10.5 − 12) / 0.474 ≈ −3.16

(5) Make a decision: |t| = 3.16 > 2.262, so reject H0

(6) Interpret results: participants judged significantly less than the actual 12 minutes to have passed, consistent with time seeming to pass more quickly during a pleasant task
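Steps (3)–(5) can be checked numerically; this sketch assumes α = .05, for which the tabled two-tailed critical value at df = 9 is 2.262:

```python
import math

M, mu_hyp, s2, n = 10.5, 12.0, 2.25, 10   # sample data from step (1)

s_M = math.sqrt(s2 / n)        # estimated standard error, about 0.474
t = (M - mu_hyp) / s_M         # observed t
df = n - 1                     # 9
cv = 2.262                     # two-tailed CV, alpha = .05, df = 9

decision = "reject H0" if abs(t) > cv else "fail to reject H0"
print(round(t, 3), decision)   # -3.162 reject H0
```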

E. Additional Considerations

Bottom Line:

• Use t whenever σ is unknown

• one-sample t when you have one sample mean compared against a hypothesized population μ

• Your μ_hyp can be any reasonable value to test against

• Hypothesis testing steps with t are same as with z

• Differences:

• use s² to estimate σ²

• need df (n − 1) to determine the critical value

t-test for the Difference Between Two Independent Sample Means

Typically have two sample means:

example:

Does new drug reduce depression?

Placebo: M₁ = 30

New Drug: M₂ = 25

Compare two sample means…

are they from the same population….

are the differences simply due to chance?

Research Problem:

Are people less willing to help when the person in

need is responsible for his/her own misfortune?

“Please take a moment to imagine that you're sitting in class one day and the guy sitting next to you mentions that he skipped class last week to go surfing (or because he had a terrible case of the flu). He then asks if he can borrow your lecture notes for the week. How likely are you to lend him your notes?”

1-------2-------3-------4-------5-------6-------7

1 = I definitely would NOT lend him my notes

7 = I definitely WOULD lend him my notes

High responsibility → “went surfing”

Low responsibility → “had a terrible case of the flu”

UCSB class data:

High responsibility: M₁ = 4.65   s₁² = 2.99

Low responsibility: M₂ = 5.34   s₂² = 2.06

Hypothesis Testing with the Two-Sample t

A. Statistical Hypotheses (two-tailed):

H0: μ₁ = μ₂

H1: μ₁ ≠ μ₂

Alternative form: H0: μ₁ − μ₂ = 0

H1: μ₁ − μ₂ ≠ 0

One-Tailed Hypotheses

Upper tail critical:

H0: μ₁ ≤ μ₂

H1: μ₁ > μ₂

Lower tail critical:

H0: μ₁ ≥ μ₂

H1: μ₁ < μ₂

Logic of the new t-test:

t = [(M₁ − M₂) − (μ₁ − μ₂)] / s_(M₁−M₂)

M₁ approximates μ₁ with some error

M₂ approximates μ₂ with some error

Error for one sample mean: s_M = √(s² / n)

Error for two sample means: s_(M₁−M₂) = √(s₁²/n₁ + s₂²/n₂)

Note use of equivalent symbols:

s_(M₁−M₂) = s_diff = s(M₁−M₂)

B. Computing independent measures t-statistic:

t = (sample mean difference) / (estimated standard error)

observed t statistic:

t = (M₁ − M₂) / s_(M₁−M₂)

degrees of freedom:

df = (n₁ − 1) + (n₂ − 1)

estimated standard error (when n₁ = n₂ = n):

s_(M₁−M₂) = √(s₁²/n + s₂²/n)
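Applying the two-sample formulas to the UCSB class data gives a quick sketch; note that the notes do not report the group sizes, so n1 = n2 = 20 below is a purely hypothetical placeholder:

```python
import math

# UCSB class data from the notes; the per-group n is NOT given there,
# so n1 = n2 = 20 is a hypothetical value for illustration only.
M1, s2_1, n1 = 4.65, 2.99, 20   # high responsibility ("went surfing")
M2, s2_2, n2 = 5.34, 2.06, 20   # low responsibility ("had the flu")

s_diff = math.sqrt(s2_1 / n1 + s2_2 / n2)   # estimated standard error (n1 = n2)
t = (M1 - M2) / s_diff                      # observed t
df = (n1 - 1) + (n2 - 1)
print(round(t, 2), df)                      # -1.37 38
```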

[Figure: t distribution with two-tailed critical values ±3.182 marked for df = 3, α = .05; each tail beyond its critical value contains .025 of the area]
