Teaching Introductory Statistics with Activities and Data
Teaching Statistical Concepts
with Activities, Data, and Technology
CAUSE/ICPSR Workshop
Washington, DC
September 1, 2010
Beth L. Chance, Allan J. Rossman
Department of Statistics
California Polytechnic State University
San Luis Obispo, CA 93407
bchance@calpoly.edu, arossman@calpoly.edu
Workshop Schedule
9:30-9:45 Introductions, Goals
9:45-10:15 Opening Activity
10:15-11:00 Activities for Collecting Data
11:00-11:15 Break
11:15-11:45 Activities for Collecting Data (cont.)
11:45-12:30 Activities for Analyzing Data
12:30-1:30 Lunch
1:30-2:15 Activities for Exploring Randomness
2:15-3:30 Activities for Drawing Inferences
3:30-3:45 Break
3:45-4:15 Activities for Drawing Inferences (cont.)
4:15-4:45 Resources and Assessment
4:45-5:00 Wrap-Up, Question and Answer
Guidelines for Assessment and Instruction in Statistics Education (GAISE)
endorsed by the American Statistical Association (education/gaise/GAISECollege.htm)
1. Emphasize statistical literacy and develop statistical thinking.
2. Use real data.
3. Stress conceptual understanding rather than mere knowledge of procedures.
4. Foster active learning in the classroom.
5. Use technology for developing conceptual understanding and analyzing data.
6. Use assessments to improve and evaluate student learning.
List of Activities
Opening Activity
Activity 1: Naughty or Nice?
Activities for Collecting Data
Activity 2: Sampling Words
Activity 3: Night-Lights and Near-Sightedness
Activity 4: Have a Nice Trip
Activity 5: Cursive Writing
Activity 6: Memorizing Letters
Activities for Analyzing Data
Activity 7: Matching Variables to Graphs
Activity 8: Rower Weights
Activity 9: Cancer Pamphlets
Activity 10: Draft Lottery
Activity 11: Televisions and Life Expectancy
Activity 6 Revisited
Activities for Exploring Randomness
Activity 12: Random Babies
Activity 13: AIDS Testing
Activity 14: Reese’s Pieces
Activities for Drawing Inferences
Activity 15: Which Tire?
Activity 16: Kissing the Right Way
Activity 17: Reese’s Pieces (cont.)
Activity 18: Dolphin Therapy
Activity 19: Sleep Deprivation
Activity 6 Revisited
Activity 20: Cat Households
Activity 21: Female Senators
Activity 22: Game Show Prizes
Activity 23: Government Spending
See end of handout for activity-specific “morals” given in slides, as well as references and general advice, including for implementing active learning, exam writing, and administering data collection projects.
Activity 1: Naughty or Nice?
We all recognize the difference between naughty and nice, right? What about children less than a year old: do they recognize the difference and show a preference for nice over naughty? In a study reported in the November 2007 issue of Nature, researchers investigated whether infants take into account an individual’s actions towards others in evaluating that individual as appealing or aversive, perhaps laying the foundation for social interaction (Hamlin, Wynn, and Bloom, 2007). In one component of the study, 10-month-old infants were shown a “climber” character (a piece of wood with “google” eyes glued onto it) that could not make it up a hill in two tries. Then they were alternately shown two scenarios for the climber’s next try, one where the climber was pushed to the top of the hill by another character (the “helper”) and one where the climber was pushed back down the hill by another character (the “hinderer”). The infant was alternately shown these two scenarios several times. Then the child was presented with both pieces of wood (the helper and the hinderer) and asked to pick one to play with. The researchers found that 14 of the 16 infants chose the helper over the hinderer. Researchers varied the colors and shapes that were used for the two toys. Videos demonstrating this component of the study can be found at
(a) What proportion of these infants chose the helper toy? Is this more than half (a majority)?
Suppose for the moment that the researchers’ conjecture is wrong, and infants do not really show any preference for either type of toy. In other words, these infants just blindly pick one toy or the other, without any regard for whether it was the helper toy or the hinderer. Put another way, the infants’ selections are just like flipping a coin: Choose the helper if the coin lands heads and the hinderer if it lands tails.
(b) If this is really the case (that no infants have a preference between the helper and hinderer), is it possible that 14 out of 16 infants would have chosen the helper toy just by chance? (Note, this is essentially asking, is it possible that in 16 tosses of a fair coin, you might get 14 heads?)
Well, sure, it’s definitely possible that the infants have no real preference and pure random chance alone led to 14 of 16 choosing the helper toy. But is this a remote possibility, or not so remote? In other words, would the observed result (14 of 16 choosing the helper) be very surprising when infants have no real preference, or somewhat surprising, or not so surprising? If the answer is that the result observed by the researchers would be very surprising for infants who had no real preference, then we would have strong evidence to conclude that infants really do prefer the helper. Why? Because otherwise, we would have to believe that the researchers were very unlucky and a very rare event just happened to occur in this study. It could be just a coincidence, but if we decide that tossing a coin rarely leads to results as extreme as those we saw, we can use this as evidence that the infants were acting not as if they were flipping a coin but instead have a genuine preference for the helper toy (that infants in general have a higher than .5 probability of choosing the helper toy).
So, the key question now is how to determine whether the observed result is surprising under the assumption that infants have no real preference. (We will call this assumption of no genuine preference the null model.) To answer this question, we will assume that infants have no genuine preference and were essentially flipping a coin in making their choices (i.e., knowing the null model to be true), and then replicate the selection process for 16 infants over and over. In other words, we’ll simulate the process of 16 hypothetical infants making their selections by random chance (coin flip), and we’ll see how many of them choose the helper toy. Then we’ll do this again and again, over and over. Every time we’ll see the distribution of toy selections of the 16 infants (the “could have been” distribution), and we’ll count how many infants choose the helper toy. Once we’ve repeated this process a large number of times, we’ll have a pretty good sense for whether 14 of 16 is very surprising, or somewhat surprising, or not so surprising under the null model.
Just to see if you’re following this reasoning, answer the following:
(c) If it turns out that we very rarely see 14 of 16 choosing the helper in our simulated studies, explain why this would mean that the actual study provides strong evidence that infants really do favor the helper toy.
(d) What if it turns out that it’s not very uncommon to see 14 of 16 choosing the helper in our simulated studies? Explain why this would mean that the actual study does not provide much evidence that infants really do favor the helper toy.
Now the practical question is, how do we simulate this selection at random (with no genuine preference)? One answer is to go back to the coin flipping analogy. Let’s literally flip a coin for each of the 16 hypothetical infants: heads will mean to choose the helper, tails to choose the hinderer.
(e) What do you expect to be the most likely outcome: how many of the 16 choosing the helper?
(f) Do you think this simulation process will always result in 8 choosing the helper and 8 the hinderer? Explain.
(g) Flip a coin 16 times, representing the 16 infants in the study. Let a result of heads mean that the infant chose the helper toy, tails for the hinderer toy. How many of the 16 chose the helper toy?
(h) Repeat this three more times. Keep track of how many infants, out of the 16, choose the helper. Record this number for all four of your repetitions (including the one from the previous question):
|Repetition # |1 |2 |3 |4 |
|Number of (simulated) infants who chose helper | | | | |
(i) How many of these four repetitions produced a result at least as extreme (i.e., as far or farther from expected) as what the researchers actually found (14 of 16 choosing the helper)?
(j) Combine your simulation results for each repetition with your classmates. Produce a well-labeled dotplot.
(k) How’s it looking so far? Does it seem like the results actually obtained by these researchers would be very surprising under the null model that infants do not have a genuine preference for either toy? Explain.
We really need to simulate this random selection process hundreds, preferably thousands, of times. This would be very tedious and time-consuming with coins, so let’s turn to technology.
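The applet in the next question automates exactly this kind of simulation. If you prefer to script it yourself, a minimal Python sketch (illustrative only, not part of the original handout) might look like this:

```python
import random

num_reps = 1000     # number of simulated studies
n_infants = 16      # infants per study
observed = 14       # number choosing the helper in the actual study

results = []
for _ in range(num_reps):
    # each hypothetical infant "flips a coin"; True means the helper toy is chosen
    helpers = sum(random.random() < 0.5 for _ in range(n_infants))
    results.append(helpers)

extreme = sum(count >= observed for count in results)
print(f"Repetitions with {observed} or more choosing the helper: {extreme} of {num_reps}")
print(f"Approximate p-value: {extreme / num_reps:.4f}")
```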
(l) Use the Coin Tossing applet at applets/ to simulate these 16 infants making this helper/hinderer choice, still assuming the null model that infants have no real preference and so are equally likely to choose either toy. (Keep the number of repetitions at 1 for now.) Report the number of heads (i.e., the number of infants who choose the helper toy).
(m) Repeat (l) four more times, each time recording the number of the 16 infants who choose the helper toy. Did you get the same number all five times?
(n) Now change the number of repetitions to 995, to produce a total of 1000 repetitions of this process. Comment on the distribution of the number of infants who choose the helper toy, across these 1000 repetitions. In particular, comment on where this distribution is centered (does this make sense to you?) and on how spread out it is and on the distribution’s general shape.
We’ll call the distribution in (n) the “what if?” distribution because it displays how the outcomes (for number of infants who choose the helper toy) would vary if in fact there were no preference for either toy.
(o) Report how many of these 1000 repetitions produced 14 or more infants choosing the helper toy. (Enter 14 in the “as extreme as” box and click on “count”.) Also determine the proportion of these 1000 repetitions that produced such an extreme result.
(p) Is this proportion small enough to consider the actual result obtained by the researchers surprising, assuming the null model that infants have no preference and so choose blindly?
(q) In light of your answers to the previous two questions, would you say that the experimental data obtained by the researchers provide strong evidence that infants in general have a genuine preference for the helper toy over the hinderer toy? Explain.
What bottom line does our analysis lead to? Do infants in general show a genuine preference for the “nice” toy over the “naughty” one? Well, there are rarely definitive answers when working with real data, but our analysis reveals that the study provides strong evidence that these infants are not behaving as if they were tossing coins, in other words that these infants do show a genuine preference for the helper over the hinderer. Why? Because our simulation analysis shows that we would rarely get data like the actual study results if infants really had no preference. The researchers’ result is not consistent with the outcomes we would expect if the infants’ choices follow the coin-tossing process specified by the null model, so instead we will conclude that these infants’ choices are actually governed by a different process where there is a genuine preference for the helper toy. Of course, the researchers really care about whether infants in general (not just the 16 in this study) have such a preference. Extending the results to a larger group (population) of infants depends on whether it’s reasonable to believe that the infants in this study are representative of a larger group of infants.
Let’s take a step back and consider the reasoning process and analysis strategy that we have employed here. Our reasoning process has been to start by supposing that infants in general have no genuine preference between the two toys (our null model), and then ask whether the results observed by the researchers would be unlikely to have occurred just by random chance assuming this null model. We can summarize our analysis strategy as the 3 S’s.
• Statistic: Calculate the value of the statistic from the observed data.
• Simulation: Assume the null model is true, and simulate the random process under this model, producing data that “could have been” produced in the study if the null model were true. Calculate the value of the statistic from these “could have been” data. Then repeat this many times, generating the “what if” distribution of the values of the statistic under the null model.
• Strength of evidence: Evaluate the strength of evidence against the null model by considering how extreme the observed value of the statistic is in the “what if” distribution. If the original statistic is in the tail of the “what if” distribution, then the null model is rejected as not plausible. Otherwise, the null model is considered to be plausible (but not necessarily true, because other models might also not be rejected).
In this study, our statistic is the number of the 16 infants who choose the helper toy. We assume that infants do not prefer either toy (the null model) and simulate the random selection process a large number of times under this assumption. We started out with hands-on simulations using coins, but then we moved on to using technology for speed and efficiency. We noted that our actual statistic (14 of 16 choosing the helper toy) is in the tail of the simulated “what if” distribution. Such a “tail result” indicates that the data observed by the researchers would be very surprising if the null model were true, giving us strong evidence against the null model. So instead of thinking the researchers just got that lucky that day, a more reasonable conclusion would be to reject that null model. Therefore, this study provides strong evidence to conclude that these infants really do prefer the helper toy and were not essentially flipping a coin in making their selections.
Terminology: The long-run proportion of times that an event happens when its random process is repeated indefinitely is called the probability of the event. We can approximate a probability empirically by simulating the random process a large number of times and determining the proportion of times that the event happens.
More specifically, the probability that a random process alone would produce data as extreme as (or more extreme than) the actual study is called a p-value. Our analysis above approximated this p-value by simulating the infants’ random selection process a large number of times and finding how often we obtained results as extreme as the actual data. You can obtain better and better approximations of this p-value by using more and more repetitions in your simulation.
A small p-value indicates that the observed data would be surprising to occur through the random process alone, if the null model were true. Such a result is said to be statistically significant, providing evidence against the null model (that we don’t believe the discrepancy arose just by chance but instead reflects a genuine tendency). The smaller the p-value, the stronger the evidence against the null model. There are no hard-and-fast cut-off values for gauging the smallness of a p-value, but generally speaking:
• A p-value above .10 constitutes little or no evidence against the null model.
• A p-value below .10 but above .05 constitutes moderately strong evidence against the null model.
• A p-value below .05 but above .01 constitutes reasonably strong evidence against the null model.
• A p-value below .01 constitutes very strong evidence against the null model.
Just to make sure you’re following this terminology, answer:
(r) What is the approximate p-value for the helper/hinderer study?
(s) What if the study had found that 10 of the 16 infants chose the helper toy? How would this have affected your analysis, p-value, and conclusion? [Hint: Use your earlier simulation results but explain what you are doing differently now to find the approximate p-value.] Explain why your answers make intuitive sense.
Mathematical note: You can also determine this probability (p-value) exactly using what are called binomial probabilities. The probability of obtaining k successes in a sequence of n trials, with success probability π on each trial, is C(n, k) × π^k × (1 − π)^(n − k), where C(n, k) = n!/[k!(n − k)!] is the number of ways to choose which k of the n trials are successes.
(t) Use this expression to determine the exact probability of obtaining 14 or more successes (infants who choose the helper toy) in a sequence of 16 trials, under the null model that the underlying success probability on each trial is .5.
The exact p-value (to four decimal places) turns out to be .0021. We can interpret this by saying that if infants really had no preference and so were randomly choosing between the two toys, there’s only about a 0.21% chance that 14 or more of the 16 infants would have chosen the helper toy. Because this probability is quite small, the researchers’ data provide very strong evidence that infants in general really do have a preference for the nice (helper) toy.
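If you would like to check this binomial calculation with software rather than by hand, a short Python sketch (illustrative, not from the handout) sums the probabilities for 14, 15, and 16 successes:

```python
from math import comb

n, pi, observed = 16, 0.5, 14

# P(X >= 14) = P(X = 14) + P(X = 15) + P(X = 16) under the null model
p_value = sum(comb(n, k) * pi**k * (1 - pi)**(n - k) for k in range(observed, n + 1))
print(f"Exact p-value: {p_value:.4f}")   # about 0.0021
```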
Activity 2: Sampling Words
One of the most important ideas in statistics is that we can learn a lot about a large group (called a population) by studying a small piece of it (called a sample). Consider the population of 268 words in the following passage:
Four score and seven years ago, our fathers brought forth upon this continent a new nation: conceived in liberty, and dedicated to the proposition that all men are created equal.
Now we are engaged in a great civil war, testing whether that nation, or any nation so conceived and so dedicated, can long endure. We are met on a great battlefield of that war.
We have come to dedicate a portion of that field as a final resting place for those who here gave their lives that that nation might live. It is altogether fitting and proper that we should do this.
But, in a larger sense, we cannot dedicate, we cannot consecrate, we cannot hallow this ground. The brave men, living and dead, who struggled here have consecrated it, far above our poor power to add or detract. The world will little note, nor long remember, what we say here, but it can never forget what they did here.
It is for us the living, rather, to be dedicated here to the unfinished work which they who fought here have thus far so nobly advanced. It is rather for us to be here dedicated to the great task remaining before us, that from these honored dead we take increased devotion to that cause for which they gave the last full measure of devotion, that we here highly resolve that these dead shall not have died in vain, that this nation, under God, shall have a new birth of freedom, and that government of the people, by the people, for the people, shall not perish from the earth.
(a) Select a sample of ten representative words from this population by circling them in the passage above.
The above passage is, of course, Lincoln’s Gettysburg Address. For this activity we are considering this passage a population of words, and the 10 words you selected are considered a sample from this population. In most studies, we do not have access to the entire population and can only consider results for a sample from that population. The goal is to learn something about a very large population (e.g., all American adults, all American registered voters) by studying a sample. The key is to select the sample carefully so that it is representative of the larger population (i.e., has the same characteristics).
(b) Record the word and the number of letters in each of the ten words in your sample:
| |1 |2 |3 |4 |5 |
|ID number | | | | | |
|Word | | | | | |
|Word length | | | | | |
(o) Determine the average length in your sample of five words.
Sampling frame:
|001 |Four |035 |In |
Activity 3: Night-Lights and Near-Sightedness
| |Darkness |Night light |Room light |
|Far-sighted |40 |39 |12 |
|Normal refraction |114 |115 |22 |
|Near-sighted |18 |78 |41 |
[pic]
(d) What does the bar graph reveal about whether myopia increases with higher levels of light exposure? Explain.
When another variable has a potential influence on the response, but its effects cannot be separated from those of the explanatory variable, we say the two variables are confounded. When we classify subjects into different groups based on existing conditions (i.e., in an observational study), there is always the possibility that there are other differences between the groups apart from the explanatory variable that we are focusing on. Therefore, we cannot draw cause/effect conclusions between the explanatory and response variables from an observational study.
(e) Is it valid to conclude that sleeping in a lit room, or with a night light, causes an increase in a child’s risk of near-sightedness? If so, explain why. If not, identify a confounding variable that offers an alternative explanation for the observed association between the variables revealed by the table, and explain why it is confounding. (Be sure to indicate how the confounding variable is related to both the explanatory and response variable. Keep in mind that the association revealed in the table and graph is real; we are just saying there could be an alternative explanation besides cause-and-effect.)
Activity 4: Have a Nice Trip
Researchers wanted to study whether individuals could be taught techniques that would help them more reliably recover from a loss of balance (e.g., uic.edu/ahs/biomechanics/videos/edited_lowering.avi)
(a) Suppose you had 12 subjects to participate in an experiment to compare the “elevating” strategy to the “lowering” strategy. How would you design the study?
(b) Consider the Randomizing Subjects applet at
- Explore the distribution of the difference in the proportion of males in the two treatment groups under random assignment. What is the most common outcome? Is this what you would expect?
- Explore the distribution of the difference in the mean heights in the two treatment groups under random assignment. What is the mean? How often is there more than a 2-inch difference in the mean heights between the two groups?
- Explore the distribution in the group differences on the hidden gene factor and the hidden x-factor. Are the groups usually balanced?
The goal of random assignment is to create groups that can be considered equivalent on all lurking variables. If we believe the groups are equivalent prior to the start of the study, this allows us to eliminate all potential confounding variables as a plausible explanation for any significant differences in the response variable after the treatments are imposed.
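One way to see this balancing tendency for yourself, apart from the applet, is to simulate many random assignments and track a covariate. The sketch below uses hypothetical heights for 12 subjects (illustrative values, not data from the handout):

```python
import random

# Hypothetical heights (inches) for 12 subjects; these are NOT data from the handout
heights = [62, 64, 65, 66, 67, 68, 69, 70, 71, 72, 74, 75]

diffs = []
for _ in range(10000):
    shuffled = random.sample(heights, len(heights))  # one random assignment
    group1, group2 = shuffled[:6], shuffled[6:]
    diffs.append(sum(group1) / 6 - sum(group2) / 6)

average_diff = sum(diffs) / len(diffs)
big_gap = sum(abs(d) > 2 for d in diffs) / len(diffs)
print(f"Average difference in mean heights across assignments: {average_diff:.2f} inches")
print(f"Proportion of assignments with more than a 2-inch gap: {big_gap:.3f}")
```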
Activity 5: Cursive Writing
An article about handwriting appeared in the October 11, 2006 issue of the Washington Post. The article mentioned that among students who took the essay portion of the SAT exam in 2005-06, those who wrote in cursive style scored significantly higher on the essay, on average, than students who used printed block letters.
(a) Identify the explanatory and response variables in this study. Classify each as categorical or quantitative.
(b) Is this an observational study or an experiment? Explain briefly.
(c) Would you conclude from this study that using cursive style causes students to score better on the essay? If so, explain why. If not, identify a potential confounding variable, and explain how it provides an alternative explanation for why the cursive writing group would have a significantly higher average essay score.
The article also mentioned a different study in which the same exact essay was given to many graders. But some graders were shown a cursive version of the essay and the other graders were shown a version with printed block letters. Researchers randomly decided which version the grader would receive. The average score assigned to the essay with the cursive style was significantly higher than the average score assigned to the essay with the printed block letters.
(d) What conclusion would you draw from this second study? Be clear about how this conclusion would differ from that of the first study, and why that conclusion is justified.
Scope of Conclusions permitted depending on study design (adapted from Ramsey and Schafer’s The Statistical Sleuth)
Rows: selection of units; columns: allocation of units to groups.
| |By random assignment |No random assignment | |
|Random sampling |A random sample is selected from one population; units are then randomly assigned to different treatment groups |Random samples are selected from existing distinct populations |→ Inferences to populations can be drawn |
|Not random sampling |A group of study units is found; units are then randomly assigned to treatment groups |Collections of available units from distinct groups are examined | |
| |↓ Can draw cause-and-effect conclusions | | |
Activity 6: Memorizing Letters
You will be asked to memorize as many letters as you can in 20 seconds, in order, from a sequence of 30 letters.
(a) Identify the explanatory variable and the response variable.
(b) What kind of study is this (observational or experimental)? Explain how you know.
(c) Did this study make use of comparison? Why is this important?
(d) Did this study make use of random assignment? Why is that important?
(e) Did this study make use of blindness? Why is that important?
(f) Did this study make use of random sampling? Why is this important?
Activity 7: Matching Variables to Graphs
Match the following variables with the histograms and bar graphs given below. Hint: Think about how each variable should behave.
(a) The height of students in this class
(b) Students’ preference for Coke vs. Pepsi
(c) Number of siblings of students in this class
(d) Amount paid for last haircut by students in this class
(e) Gender breakdown of students in this class
(f) Students’ guesses of Beth Chance’s age.
[pic][pic][pic][pic][pic][pic]
Write a paragraph explaining how you matched the graphs to the variables. For example, what features helped you decide?
Activity 8: Rower Weights
The table below records the weight (in pounds) of each rower on the 2008 U.S. Olympic men’s rowing team. A dotplot of these weights follows.
|Name |Weight |Event |Name |Weight |Event |
|Allen |216 |Eight |Newlin |229 |Four |
|Altman |161 |LW Four |Paradiso |154 |LW Four |
|Banks |194 |Double Sculls |Piermarini |205 |Double Sculls |
|Boyd |209 |Eight |Schnobrich |205 |Eight |
|Coppola |220 |Eight |Schroeder |225 |Quad Sculls |
|Daly |161 |LW Four |Stitt |200 |Quad Sculls |
|Gault |209 |Quad Sculls |Teti |181 |Four |
|Hoopman |200 |Eight |Todd |159 |LW Four |
|Hovey |205 |Pair |Volpenhein |225 |Four |
|Hughes |205 |Quad Sculls |Walsh |220 |Eight |
|Inman |209 |Eight |Winklevoss |209 |Quad Sculls |
|Lanzone |218 |Four |Winklevoss |209 |Pair |
|McElhenney |121 |Eight | | | |
[pic]
(a) Write a paragraph describing key features of this distribution of rowers’ weights. [Hints: Remember the checklist above, and always relate your comments to the context.]
(b) What is the name of the apparent outlier? Suggest an explanation for why his weight differs so substantially from the others.
(c) Suggest an explanation for the clusters apparent in the distribution.
(d) Calculate the mean and median of these weights, and comment on which is larger. Does this make sense?
(e) Remove the coxswain (McElhenney) from the analysis, and re-calculate the mean and median. Comment on how they change.
(f) Now remove the lightweight rowers, and re-calculate the mean and median. Comment on how they change.
(g) Now make the heaviest rower much heavier, first by 20 pounds and then by 200 pounds! Recalculate the mean and median, and comment on how they change.
A measure is said to be resistant if it is not strongly affected by outliers.
(h) Which measure of center, the mean or the median, is resistant to outliers?
(i) How does the shape of the distribution relate to the relative locations of mean and median?
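If you would like to check your answers to (d)-(g) with software, the following sketch recomputes the mean and median from the weights in the table above (an illustrative Python sketch, not part of the original handout):

```python
from statistics import mean, median

# Weights (pounds) of the 25 rowers listed in the table above
weights = [216, 161, 194, 209, 220, 161, 209, 200, 205, 205, 209, 218, 121,
           229, 154, 205, 205, 225, 200, 181, 159, 225, 220, 209, 209]

print(f"All rowers:           mean = {mean(weights):.1f}, median = {median(weights)}")

no_cox = [w for w in weights if w != 121]            # (e) remove the coxswain
print(f"Without coxswain:     mean = {mean(no_cox):.1f}, median = {median(no_cox)}")

heavier = weights[:]                                  # (g) heaviest rower gains 200 pounds
heavier[heavier.index(max(heavier))] += 200
print(f"Heaviest + 200 lbs:   mean = {mean(heavier):.1f}, median = {median(heavier)}")
```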
Activity 9: Cancer Pamphlets
Researchers in Philadelphia investigated whether pamphlets containing information for cancer patients are written at a level that the cancer patients can comprehend. They applied tests to measure the reading levels of 63 cancer patients and also the readability levels of 30 cancer pamphlets (based on such factors as the lengths of sentences and number of polysyllabic words). These numbers correspond to grade levels, but patient reading levels of under grade 3 and above grade 12 are not determined exactly.
The following tables indicate the number of patients at each reading level and the number of pamphlets at each readability level:
|Patients’ reading |< 3 |3 |4 |5 |6 |
|levels | | | | | |
Activity 11: Televisions and Life Expectancy
|Country |Life expectancy (years) |TVs per 1000 people |Country |Life expectancy (years) |TVs per 1000 people |
|Angola |38.45 |15 |Mexico |75.25 |272 |
|Australia |80.45 |716 |Morocco |70.75 |165 |
|Cambodia |59 |9 |Pakistan |63 |105 |
|Canada |80.15 |709 |Russia |67.3 |421 |
|China |72.4 |291 |South Africa |43.3 |138 |
|Egypt |71.05 |170 |Sri Lanka |73.25 |102 |
|France |79.7 |620 |Uganda |51.6 |28 |
|Haiti |52.95 |5 |United Kingdom |78.45 |661 |
|Iraq |68.75 |82 |United States |77.8 |844 |
|Japan |81.25 |719 |Vietnam |70.7 |184 |
|Madagascar |57 |23 |Yemen |61.8 |286 |
(a) Which of the countries listed has the fewest televisions per 1000 people? Which has the most? What are those numbers?
(b) Produce a scatterplot of life expectancy vs. televisions per thousand people. Does there appear to be an association between the two variables? Elaborate briefly.
(c) Since the association is so strongly positive, one might conclude that simply sending television sets to the countries with lower life expectancies would cause their inhabitants to live longer. Comment on this argument.
(d) If two variables have a strong association between them, does it follow that there must be a cause-and-effect relationship between them?
(e) In the case of life expectancy and television sets, suggest a confounding variable that is associated both with a country’s life expectancy and with the prevalence of televisions in the country.
Activity 6: Memorizing Letters Revisited
(g) Produce dotplots to compare the results between the two groups. [Calculate the five number summaries and produce comparative boxplots.] Comment on similarities and differences in the two distributions. Does the conjecture appear to be supported? Explain. What additional features are revealed by these data that make sense in this context?
Example class data
[pic]
[pic]
Activity 12: Random Babies
Suppose that on one night at a certain hospital, four mothers give birth to baby boys (Jerry Jones, Marvin Miller, Sam Smith, and Willy Williams). As a very sick joke, the hospital staff decides to return babies to their mothers completely at random.
(a) Simulate this process by shuffling four index cards marked with the babies’ first names and dealing them randomly onto a sheet marked with the mothers’ last names. Do this five times, in each case recording the number of mothers who get the right baby (a “match”).
Trial #1: Trial #2: Trial #3: Trial #4: Trial #5:
(b) Combine your results with the rest of the class, filling in the “count” row of the table:
|# of matches: |0 |1 |2 |3 |4 |
|count | | | | | |
|proportion | | | | | |
(c) Determine the proportion of times that there are 0 matches, 1 match, and so on. Record these in the “proportion” row of the table above.
• A process is random if individual outcomes are uncertain but there is a regular distribution of outcomes in a large number of repetitions.
• The probability of any outcome in a random process is the proportion of times that the outcome would occur in a very large number of repetitions (relative frequency).
• A probability can be approximated by simulating the random process a large number of times and determining the relative frequency of occurrences.
(d) Use the “Random Babies” java applet to simulate this random process a total of 1000 times. First do 1 at a time for 5 repetitions with the “animate” button checked. Then do 995 more with the “Animation” box unchecked. [Warning: The applet contains explicit visual images revealing where babies come from!] Record the resulting approximate probabilities in the table:
|# of matches: |0 |1 |2 |3 |4 |
|Approx prob | | | | | |
(e) Click on the bar in the graph corresponding to 0 matches, and the applet will reveal a graph of the relative frequency as the number of repetitions increased. Does this graph indicate that the relative frequency is varying less as time goes on, perhaps approaching a specific limiting value?
A theoretical analysis of this problem would consider all of the possible ways to distribute the four babies to the four mothers. All of the possibilities are listed here:
1234 1243 1324 1342 1423 1432
2134 2143 2314 2341 2413 2431
3124 3142 3214 3241 3412 3421
4123 4132 4213 4231 4312 4321
(f) How many possibilities are there for returning the four babies to their mothers?
(g) For each of these possibilities, indicate how many mothers get the correct baby.
(h) Count how many ways there are to get 0 matches, 1 match, and so on. Record these in the middle row of the table below:
|# of matches: |0 |1 |2 |3 |4 |
|# possibilities | | | | | |
|probability | | | | | |
(i) Determine the probability of each event by dividing these counts by your answer to (f). Record these in the last row of the table above.
(j) Are these theoretical probabilities close to the ones you approximated by simulation?
• The listing of all possible outcomes is called the sample space of the random process.
• A probability model describes all possible outcomes and assigns probabilities to them.
• In some cases it is reasonable to assume that the sample space outcomes are equally likely. (This is sometimes called the classical approach to probability.)
(k) For your class simulation results, calculate the average (mean) number of matches per repetition of the process.
• The long-run average value achieved by a numerical random process is called its expected value.
• To calculate this expected value from the (exact) probability distribution, multiply each possibility by its probability, and then add these up over all of the possible outcomes.
(l) Calculate the theoretical expected number of matches from the (exact) probability distribution, and compare that to the average number of matches from the simulated data.
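As a check on the class simulation, the enumeration above can also be carried out in a few lines of code. This Python sketch (illustrative, not part of the handout) tallies the matches over all 24 arrangements and computes the expected value exactly:

```python
from itertools import permutations

mothers = [1, 2, 3, 4]                      # each mother matched with her own number
counts = {k: 0 for k in range(5)}

for arrangement in permutations(mothers):   # all 24 equally likely ways to return the babies
    matches = sum(baby == mother for baby, mother in zip(arrangement, mothers))
    counts[matches] += 1

total = sum(counts.values())
for k in range(5):
    print(f"{k} matches: {counts[k]:2d}/{total} = {counts[k] / total:.4f}")

expected = sum(k * counts[k] / total for k in range(5))
print(f"Expected number of matches: {expected}")
```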
Activity 13: AIDS Testing
The ELISA test for AIDS is used in the screening of blood donations. As with most medical diagnostic tests, the ELISA test is not infallible. If a person actually carries the AIDS virus, experts estimate that this test gives a positive result 97.7% of the time. (This number is called the sensitivity of the test.) If a person does not carry the AIDS virus, ELISA gives a negative result 92.6% of the time (the specificity of the test). Recent estimates are that 0.5% of the American public carries the AIDS virus (the base rate of the disease).
(a) Suppose that someone tells you that they have tested positive. Given this information, how likely do you think it is that the person actually carries the AIDS virus?
Imagine a hypothetical population of 1,000,000 people for whom these percentages hold exactly. You will fill in a two-way table as you derive Bayes’ Theorem to address the question above.
| |Positive test |Negative test |Total |
|Carries AIDS virus |(c) |(c) |(b) |
|Does not carry AIDS virus |(d) |(d) |(b) |
|Total |(e) |(e) |1,000,000 |
(b) Assuming that 0.5% of the population of 1,000,000 people carries AIDS, how many such carriers are there in the population? How many non-carriers are there? (Record these in the table.)
(c) Consider for now just the carriers. If 97.7% of them test positive, how many test positive? How many carriers does that leave who test negative? (Record these in the table.)
(d) Now consider only the non-carriers. If 92.6% of them test negative, how many test negative? How many non-carriers does that leave who test positive? (Record these in the table.)
(e) Determine the total number of positive test results and the total number of negative test results. (Record these in the table.)
(f) Of those who test positive, what proportion actually carry the disease? How does this compare to your prediction in (a)? Explain why this probability is smaller than most people expect.
(g) Of those who test negative, what proportion are actually free of the disease?
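The table arithmetic in (b)-(g) can also be scripted. The sketch below (an illustrative Python version of the hypothetical two-way table, not part of the original handout) reports the two conditional proportions:

```python
population = 1_000_000
base_rate = 0.005        # proportion of the population carrying the virus
sensitivity = 0.977      # P(positive test | carrier)
specificity = 0.926      # P(negative test | non-carrier)

carriers = base_rate * population
non_carriers = population - carriers

carrier_pos = sensitivity * carriers            # carriers who test positive
carrier_neg = carriers - carrier_pos            # carriers who test negative
noncarrier_neg = specificity * non_carriers     # non-carriers who test negative
noncarrier_pos = non_carriers - noncarrier_neg  # non-carriers who test positive

total_pos = carrier_pos + noncarrier_pos
total_neg = carrier_neg + noncarrier_neg

print(f"P(carrier | positive test)     = {carrier_pos / total_pos:.3f}")
print(f"P(non-carrier | negative test) = {noncarrier_neg / total_neg:.5f}")
```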
Activity 14: Reese’s Pieces
(a) Take a random sample of 25 candies and record the number and proportion of each color:
| | |orange |yellow |brown |
| |number | | | |
| |proportion | | | |
(b) Is the candy’s color a quantitative or a categorical variable?
(c) Is the proportion of orange candies among the 25 that you selected a parameter or a statistic?
(d) Is the proportion of orange candies manufactured by Hershey a parameter or a statistic?
(e) Do you know the value of the proportion of orange candies manufactured by Hershey?
(f) Do you know the value of the proportion of orange candies among the 25 that you selected?
(g) Did every student obtain the same proportion of orange candies in his/her sample?
(h) If every student was to estimate the population proportion of orange candies by the proportion of orange candies in his/her sample, would everyone arrive at the same estimate?
• The values of statistics vary from sample to sample. This phenomenon is called sampling variability. Fortunately, if we look at the results of many samples, there is a predictable pattern to this variability.
(i) Add your sample proportion of orange candies to the graph on the board. Around what value (roughly) are the sample proportions centered?
• Since random sampling is unbiased, the actual value of the population proportion should be close to the center of these sample proportions.
(j) If every student was to estimate the population proportion of orange candies by the proportion of orange candies in his/her sample, would most estimates be reasonably close to the true parameter value? Would some estimates be way off? Explain.
We need to take more samples to see more clearly the pattern of how sample statistics vary. For this we can turn to an applet called “Reese’s Pieces.” For now we will suppose that 45% of the population is orange.
(k) Use the applet to draw 500 samples of 25 candies each, assuming that the population proportion of orange is .45. (Pretend that this is really 500 students, each taking 25 candies and counting the number of orange ones.) Sketch and describe a graph of the sample proportions of orange obtained.
(l) Is there an obvious pattern to the distribution of the sample proportions of orange candies? Is it approximately normal?
• Even though the sample proportion of orange candies varies from sample to sample, there is a recognizable long-term pattern to that variation. This pattern is called the sampling distribution of the statistic.
(m) What are the mean and standard deviation of the sample proportions of orange candies?
(n) Now assume that the population proportion of orange candies is .55. Again use the computer to draw 500 samples of 25 candies each. How has the distribution changed?
• shape:
• center:
• spread:
(o) Now use the computer to draw 500 samples of 100 candies each (so these samples are four times larger than the ones you gathered in class). How has the distribution of sample proportions changed (or not changed) from when the sample size was only 25 candies?
• shape:
• center:
• spread:
• A larger sample size produces less variability in sample statistics.
Key result: Suppose that the proportion of a population having some characteristic is denoted by π, and suppose that a random sample of size n is taken from the population. Then the sampling distribution of the sample proportion p̂ is approximately normal with mean π and standard deviation sqrt(π(1 − π)/n). This approximation is generally considered to be valid as long as nπ > 10 and n(1 − π) > 10.
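A quick simulation can be used to check this key result against the formula. The sketch below (illustrative Python, a stand-in for the applet) draws 500 samples of 25 candies with π = .45 and compares the simulated mean and standard deviation of the sample proportions to the theoretical values:

```python
import random
from math import sqrt
from statistics import mean, stdev

pi, n, num_samples = 0.45, 25, 500

sample_props = []
for _ in range(num_samples):
    orange = sum(random.random() < pi for _ in range(n))   # one sample of n candies
    sample_props.append(orange / n)

print(f"Mean of sample proportions: {mean(sample_props):.3f}   (theory: {pi})")
print(f"SD of sample proportions:   {stdev(sample_props):.3f}   "
      f"(theory: {sqrt(pi * (1 - pi) / n):.3f})")
```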
Activity 15: Which Tire?
A legendary story on college campuses concerns two students who miss a chemistry exam because of excessive partying but blame their absence on a flat tire. The professor allowed them to take a make-up exam, and he sent them to separate rooms to take it. The first question, worth five points, was quite easy, and the second question, worth ninety-five points, asked: “Which tire was it?”
(a) If you were asked to identify which tire on a car had gone flat, how would you respond? Please check one response below:
_____ left front _____ right front
_____ left rear _____ right rear
(b) Pool the responses of the class; record the counts below:
|left front |right front |left rear |right rear |
| | | | |
(c) Identify the tire that we predict will be chosen more often than “equal likeliness” would suggest.
(d) If we are wrong and there is nothing special about this particular tire, what is the “null model” we are assuming?
(e) Determine the probability of obtaining at least as many “successes” as we did if there were nothing special about this particular tire.
(f) Is this probability small enough to cast doubt on the assumption that there is nothing special about this tire? Explain.
• This probability is called a p-value. A small p-value casts doubt on the null hypothesis used to perform the calculation (in this case, that the tire we named had a 1/4 probability of being chosen).
(g) Now suppose that you take a different random sample and find that 32% of the sampled students answer with the right front tire. What additional information do you need to determine whether this sample proportion is large enough to provide strong evidence that more than 25% of all students would answer that way?
(h) Conduct the appropriate test (based on a sample proportion of .32 answering “right front”) first with a sample size of 150 and then with a sample size of 400. Report the test statistic and p-value in each case. [You might use the “Test of Significance Calculator” applet at applets/]
(i) Comment on how the strength of evidence against the hypothesis (null model) that 25% of the population would pick right front changes as the sample size increases, even as the sample proportion remains the same with 32% choosing right front. Also explain why this makes intuitive sense.
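If the applet mentioned in (h) is not available, the usual one-proportion z-test can be computed directly. The sketch below (illustrative Python using the normal approximation) shows how the test statistic and p-value change with the sample size:

```python
from math import sqrt
from statistics import NormalDist

p_hat, p0 = 0.32, 0.25     # sample proportion and hypothesized proportion

for n in (150, 400):
    z = (p_hat - p0) / sqrt(p0 * (1 - p0) / n)
    p_value = 1 - NormalDist().cdf(z)      # one-sided: more than 25% choose right front
    print(f"n = {n}: z = {z:.2f}, one-sided p-value = {p_value:.4f}")
```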
Activity 16: Kissing the Right Way
A German bio-psychologist, Onur Güntürkün, was curious whether the human tendency for right-sidedness (e.g., right-handed, right-footed, right-eyed) manifested itself in other situations as well. In trying to understand why human brains function asymmetrically, with each side controlling different abilities, he investigated whether kissing couples were more likely to lean their heads to the right than to the left (Güntürkün, 2003). He and his researchers observed couples (estimated ages 13 to 70 years, not holding any other objects like luggage that might influence their behavior) in public places such as airports, train stations, beaches, and parks in the United States, Germany, and Turkey.
Of the 124 kissing couples observed, 80 leaned their heads to the right.
(a) Calculate the sample proportion who leaned their heads to the right, and denote this with the appropriate symbol.
(b) Investigate whether these data provide strong evidence that kissing couples are more likely to lean their heads to the right than to the left. That is, is the observed result (80 of 124) very surprising if couples are equally likely to turn to the right and to the left? Carry out an appropriate simulation analysis (using the 3S approach), report an approximate p-value, and explain the reasoning behind your conclusion. Hint: Your method of analysis and reasoning process here should be very similar to your analysis of the helper/hinderer toy study.
(c) Now investigate a different null model: that two-thirds of all kissing couples lean their heads to the right. Again conduct a simulation analysis to determine if the observed result is in the tail of the “what if” distribution under this null model.
(d) Now investigate many possible values for the population proportion who lean to the right. Use multiples of .01, and conduct a simulation analysis for each. Determine the values for which the approximate (one-sided) p-value is greater than .05.
The values that produce an approximate p-value greater than .05 are not rejected and are therefore considered plausible values of the parameter. The interval of plausible values is sometimes called a confidence interval for the parameter.
A more conventional alternative for calculating a confidence interval is to take the value of the observed sample proportion and then add and subtract a certain amount, called the margin-of-error. For roughly 95% confidence, the margin-of-error can be calculated approximately as 2 × sqrt(p̂(1 − p̂)/n).
(e) Use this expression to calculate the margin-of-error for the kissing study.
(f) Add and subtract this margin-of-error from the value of the sample proportion, and report the resulting interval.
(g) Is the confidence interval in (f) comparable to the confidence interval in (d)?
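For readers who want to check (d)-(f) with software, the sketch below (illustrative Python; it uses exact binomial tail probabilities as a shortcut in place of the simulation in (d)) computes both the conventional interval and the interval of plausible values:

```python
from math import comb, sqrt

n, successes = 124, 80
p_hat = successes / n

def binom_pmf(k, n, p):
    """Binomial probability of exactly k successes in n trials."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Conventional interval: sample proportion plus/minus roughly two standard errors
moe = 2 * sqrt(p_hat * (1 - p_hat) / n)
print(f"p-hat = {p_hat:.3f}, margin of error = {moe:.3f}, "
      f"interval = ({p_hat - moe:.3f}, {p_hat + moe:.3f})")

# Plausible-values interval: keep each candidate pi (multiples of .01) for which
# neither one-sided tail probability drops to .05 or below
plausible = []
for i in range(1, 100):
    pi = i / 100
    lower_tail = sum(binom_pmf(k, n, pi) for k in range(0, successes + 1))  # P(X <= 80)
    upper_tail = sum(binom_pmf(k, n, pi) for k in range(successes, n + 1))  # P(X >= 80)
    if lower_tail > 0.05 and upper_tail > 0.05:
        plausible.append(pi)
print(f"Plausible values of pi: {min(plausible):.2f} to {max(plausible):.2f}")
```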
Activity 17: Reese’s Pieces (cont.)
(a) Construct an approximate 95% confidence interval for the proportion of all candies that are orange, based on the sample proportion you obtained.
We will turn to an applet called “Simulating Confidence Intervals” to illustrate how to interpret a confidence level. First make sure that the method is set to “Proportions” and “Wald.” Set the population proportion to be .45, the sample size to be 75, and the confidence level to be 95%.
(b) As you take new samples, what do you notice about the intervals? Are they all the same?
(c) Does the value of the population proportion change as you take new samples?
(d) As you take hundreds and then thousands of samples and construct their intervals, about what percentage seem to be successful at capturing the population proportion?
(e) Sort the intervals, and comment on what the intervals that fail to capture the population proportion have in common.
(f) Now change the confidence level to 90%. What two things change about the intervals?
(g) Now change the sample size to 300. Does this produce a higher percentage of successful intervals? What does change about the intervals?
These investigations reveal two important points:
• Interpreting confidence intervals correctly requires us to think about what would happen if we took random samples from the population over and over again, constructing a CI for the unknown population parameter from each sample.
• The confidence level indicates the percentage of samples that would produce a CI that successfully captures the actual value of the population parameter.
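The behavior the applet displays can be reproduced with a short simulation. This sketch (illustrative Python, not the applet itself) builds 1000 intervals of the Wald form from samples of size 75 with π = .45 and reports how many capture π:

```python
import random
from math import sqrt

pi, n, num_intervals, z_star = 0.45, 75, 1000, 1.96
captured = 0

for _ in range(num_intervals):
    successes = sum(random.random() < pi for _ in range(n))   # one sample
    p_hat = successes / n
    moe = z_star * sqrt(p_hat * (1 - p_hat) / n)              # Wald interval
    if p_hat - moe <= pi <= p_hat + moe:
        captured += 1

print(f"{captured} of {num_intervals} intervals captured pi = {pi} "
      f"({100 * captured / num_intervals:.1f}%)")
```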
Activity 18: Dolphin Therapy
Swimming with dolphins can certainly be fun, but is it also therapeutic for patients suffering from clinical depression? To investigate this possibility, researchers recruited 30 subjects aged 18-65 with a clinical diagnosis of mild to moderate depression. Subjects were required to discontinue use of any antidepressant drugs or psychotherapy four weeks prior to the experiment, and throughout the experiment. These 30 subjects went to an island off the coast of Honduras, where they were randomly assigned to one of two treatment groups. Both groups engaged in the same amount of swimming and snorkeling each day, but one group (the animal care program) did so in the presence of bottlenose dolphins and the other group (outdoor nature program) did not. At the end of two weeks, each subject’s level of depression was evaluated, as it had been at the beginning of the study, and it was determined whether they showed “substantial improvement” (reducing their level of depression) by the end of the study (Antonioli and Reveley, 2005).
Before the data are collected, you should anticipate the outcomes and state the research hypothesis.
(a) What were the researchers hoping to show in this study?
(b) Based on the above description of the study, identify the following terms:
Observational units
Explanatory variable
Response variable
Type of study (anecdotal, observational, experimental)
How was randomness used in the study (sampling or assignment or both)
The researchers found that 10 of 15 subjects in the dolphin therapy group showed substantial improvement, compared to 3 of 15 subjects in the control group.
(c) Organize these results in a 2 × 2 table:
| |Dolphin therapy |Control group |Total |
|Showed substantial improvement | | | |
|Did not show substantial improvement | | | |
|Total | | | |
(d) Calculate the conditional proportion who improved in each group and the observed difference in these two proportions (dolphin group – control group).
Proportion in Dolphin group that substantially improved:
Proportion of Control group that substantially improved:
Difference (dolphin- control):
(e) Produce a segmented bar graph comparing the improvement rates between the two groups.
(f) What did you learn about the difference in the likelihood of improving substantially between the two treatment groups? Do the data appear to support the claim that dolphin therapy is more effective than swimming alone?
The above descriptive analysis tells us what we have learned about the 30 subjects in the study. But can we make any inferences beyond what happened in this study? Does the higher improvement rate in the dolphin group provide convincing evidence that the dolphin therapy is genuinely more effective? Is it possible that there is no difference between the two treatments and that the difference observed could have arisen just from the random nature of putting the 30 subjects into groups (i.e., the luck of the draw) and not quite getting equivalent groups? We can’t expect the random assignment to always create perfectly equal groups, but is it reasonable to believe the random assignment alone could have led to this large a difference?
While it is possible that dolphin therapy is not more effective and the researchers were simply unlucky, happening to “draw” more of the subjects who were going to improve anyway into the dolphin therapy group, the key question is whether that is probable. If 13 of the 30 people were going to improve regardless of whether they swam with dolphins (the “null model”), we would have expected about 6 or 7 of them to end up in each group. So how unlikely is a 10/3 split under this random assignment process alone?
We will answer this question by replicating the randomization process used in the study, but in a situation where we know that dolphin therapy is not effective (assuming the null model is true). We’ll start with 13 “improvers” and 17 non-improvers, and we’ll randomly assign 15 of these 30 subjects to the dolphin therapy group and the remaining 15 to the control group.
Now the practical question is, how do we do this random assignment? One answer is to use cards, such as playing cards:
• Take a regular deck of 52 playing cards and choose 13 cards to represent the 13 improvers in the study (e.g., the 12 jacks, queens, and kings, plus one of the aces), and choose 17 of the number cards (twos through tens) to represent the 17 non-improvers.
• Shuffle the cards well and randomly deal out 15 to be the dolphin therapy group.
• Construct the 2×2 table to show the number of improvers and non-improvers in each group (where clearly nothing different happened to those in “group A” and those in “group B” – any differences that arise are due purely to the random assignment process – a “could have been” distribution).
(g) Report your resulting table and calculate the conditional proportions that improved in each group and the difference (dolphin-control) between them. (If working with a partner, repeat this process a second time.)
Simulated table:
Difference in conditional proportions (dolphin – control):
(h) Is the result of this simulated random assignment as extreme as the actual results that the researchers obtained? That is, did 10 or more of the subjects in the dolphin group improve in this simulated table?
But what we really need to know is “how often” we get a result as extreme as the actual study by chance alone, so we need to repeat this random assignment process (with the 30 playing cards) many, many times. This would be very tedious and time-consuming with cards, so let’s turn to technology.
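The applet in the next question automates this card shuffling; a minimal script that does the same thing (an illustrative Python sketch, not the applet itself) is shown here:

```python
import random

cards = [1] * 13 + [0] * 17    # 13 "improver" cards and 17 "non-improver" cards
observed = 10                  # improvers in the actual dolphin therapy group
num_reps = 1000
extreme = 0

for _ in range(num_reps):
    random.shuffle(cards)
    dolphin_improvers = sum(cards[:15])   # deal 15 cards to the dolphin group
    if dolphin_improvers >= observed:
        extreme += 1

print(f"Repetitions with {observed} or more improvers in the dolphin group: {extreme}")
print(f"Approximate p-value: {extreme / num_reps:.4f}")
```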
(i) Access the “Dolphin Study” applet. Click on/mouse over the deck of cards, and notice there are 30 with 13 face cards and 17 non-face cards. Click on “Randomize” and notice that the applet does what you have done: shuffle the 30 cards and deal out 15 for the “dolphin therapy” group, separating face cards from non-face cards. The applet also determines the 2×2 table for the simulated results and creates a dotplot of the number of improvers (face cards) randomly assigned to the “dolphin therapy” group.
(j) Press Randomize again. Did you obtain the same table of results this time?
(k) Now uncheck the “Animate” box, enter 998 for the number of repetitions, and press Randomize. This produces a total of 1000 repetitions of the simulated random assignment process, the “what if” distribution under the null model of no treatment effect. What are the observational units and the variable in this graph? That is, what does each individual dot represent?
(l) Based on the “what if” distribution dotplot, does it seem like the actual experimental results (10 improvers in the dolphin group) would be surprising to arise solely from the random assignment process under the null model that dolphin therapy is not effective? Explain.
(m) Click on the “Approx p-value” button to determine the proportion of these 1000 simulated random assignments that are as (or more) extreme as the actual study. (It should appear in red under the dotplot.) What region is it considering? Is this p-value small enough so that you would consider such an outcome (or more extreme) surprising under the null model that dolphin therapy is not effective? (Recall the cut-offs specified with the helper/hinderer study.)
(n) In light of your answers to the previous two questions, would you say that the results that the researchers obtained provide strong evidence that dolphin therapy is more effective (i.e., that the null model is not correct)? Explain your reasoning, based on your simulation results, including a discussion of the purpose of the simulation process and what information it revealed to help you answer this research question.
The three S’s are used again here.
• Statistic: We can use either the number of successes in group A or the difference in the conditional proportions.
• Simulation: We assumed there was no treatment effect, so that the numbers of successes and failures were not influenced by which group individuals were assigned to, and we simulated the random assignment of the subjects to the treatment groups.
• Strength of evidence: Again our observed statistic (.467) was in the tail of the “what if” distribution, providing strong evidence (p-value < .05) against the null model.
Summarize:
(o) Are you willing to draw a cause-and-effect conclusion about dolphin therapy and depression based on these results? Justify your answer based on the design of the study.
(p) Are you willing to generalize these conclusions to all people who suffer from depression? How about to all people with mild to moderate depression in this age range? Justify your answer based on the design of the study.
Activity 19: Sleep Deprivation
Researchers have established that sleep deprivation has a harmful effect on visual learning. But do these effects linger for several days, or can a person “make up” for sleep deprivation by getting a full night’s sleep in subsequent nights? A recent study (Stickgold, James, and Hobson, 2000) investigated this question by randomly assigning 21 subjects (volunteers between the ages of 18 and 25) to one of two groups: one group was deprived of sleep on the night following training and pre-testing with a visual discrimination task, and the other group was permitted unrestricted sleep on that first night. Both groups were then allowed as much sleep as they wanted on the following two nights. All subjects were then re-tested on the third day. Subjects’ performance on the test was recorded as the minimum time (in milliseconds) between stimuli appearing on a computer screen for which they could accurately report what they had seen on the screen. The sorted data and dotplots presented here are the improvements in those reporting times between the pre-test and post-test (a negative value indicates a decrease in performance):
Sleep deprivation (n = 11): -14.7, -10.7, -10.7, 2.2, 2.4, 4.5, 7.2, 9.6, 10.0, 21.3, 21.8
Unrestricted sleep (n = 10): -7.0, 11.6, 12.1, 12.6, 14.5, 18.6, 25.2, 30.5, 34.5, 45.6
[Figure: dotplots of the improvement scores for the sleep deprivation and unrestricted sleep groups]
(a) Does it appear that subjects who got unrestricted sleep on the first night tended to have higher improvement scores than subjects who were sleep deprived on the first night? Explain briefly.
(b) Calculate the median of the improvement scores for each group. Is the median improvement higher for those who got unrestricted sleep? By a lot?
The dotplots and medians provide at least some support for the researchers’ conjecture that sleep deprivation still has harmful effects three days later. Nine of the ten lowest improvement scores belong to subjects who were sleep deprived, and the median improvement score was more than 12 milliseconds higher in the unrestricted sleep group (16.55 ms vs. 4.50 ms). The average (mean) improvement scores reveal an even larger advantage for the unrestricted sleep group (19.82 ms vs. 3.90 ms). But before we conclude that sleep deprivation is harmful three days later, we should once again consider this question:
(c) Is it possible that there’s really no harmful effect of sleep deprivation, and random chance alone produced the observed differences between these two groups?
You will notice that this question is very similar to questions asked of the helper/hinderer toy study and the dolphin study. Once again, the answer is yes, this is indeed possible. Also once again, the key question is how likely it would be for random chance to produce experimental data that favor the unrestricted sleep group by as much as the observed data do.
We will aim to answer that question using the same simulation analysis strategy that we used with toys and dolphins: the 3 S’s:
1. Statistic: We need a measure of the difference in the centers of the two groups, such as the difference in medians (or means).
2. Simulation: We will assume that there is no negative effect of sleep deprivation (the null model) and replicate the random assignment of the 21 subjects (and their improvement scores) between the two groups. We will repeat this random assignment a large number of times, in order to get a sense for what’s expected and what’s surprising for values of the statistic under the null model.
3. Strength of evidence: If the result observed by the researchers is in the tail of the null model’s “what if” distribution, we will reject that null model.
But there’s an important difference here as opposed to those earlier studies: the data that the researchers recorded on each subject are not yes/no responses (choose helper or hinderer, depression symptoms improved or not). In this experiment the data recorded on each subject are numerical measurements: improvements in reaction times between pre-test and post-test. So, the complication this time is that after we do the random assignment, we must do more than just count yes/no responses in the groups.
What we will do instead is, after each new random assignment, calculate the median improvement in each group and determine the difference between them. After we do this a large number of times, we will have a good sense for whether the difference in group medians actually observed by the researchers is surprising under the null model of no real difference between the two groups (no treatment effect). Note that we could just as easily use the means instead of the medians, which is a very nice feature of this analysis strategy.
One way to implement the simulated random assignment is to use 21 index cards, with one subject’s improvement score written on each card. Shuffle the cards and randomly deal out 11 for the sleep deprivation group, with the remaining 10 going to the unrestricted sleep group.
(d) Carry out this random dealing with your own set of 21 index cards. Calculate the median improvement score for each group, and then calculate the difference between these group medians, being sure that we all subtract in the same order: unrestricted sleep minus sleep deprived.
Sleep deprivation group median: Unrestricted sleep group median:
Difference in group medians (unrestricted sleep minus sleep deprived):
(e) Combine your difference in group medians with those of your classmates. Produce a dotplot below. (Be sure to label the axis appropriately.)
(f) How many of these differences in group medians are positive, and how many are negative? How many equal exactly zero?
(g) Does it look like the distribution of these differences centers around zero? Explain why zero makes sense as the center of this distribution.
(h) How does it look so far? Granted, we want to do hundreds of repetitions, but so far, have any of the simulated results been as extreme as the researchers’ actual result (a difference of 12.05)? Does this suggest that there is a statistically significant difference between the groups? Explain.
Now we will turn to technology to simulate these random assignments much more quickly and efficiently. We’ll ask the computer or calculator to loop through the following tasks, for as many repetitions as we might request:
• “Could have been” distribution: Randomly assign group categories to the 21 improvement scores, 11 for the sleep deprivation group and 10 for the unrestricted sleep group.
• Calculate the median improvement score for each group.
• Calculate the difference in the group medians.
• “What if” distribution: Store that difference along with the others.
Then when the computer or calculator has repeated that process for, say, 1000 repetitions, we will produce a dotplot or histogram of the “what if” distribution and count how many (and what proportion) of the differences are at least as extreme as the researchers’ actual result.
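The applet used below automates this loop; as a rough illustration (not part of the original handout), the same tasks can be expressed in a few lines of Python using the improvement scores listed above:

import random
from statistics import median

# Improvement scores from the study (see the data listing above)
deprived = [-14.7, -10.7, -10.7, 2.2, 2.4, 4.5, 7.2, 9.6, 10.0, 21.3, 21.8]
unrestricted = [-7.0, 11.6, 12.1, 12.6, 14.5, 18.6, 25.2, 30.5, 34.5, 45.6]
all_scores = deprived + unrestricted

observed_diff = median(unrestricted) - median(deprived)   # 16.55 - 4.50 = 12.05

reps = 1000
diffs = []
for _ in range(reps):
    random.shuffle(all_scores)                   # re-randomize the 21 subjects
    new_deprived = all_scores[:11]               # 11 to the sleep deprivation group
    new_unrestricted = all_scores[11:]           # remaining 10 to unrestricted sleep
    diffs.append(median(new_unrestricted) - median(new_deprived))

# Approximate p-value: proportion of simulated differences at least as extreme
p_value = sum(d >= observed_diff for d in diffs) / reps
print("observed difference in medians:", observed_diff)
print("approximate p-value:", p_value)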
(i) Open the Randomization Tests applet (available at ), and notice that the experimental data from this study already appear. Click on “Randomize” to re-randomize the 21 improvement scores between the two groups. Report the new difference in group medians (again subtracting in the same order: unrestricted sleep minus sleep deprived).
(j) Click “Randomize” again to repeat the re-randomization process. Did you obtain the same result as before?
(k) Now un-check the “Animate” feature and ask for 998 more re-randomizations. Look at the distribution of the 1000 simulated differences in group medians. Is the center where you would expect? Does the shape have a recognizable pattern? Explain.
(l) Count how many of your 1000 simulated differences in group medians are as extreme (or more extreme), in the direction favoring the unrestricted sleep group, as the researchers’ actual result (a difference of 12.05). (You can do this by clicking on “Count samples” and entering 12.05 as the appropriate value to count beyond.) Also determine the approximate p-value by calculating the proportion of your simulated differences that are at least this extreme.
(m) Do these simulation analyses reveal that the researchers’ data provide strong evidence that sleep deprivation has harmful effects three days later? Explain the reasoning process by which your conclusion follows from the simulation analyses.
(n) Regardless of whether you found the difference between the median improvement scores in these two groups to be statistically significant, is it legitimate to draw a cause-and-effect conclusion between sleep deprivation and lower improvement scores? Explain. [Hint: Ask yourself whether this was a randomized experiment or an observational study.]
Further exploration:
(o) Redo this simulation analysis, using the difference in group means rather than medians. When you re-run the simulation, you will have to tell the computer or calculator to calculate means rather than medians. Report the approximate p-value, and summarize your conclusion. Also describe how (if at all) your conclusion from this analysis differs from the one based on group medians.
What can we conclude from this sleep deprivation study? The data provide very strong evidence that sleep deprivation does have harmful effects on visual learning, even three days later. Why do we conclude this? First, the p-value is small (about .02 for medians, .007 for means). This means that if there really were no difference between the groups (the null model), then a result at least as extreme as the researchers found (favoring the unrestricted sleep group by that much or more) would happen in about 1-2% of random assignments by chance alone. Since this would be a very rare event if the null model were true, we have strong evidence that these data did not come from a process with no treatment effect. So, we will reject the null model and conclude that those with unrestricted sleep have significantly higher improvement scores “on average,” compared to those under the sleep deprivation condition.
But can we really conclude that sleep deprivation is the cause of the lower improvement scores? Yes, because this is a randomized experiment, not an observational study. The random assignment used by the researchers should balance out between the treatment groups any other factors that might be related to subjects’ performances. So, if we rule out luck-of-the-draw as a plausible explanation for the observed difference between the groups (which the small p-value does rule out), the only explanation left is that sleep deprivation really does have harmful effects.
One important caveat to this conclusion concerns how widely this finding can be generalized. The subjects were volunteers between the ages of 18 and 25 in one geographic area. They were not a random sample from any population, so we should be cautious before generalizing the results too broadly.
Mathematical note: If you are wondering whether an exact mathematical calculation of the p-value is possible here, the answer is yes. But the calculation is more difficult than with yes/no variables. It turns out that there are 352,716 different ways to assign 21 subjects into one group of 11 and another group of 10. Of these 352,716 possible randomizations, it turns out that 2533 of them produce a difference in group means (favoring the unrestricted sleep group) of at least 15.92. The exact p-value is therefore 2533/352,716 ≈ .0072. The exact randomization distribution is shown here:
[Figure: exact randomization distribution of the difference in group means]
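The counting in this note is easy to check (this verification is not part of the original handout); for example, in Python:

from math import comb

total = comb(21, 11)     # number of ways to split 21 subjects into groups of 11 and 10
print(total)             # 352716
print(2533 / total)      # exact p-value, approximately 0.0072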
Activity 6: Memorizing Letters Revisited
Recall the study that randomly assigned subjects to two types of grouping of letters to determine whether there is a difference in how many letters subjects remembered in order.
(a) Explain how you would use the Randomization Tests applet to explore the significance of the data we collected for this study.
(b) Perform a two-sample t-test for these data. Are the conditions met for this procedure? Compare the p-values obtained by the two approaches.
(c) Summarize your conclusions for this study, being sure to comment on significance, confidence, causation, and generalizability.
Activity 20: Cat Households
A sample survey of 47,000 households in 2007 found that 32.4% of American households own a pet cat.
a) Is this number a parameter or a statistic? Explain, and indicate the symbol used to represent it.
b) Use technology (possibly the Test of Significance Calculator applet) to conduct a test of whether the sample data provide evidence that the population proportion who own a pet cat differs from one-third. State the hypotheses, and report the test statistic and p-value. Draw a conclusion in the context of this study.
c) Use technology to produce a 99% confidence interval (CI) for the population proportion who own a pet cat. Interpret this interval.
d) Do the sample data provide very strong evidence that the population proportion who own a pet cat is not one-third? Explain whether the p-value or the CI helps you to decide.
e) Do the sample data provide strong evidence that the population proportion who own a pet cat is very different from one-third? Explain whether the p-value or the CI helps you to decide.
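One possible technology route here (a sketch, not the workshop’s own materials) is to compute the large-sample one-proportion z-test and 99% z-interval directly from the summary numbers given above, using only the Python standard library:

from math import sqrt
from statistics import NormalDist

n = 47000
p_hat = 0.324            # sample proportion of households owning a pet cat
p0 = 1 / 3               # hypothesized population proportion

# Test statistic and two-sided p-value for H0: pi = 1/3 versus Ha: pi != 1/3
z = (p_hat - p0) / sqrt(p0 * (1 - p0) / n)
p_value = 2 * NormalDist().cdf(-abs(z))
print("z =", round(z, 2), "  two-sided p-value =", p_value)    # z is roughly -4.3

# 99% confidence interval for the population proportion
z_star = NormalDist().inv_cdf(0.995)
margin = z_star * sqrt(p_hat * (1 - p_hat) / n)
print("99% CI:", (round(p_hat - margin, 4), round(p_hat + margin, 4)))

With such a large sample the test statistic is far in the tail, yet the interval indicates that the population proportion differs from one-third by no more than about 1.5 percentage points, which is the contrast that parts (d) and (e) are designed to draw out.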
Activity 21: Female Senators
Suppose that an alien lands on Earth, notices that there are two different sexes of the human species, and sets out to estimate the proportion of humans who are female. Fortunately, the alien had a good statistics course on its home planet, so it knows to take a sample of human beings and produce a confidence interval. Suppose that the alien happened upon the members of the 2010 U.S. Senate as its sample of human beings, so it finds 17 women and 83 men in its sample.
(a) Use this sample information to form a 95% confidence interval for the actual proportion of all humans who are female.
(b) Is this confidence interval a reasonable estimate of the actual proportion of all humans who are female?
(c) Explain why the confidence interval procedure fails to produce an accurate estimate of the population parameter in this situation.
(d) It clearly does not make sense to use the confidence interval in (a) to estimate the proportion of women on Earth, but does the interval make sense for estimating the proportion of women in the 2010 U.S. Senate? Explain your answer.
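For reference (again not part of the original handout), the interval in (a) is just the usual one-proportion z-interval computed from 17 women in a sample of 100 senators:

from math import sqrt

p_hat = 17 / 100                         # proportion of women in the alien's "sample"
margin = 1.96 * sqrt(p_hat * (1 - p_hat) / 100)
print((round(p_hat - margin, 3), round(p_hat + margin, 3)))    # roughly (0.096, 0.244)

The arithmetic is perfectly correct, yet the interval clearly misses the actual proportion of females among all humans, which is the point of the discussion that follows.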
This example illustrates some important limitations of inference procedures.
• First, they do not compensate for the problems of a biased sampling procedure. If the sample is collected from the population in a biased manner, the ensuing confidence interval will be a biased estimate of the population parameter of interest.
• A second important point to remember is that confidence intervals and significance tests use sample statistics to estimate population parameters. If the data at hand constitute the entire population of interest, then constructing a confidence interval from these data is meaningless. In this case, you know precisely that the proportion of women among the 2010 U.S. Senators is .17 (exactly!), so it is senseless to construct a confidence interval from these data.
Activity 22: Game Show Prizes
We will examine data on the prices of a sample of 208 prizes used on The Price is Right game show, during the contestant selection phase of the game (when contestants bid on prizes to see who comes closest). These data are the actual prices of those prizes.
(a) Produce (and submit) a histogram of the distribution of these prices. Comment on what this histogram reveals about the distribution. [Be sure to relate your comments to the context, and refer to the shape, center, spread, and outliers (if any).]
(b) Produce a 99% confidence interval for the population mean price.
(c) Comment on whether the technical conditions that underlie the validity of these t-procedures appear to be met. Explain.
(d) Determine how many and what proportion of these 208 prizes have a price that falls within the confidence interval in (b).
(e) Should this proportion (in (d)) be close to .99? Explain why or why not.
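The prize-price data are distributed separately with the workshop materials; as a sketch only (using a hypothetical file name, prices.txt, with one price per line, and scipy for the t critical value), parts (b) and (d) could be computed as follows:

from math import sqrt
from statistics import mean, stdev
from scipy import stats      # used only for the t critical value

# Hypothetical file containing the 208 prize prices, one value per line
prices = [float(line) for line in open("prices.txt")]

n = len(prices)
xbar, s = mean(prices), stdev(prices)
t_star = stats.t.ppf(0.995, df=n - 1)          # 99% confidence, n - 1 degrees of freedom
margin = t_star * s / sqrt(n)
print("99% CI for mean price:", (xbar - margin, xbar + margin))

# Part (d): how many individual prices fall inside this confidence interval?
inside = sum(xbar - margin <= p <= xbar + margin for p in prices)
print(inside, "of", n, "prices fall inside the interval")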
Many students have trouble understanding what “95% confidence” means and recognizing that it refers to a long-run property of the inference method. Even more importantly, many students do not understand which parameter a confidence interval is estimating.
Activity 23: Government Spending
One advantage of the simulation approach is that it allows you to extend analyses to more complicated statistics.
Some of the questions on the 2004 General Social Survey (GSS) concerned political viewpoint and opinion about the federal government’s spending priorities. The following two-way table presents a summary of responses with regard to spending on the environment:
| |Liberal |Moderate |Conservative |Total |
|Too Much |1 |17 |32 |50 |
|About Right |27 |80 |91 |198 |
|Too Little |127 |158 |113 |398 |
|Total |155 |255 |236 |646 |
(a) Produce numerical and graphical summaries to explore whether opinions on federal spending on the environment appear to vary with political leaning. Summarize your results.
(b) How can we apply the 3S approach to determine whether the association between political inclination and opinions on environmental spending is statistically significant?
Statistic: A common test statistic used in this scenario is the chi-square statistic:
chi-square = sum over all cells of (observed count - expected count)^2 / (expected count)
This turns out to be 52.12 for the above table.
(c) Do we expect this test statistic to be large or small when the null model is true? Explain.
Simulation: One way to carry out a simulation under the null model that there is no association between these two variables is to keep one variable fixed, randomly shuffle (permute) the values of the other, form the resulting “could have been” two-way table, and calculate the chi-square statistic for that table. Repeating this for a large number of repetitions builds up the “what if” distribution of the statistic.
Strength of evidence: We will have strong evidence against the null model if our observed statistic, 52.12, is in the tail of the “what if” distribution.
While not a built-in function, many programs (even statistics packages) can be convinced to do this simulation for you. For example, in Minitab you can write a macro like the following (here it is assumed that the raw responses sit in columns c6 and c7 and that the expected cell counts have already been placed in c17):
GMACRO
# Note: a Minitab global macro normally has a template (name) line after GMACRO.
# Assumes: c6 = political viewpoint and c7 = opinion on environmental spending (646 raw responses),
# and c17 = the nine expected cell counts from the observed table, stacked in the same order as below.
Do k1=1:1000
# Shuffle the 646 viewpoint labels into c8 (sampling all rows without replacement gives a permutation)
sample 646 c6 c8
# Tally the "could have been" two-way table
let c11(1)=sum(c7="too little" & c8="liberal")
let c11(2)=sum(c7="about right" & c8="liberal")
let c11(3)=sum(c7="too much" & c8="liberal")
let c12(1)=sum(c7="too little" & c8="moderate")
let c12(2)=sum(c7="about right" & c8="moderate")
let c12(3)=sum(c7="too much" & c8="moderate")
let c13(1)=sum(c7="too little" & c8="conservative")
let c13(2)=sum(c7="about right" & c8="conservative")
let c13(3)=sum(c7="too much" & c8="conservative")
stack c11 c12 c13 c16
# Chi-square statistic for this repetition, stored in row k1 of c19
let c18=(c16-c17)*(c16-c17)/c17
let c19(k1)=sum(c18)
ENDDO
ENDMACRO
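An equivalent sketch in Python (not part of the workshop materials) makes the logic of the macro explicit: rebuild the raw responses from the two-way table above, shuffle one variable to break any association, and recompute the chi-square statistic for each repetition.

import random

# Observed counts from the 2004 GSS table above (opinion, political viewpoint)
counts = {
    ("Too Little", "Liberal"): 127, ("Too Little", "Moderate"): 158, ("Too Little", "Conservative"): 113,
    ("About Right", "Liberal"): 27, ("About Right", "Moderate"): 80, ("About Right", "Conservative"): 91,
    ("Too Much", "Liberal"): 1, ("Too Much", "Moderate"): 17, ("Too Much", "Conservative"): 32,
}

# Rebuild the 646 individual responses
opinions, viewpoints = [], []
for (op, vp), k in counts.items():
    opinions += [op] * k
    viewpoints += [vp] * k

def chi_square(ops, vps):
    n = len(ops)
    row_tot = {op: ops.count(op) for op in set(ops)}
    col_tot = {vp: vps.count(vp) for vp in set(vps)}
    observed = {}
    for cell in zip(ops, vps):
        observed[cell] = observed.get(cell, 0) + 1
    return sum((observed.get((op, vp), 0) - row_tot[op] * col_tot[vp] / n) ** 2
               / (row_tot[op] * col_tot[vp] / n)
               for op in row_tot for vp in col_tot)

observed_stat = chi_square(opinions, viewpoints)       # about 52.1 for this table

reps = 1000
extreme = 0
for _ in range(reps):
    random.shuffle(viewpoints)                         # permute one variable (no association)
    if chi_square(opinions, viewpoints) >= observed_stat:
        extreme += 1

print("observed chi-square:", round(observed_stat, 2))
print("approximate p-value:", extreme / reps)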
(d) Below are example results of this simulation of the chi-square statistic. What are your conclusions? Justify your answers.
[Figure: distribution of simulated chi-square statistics for the environment spending table under the null model]
The following segmented bar graph pertains to similar data from the 2004 GSS when people were asked about their opinion regarding the federal government’s spending on the space program.
[Figures: opinion about federal spending on the space program, by political viewpoint]
(e) With which issue does opinion about government spending come closer to being independent of political viewpoint – environment or space program? Explain how you can tell.
(f) Do you expect the p-value for these data to be larger or smaller than for environment spending? Explain.
(g) Below is the histogram for the simulation of the chi-square statistics for the space spending. How do the results under the null model compare to those in (d)? Is this what you expected?
[Figure: distribution of simulated chi-square statistics for the space program spending table under the null model]
(h) The chi-square value for the observed data equals 7.008. How does the p-value compare? Is this what you expected? Explain.
General Advice
▪ Emphasize the process of statistical investigations, from posing questions to collecting data to analyzing data to drawing inferences to communicating findings.
▪ Use simulation, both tactile and technology-based, to explore concepts of inference and randomness. Minimize probability for its own sake.
▪ Draw connections between how data are collected (e.g., random assignment, random sampling) and scope of conclusions to be drawn (e.g., causation, generalizability).
▪ Use real data from genuine studies, as well as data collected on students themselves.
▪ Present important studies (e.g., draft lottery) and frivolous ones (e.g., flat tires) and especially studies of issues that are directly relevant to students (e.g., sleep deprivation).
▪ Lead students to “discover” and tell you important principles (e.g., association does not imply causation).
▪ Keep in mind the research question when analyzing data.
▪ Graphical displays can be very useful.
▪ Summary statistics (measures of center and spread) are helpful but don’t tell whole story; consider entire distribution.
▪ Develop graph-sense and number-sense by always thinking about the context of the data.
▪ Use technology to reduce the burden of rote calculations, both for analyzing data and exploring concepts (e.g., sample size effects).
o Use simulations and visualizations to motivate important formulas.
▪ Emphasize cautions and limitations with regard to inference procedures
o Statistical vs. practical significance
o Importance of random sampling or random assignment
o Estimate size of effect with confidence interval
o Confidence interval is not prediction interval
Morals from Specific Activities
Activity 1: Naughty or Nice?
▪ Use real data/scientific studies
o Emphasize the process of statistical investigation
▪ Stress conceptual understanding
o Idea of p-value on day 1/in one day!
▪ Foster active learning
o You are a dot on the board
▪ Use technology
o Could this have happened “by chance alone”?
o What if only 10 infants had picked the helper?
Activity 2: Sampling Words
• Random Sampling eliminates human selection bias so the sample will be fair and unbiased/representative of the population.
• While increasing the sample size improves precision, this does not decrease bias.
Activity 3: Night-Lights and Near-Sightedness
• Students can tell you that association is not the same as causation!
• Students need practice clearly identifying how a confounding variable is linked to both the explanatory and response variables.
Activity 4: Have a Nice Trip
• The goal of random assignment is to create groups that can be considered equivalent on all lurking variables.
o If we believe the groups are equivalent prior to the start of the study, this allows us to eliminate all potential confounding variables as a plausible explanation for any significant differences in the response after treatments are imposed.
Activity 5: Cursive Writing
• Scope of conclusions depends on how data were collected
o Random sampling: generalize to population
o Random assignment: cause/effect between explanatory and response variables
Activity 6: Memorizing Letters
• Quick, simple experimental data collection
o Highlighting critical aspects of effective study design
• Can return to the data several times in the course
Activity 7: Matching Variables to Graphs
• Develop graph-sense
• Learn to justify their opinions
o Consistency, completeness
• Appreciating Variability
o Be able to find and explain patterns in the data
Activity 8: Rower Weights
• Context is crucial for understanding/interpreting distribution
• Investigate effects of outliers
Activity 9: Cancer Pamphlets
• Look at the data
• Think about the research question
• Numerical summaries don’t tell the whole story
o “Median isn’t the message” – Stephen Jay Gould
Activity 10: Draft Lottery
• Statistics matters!
• Summaries can illuminate
• Randomization can be difficult
Activity 11: Televisions and Life Expectancy
• Don’t jump to conclusions from observational studies
• The association is real but consider carefully the wording of conclusions
Activity 12: Random Babies
• Goal: Interpretation of probability, expected value in terms of long-run relative frequency, average value
• 30% chance of rain…
• First simulate, then do theoretical analysis
• Able to list sample space
• Short cuts when outcomes are actually equally likely
• Simple, fun applications of basic probability
Activity 13: AIDS Testing
• Intuition about conditional probability can be very faulty
• Confront misconception head-on with surprising result
• Conditional probability can be explored through two-way tables
Activity 14: Reese’s Pieces
• Study randomness to develop intuition for statistical ideas
o Not probability for its own sake
• Always precede technology simulations with physical ones
• Apply more than derive formulas
• Sampling distribution is key, but very challenging, concept
Activity 15: Which Tire?
• Fun simple data collection
• Effect of sample size
o hard to establish result with small samples
• Never “accept” null hypothesis
Activity 16: Kissing the Right Way
• Interval estimation is as important as (more important than?) significance
• Confidence interval as set of plausible (not rejected) values
• Interpretation of margin-of-error
Activity 17: Reese’s Pieces (cont.)
• Interpretation of confidence level
o In terms of long-run results from taking many samples
• Effect of confidence level, sample size on confidence interval
Activity 18: Dolphin Therapy
• Re-emphasize meaning of significance and p-value
o Use of randomness in study
• Focus on statistical process, scope of conclusions
Activity 19: Sleep Deprivation
• Randomization tests emphasize core logic of inference
o Take advantage of modern computing power
o Easy to generalize to other statistics
Activity 20: Cat Households
• Statistical significance is not practical significance
o Especially with large sample sizes
• Accompany significance tests with confidence intervals whenever possible
Activity 21: Female Senators
• Always consider sampling procedure
o Randomness is key assumption
o Garbage in, garbage out
• Inference is not always appropriate!
o Sample = population here
Activity 22: Game Show Prizes
• Prediction intervals vs. confidence intervals
• Pay constant attention to what the “it” is
Activity 23: Government Spending
• Simulating a randomization test also applies in more complicated scenarios
o Can justify theoretical basis for common (e.g., chi-square) procedures
• Don’t neglect simple graphical, numerical analysis
Implementing Active Learning
We offer the following suggestions for implementing active learning in your classroom.
Suggestion #1: Take control of the course
• Not “control” in usual sense of standing at front dispensing information
• But still need to establish structure, inspire confidence that activities, self-discovery will work
• Be pro-active in approaching students
o Don’t wait for students to ask questions of you
o Ask them to defend their answers
o Be encouraging
• Instructor as facilitator/manager
Suggestion #2: Collect data from students
• Leads them to personally identify with data, analysis; gives them ownership
• Collect anonymously
• Can do out-of-class
• E.g., Matching variables to graphs
Suggestion #3: Encourage predictions from students
• Fine (perhaps better…) to guess wrong, but important to take a stake in some position
• Directly confront common misconceptions
o Have to “convince” them they are wrong (e.g., Gettysburg address) before they will change their way of thinking
• E.g., Variability vs. bumpiness, Effect of sample size, AIDS testing
Suggestion #4: Allow students to discover, tell you findings
• “I hear, I forget. I see, I remember. I do, I understand.” -- Chinese proverb
• E.g., Association vs. causation (televisions and life expectancy)
Suggestion #5: Precede technology simulations with tactile/ concrete/hands-on simulations
• Enables students to understand process being simulated
• Prevents technology from coming across as mysterious “black box”
• E.g., Gettysburg Address (actual before applet)
Suggestion #6: Promote collaborative learning
• Students can learn from each other
o Better yet from “arguing” with each other
• Students bring different background knowledge
• E.g., Rowers’ weights, Matching variables to graphs
Suggestion #7: Provide lots of feedback
• Danger of “discovering” wrong things
• Provide access to “model” answers after the fact
o Could write “answers” on board or put on web between classes
o Could lead discussion/debriefing afterward
Suggestion #8: Follow activities with related assessments
• Or could be perceived as “fun and games” only
o Require summary paragraphs in their own words
o Clarify early (e.g., quizzes) that they will be responsible for the knowledge
• Assessments encourage students to grasp concept
o Can also help them to understand concept
• E.g., Create Simpson’s paradox example
Suggestion #9: Inter-mix lectures with activities
• One approach: Lecture on a topic after students have performed activity
o Students better able to process, learn from lecture after having grappled with issues themselves first
• Another approach: Engage in activities toward end of class period
o Often hard to re-capture students’ attention afterward
• Another approach: Move some of the “pre-activity” work outside of class (e.g., read study background, technology)
• Need frequent variety
Suggestion #10: Do not under-estimate ability of activities to “teach” material
• No dichotomy between “content” and “activities”
• Some activities address many ideas
• E.g. “Gettysburg Address” activity
o Population vs. sample, parameter vs. statistic
o Bias, variability, precision
o Random sampling, effect of sample/population size
o Sampling variability, sampling distribution, Central Limit Theorem (consequences and applicability)
Suggestion #11: Have fun!
Final Exams
We offer the following final exams that we have recently given. We have several goals on our exams:
• Carefully match the course goals
• Be cognizant of any review materials you have given the students (“from chapter 2, you should be able to active verb…”)
• Use real data and genuine studies
• Provide students with guidance for how long they should spend per problem
• Use multiple parts to one context but aim for independent parts (if a student cannot answer part (a) they may still be able to answer part (b))
• Use open-ended questions requiring written explanation
• Aim for at least 50% conceptual questions rather than pure calculation questions
• (Occasionally) Expect students to think, integrate, apply beyond what they have learned.
o Turn questions around (e.g., what research question could lead to this output)
o Explanation with multiple choice (e.g., pick one wrong answer and explain why it is wrong)
Sample Final Exam A
1. For each of the following quantities, indicate whether it can NEVER be negative or can SOMETIMES be negative. (No explanations are necessary.)
a) z–score __________ b) p-value __________
c) Slope coefficient __________ d) Correlation coefficient __________
e) Difference in sample means __________ f) Sample size __________
g) Sample proportion __________ h) Coefficient of determination __________
i) Standard deviation __________ j) Inter-quartile range __________
2. Some of the statistical inference techniques in this course include:
A. One-sample t-procedures for a mean
B. Two-sample t-procedures for comparing means
C. Paired-sample t-procedures
D. One-sample z-procedures for a proportion
E. Two-sample z-procedures for comparing proportions
F. Chi-square procedures for two-way tables
G. Linear regression procedures
For each of the following questions, identify (by letter) the procedure that you would use to investigate that question. Also indicate (either in symbols or in words) the null and alternative hypotheses to be tested in each case.
a) A recent headline announced that mountain biking may reduce fertility in men. The article was based on a study which found that 90% of the avid mountain bikers in the study had low sperm counts, and 26% of the non-bikers in the study had low sperm counts.
b) Some critics of the military have claimed that members of the U.S. Armed Forces have an average IQ less than 100. Suppose that you take a random sample of members of the U.S. Armed Forces and measure their IQs, in order to test this claim.
c) A recent investigation reported in the November 2007 issue of Nature aimed at assessing whether infants take into account an individual’s actions towards others in evaluating that individual as appealing or aversive, perhaps laying the foundation for social interaction. In one experiment, 10-month-old infants were shown a “climber” character (a piece of wood with “google” eyes glued onto it) that could not make it up a hill in two tries. Then they were alternately shown two scenarios for the climber’s next try, one where the climber was pushed to the top of the hill by another character (“helper”) and one where the climber was pushed back down the hill by another character (“hinderer”). (They were alternately shown these two scenarios several times.) Then the child was presented with both pieces of wood (the helper and the hinderer) and asked to pick one to play with. Researchers kept track of which toy was chosen by each child and then tested whether the sample data provided evidence that infants would select the helper toy more than half the time.
d) A waitress kept track of how large a tip she received from each table of people that she served. She wanted to see if she received a significantly higher tip on average when she drew a smiley face on the check as compared to when she did not.
3. Short answer:
a) Consider three students with the following distributions of 22 quiz scores:
Alexis: 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8
Barack: 5, 6, 6, 6, 6, 7, 7, 7, 7, 7, 7, 8, 8, 8, 8, 8, 8, 9, 9, 9, 9, 10
Charlene: 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10
Which student would have the smallest standard deviation? Which would have the largest standard deviation? Explain briefly, but do not perform any calculations.
b) A college professor asked students in his class to report how many miles they were from where they were born. The distribution was sharply skewed to the right with some very large outliers. How would the mean distance compare to the median distance?
c) It can be shown that the sum of the residuals from a least squares regression line equals zero. Does it follow that the median of the residuals must equal zero? (No explanation is necessary.)
d) It can be shown that the sum of the residuals from a least squares regression line equals zero. Does it follow that the mean of the residuals must equal zero? (No explanation is necessary.)
e) Suppose that every student in this class scores ten points lower on exam2 than on exam1 in this course. What would be the value of the correlation coefficient between exam1 score and exam2 score? Explain briefly.
f) Suppose that you conduct a hypothesis test and decide to reject the null hypothesis at the α = .01 significance level. If you decided instead to use the α = .02 significance level, would you reject the null hypothesis, fail to reject the null hypothesis, or do you not have enough information to say? Explain briefly.
g) Suppose that the IQs of San Luis Obispo residents follow a normal distribution with mean 105 and standard deviation of 12. Which is more likely: that a randomly selected resident will have an IQ above 120 or that the average IQ in a random sample of 10 residents will exceed 120? Explain briefly. (You do not need to perform any calculations.)
h) Suppose that you want to test whether more than one-third of Cal Poly students use a bicycle to get to campus. You take a random sample of students and find that 40% of the sample uses a bicycle to get to campus. What additional information do you need in order to calculate a p-value to determine if this sample result provides strong evidence that more than one-third of all students use a bicycle to get to campus?
i) What’s wrong with expressing a null hypothesis as H0: [pic]?
j) In an experiment that compared acupuncture to conventional therapy for treating patients with chronic low back pain, a 95% confidence interval for the difference in population proportions experiencing improvement was calculated to be (.135, .265). Does this suggest that the difference in improvement proportions between the two groups is statistically significant? Explain briefly.
4. For each of the following situations, indicate whether a matched-pairs analysis would be appropriate. (Answer yes or no; you do not need to explain.)
a) You want to investigate whether left-handed people tend to live less long than right-handed people. You take a random sample of obituary notices, determine whether the person was left- or right-handed, and report the age at death.
b) You want to investigate whether infants tend to have longer attention spans when they are 24 months old as opposed to when they are 12 months old. You study a sample of 12-month-old infants, recording how long they watch a video before becoming distracted. Then a year later, you study the same infants as 24-month-olds, again recording how long they watch a video before becoming distracted.
c) You want to investigate whether there is a difference in the average ages of cars driven by students and faculty at your university, so you randomly select 30 students and 10 faculty members and ask them to report the age of the car that they drive.
d) You want to investigate whether a waitress receives a significantly higher tip on average when she draws a smiley face on the check than when she does not draw a smiley face. Over the course of a week, she flips a coin for each party to decide whether or not to draw a smiley face on their check, and then she keeps track of the tip amount.
5. One common application of statistical methods is to estimate a population parameter based on a sample statistic, often with a confidence interval. Convince me that you understand this by describing a hypothetical situation (not an example from the book or from class) in which you would be interested in using a sample statistic to estimate a population parameter. Be very clear in describing what the population is, what the parameter is, what the sample is, and what the statistic is.
6. Statisticians analyzed a huge set of data about people who were sent a mass mailing solicitation (request for donations) from a nonprofit organization. The observational units were the recipients of the mailing. The two response variables of interest were:
• Whether or not the person sent a donation
• How much the person donated
Hundreds of explanatory variables were recorded about these mailing recipients. These included:
• Age
• Whether the person owns a home, rents a home, or did not answer that question
• How many mailings the person received in the previous 12 months
• Whether or not the person donated in the previous 12 months
• How much the person donated in the previous 12 months
a) State one research question (make up your own, based on these variables) that could be investigated with these data, for which comparing boxplots would be a reasonable technique.
b) State one research question that could be investigated with these data, for which a scatterplot would be a reasonable graph.
7. An article in the December 2006 issue of the Journal of Adolescent Health reports on a study of 2516 teenage girls in Minnesota. Girls who reported weighing themselves frequently gained an average of 33.3 pounds over five years, compared to an average weight gain of 18.6 pounds for girls who did not weigh themselves frequently.
a) Identify the explanatory variable and the response variable in this study.
b) What type of study is this: an observational study or an experiment? Explain how you can tell.
c) State the relevant null hypothesis (in symbols) to be tested from these data.
d) Identify the name of the procedure that you would use to conduct this test.
e) What additional information would you need in order to conduct this test? (Do not worry about whether the technical conditions are met.)
8. Student researchers investigated whether balsa wood is less elastic after it has been immersed in water. They took 44 pieces of balsa wood and randomly assigned half to be immersed in water and the other half not to be. They measured the elasticity by seeing how far the piece of wood would project a dime into the air. Their (ordered) results (in inches) are shown below:
Immersed: 4.00, 4.00, 4.75, 5.00, 5.50, 6.00, 6.00, 6.25, 7.25, 7.25, 7.25,
7.50, 8.00, 8.00, 8.25, 9.25, 9.25, 10.25, 10.25, 10.75, 10.75, 12.25
Not: 6.50, 8.00, 9.00, 10.00, 10.00, 10.25, 10.25, 10.25, 10.25, 11.00, 11.25
12.00, 12.00, 12.00, 12.25, 12.25, 14.00, 14.00, 15.00, 16.00, 16.00, 17.00
a) Calculate the five-number summary of elasticity measures for each group.
b) Comment on what the five-number summaries reveal about the research question that the students were investigating.
Consider the following summary statistics:
| |Sample size |Sample mean |Sample std dev |
|Immersed group |22 |7.625 |2.332 |
|Not immersed group |22 |11.784 |2.674 |
c) Use these statistics to conduct a significance test of the students’ research question. Report the hypotheses, test statistic, and p-value. Also summarize your conclusion.
d) Can the students legitimately conclude that the immersion in water causes the lesser elasticity of balsa wood? Explain.
9. Biologists conducted a study to investigate whether the weight of a bullfrog is positively associated with its jumping distance. They gathered data on these two variables for a sample of 11 bullfrogs, calculating a correlation coefficient of .182. Use this sample result to test whether the weight of a bullfrog is positively correlated with its jumping distance. Report the null and alternative hypotheses, test statistic, and p-value. Also summarize your conclusion in context.
Sample Final Exam B
1. The following data are the point totals for the UOP Men's Basketball team in their first 8 victories one season: 80 72 68 55 80 78 90 85
a) Make a stemplot of these winning point totals and describe the shape of the distribution.
b) Would the five-number summary, or the mean and standard deviation, be a better summary for this distribution? Explain your choice.
2. Two investigators wanted to study the heights of 18-24 year old men in Stockton. One investigator, Happy Harry, took a random sample of 100 men. The other investigator, Tired Tony, took a random sample of 1000 men.
a) If each investigator finds the average height of the men in his sample, which investigator, Harry or Tony, should find a larger average, or will they be about the same? Explain.
b) Which sample, Harry’s or Tony’s, should have less bias or will they be about the same? Explain.
c) Which estimate of the population mean, Harry’s or Tony’s, should have higher precision, or will they be about the same? Explain.
3. In 1988, men averaged about 500 on the math SAT, the standard deviation was about 100, and their scores followed a normal distribution. One of the men who took the math SAT in 1988 will be picked at random, and you have to guess his test score. You will be given 50 dollars if you guess it right to within 50 points.
a) What one number should you guess?
b) With this guess, what is your probability of winning the 50 dollars?
4. Suppose you record the speed for 50 cars traveling on an uncrowded interstate highway. Do you expect the standard deviation of these 50 measurements to be about 1 mph, 5 mph, 10 mph, or 20 mph? Explain.
5. A social research scientist wants to test whether the percentage of Republicans who favor the death penalty is greater than the percentage of Democrats who are in favor of the death penalty.
Suppose the sample data showed that the percentage of Republicans who are in favor of the death penalty is 42% and the percentage of Democrats who are in favor of the death penalty is 40%.
a) Write down the null and alternative hypotheses for this test.
b) Suppose the p-value for this test is .0021. The 95% confidence interval for πR-πD is (.00637, .03363). Which of the following conclusions is more appropriate to draw?
1. There is evidence of a large difference in the two proportions.
2. There is strong evidence of a difference in the two proportions.
Explain your choice.
c) Which of these two conclusions does a p-value better address? Explain.
d) Which of these two conclusions does a confidence interval better address? Explain.
6. In a clinical trial, data collection usually starts at “baseline,” when the subjects are recruited into the trial but before they are randomized to treatment and control groups. Data collection continues until the end of follow-up. Two clinical trials on prevention of heart attacks report baseline data on weight, shown below.
| | |Number of persons |Average weight |Standard deviation |
|Trial 1 |Treatment |1,012 |185 lb |25 lb |
| |Control |997 |143 lb |26 lb |
|Trial 2 |Treatment |995 |166 lb |27 lb |
| |Control |1,017 |163 lb |25 lb |
In one of these trials, the randomization did not achieve the desired result. Which trial and why do you say so? How will this affect our results and conclusions for this study?
7. Can pleasant aromas help a student learn better? Two researchers believed that the presence of a floral scent could improve a person’s learning ability in certain situations. They had ten people work through a pencil-and-paper maze two times, first wearing an unscented mask and then wearing a scented mask. Tests measured the length of time it took subjects to complete each of the two trials. They reported that, on average, subjects completed the maze more quickly while wearing the floral-scented mask than while wearing the unscented mask.
a) Is this an observational study or experiment? Explain.
b) Identify the response and explanatory variables.
c) Explain how confounding makes the results of this study worthless.
d) Sketch an outline of a more appropriate design for the study.
8. The NCAA collected data on graduation rates of athletes in Division I in the mid-1980s. Among 2,332 men, 1,343 had not graduated from college, and among 959 women, 441 had not graduated.
a) Set up a two-way table to examine the relationship between gender and graduation.
b) Calculate a couple of conditional percentages to describe the relationship between gender and graduation.
c) Identify a test procedure that would be appropriate for analyzing this relationship. State the null and alternative hypotheses.
d) What type of distribution does the test statistic you describe in (c) follow? For what values of this test statistic will you reject the null hypothesis at the 5% level?
e) If the above result is significant, would this mean that if some people have a sex change they will increase their chance of graduating? Explain briefly.
9. A random sample of 7 households was obtained, and information on their income and food expenditures for the past month was collected. The data (in hundreds of dollars) are given below.
|Monthly Income ($100s) |22 |32 |16 |37 |12 |27 |17 |
|Monthly Food Expenditures ($100s) |7 |8 |5 |10 |4 |6 |6 |
Consider the following computer output and scatterplot, with the least squares line superimposed:
The regression equation is
expend = 1.87 + 0.202 income
Predictor Coef Stdev t-ratio p
Constant 1.8690 0.9068 2.06 0.094
income 0.20195 0.03661 5.52 0.003
s = 0.8181 R-sq = 85.9%
[Figure: scatterplot of monthly food expenditures vs. monthly income, with least squares line superimposed]
a) Describe the direction, strength, and form of the association.
b) On the graph, identify the point which you think has the largest (in absolute value) residual. Explain.
c) Provide an interpretation of the number 0.202 in the regression equation, in the context of these data. Exactly what does this value tell us?
d) Is there evidence of a statistically significant relationship between income and food expenditure? Make sure you clearly explain the basis for your answer.
e) Explain why you would not recommend using this relationship to predict the food expenditure for a household with an income of $5,200.
10. National data show that, on the average, college freshmen spend 7.5 hours a week going to parties. President DeRosa doesn’t believe that these figures apply at UOP. He takes a simple random sample of 50 freshmen and interviews them. He finds that the 95% confidence interval for the mean number of hours spent per week going to parties is (5.72, 7.42).
a) Explain to the President what he means by the phrase “95% confidence.”
Now he wants to test the hypothesis that the mean for UOP is different from the national mean at a 5% significance level.
b) Specify the null and alternative hypotheses for this test.
c) Indicate a test procedure he could use to conduct this test.
d) Eager to gain favor with the president, you tell him that you can save him lots of time because, based on the information already presented, you know what he will conclude and he doesn’t have to perform any additional calculations. Should he reject or fail to reject the null hypothesis at the 5% level? Explain.
Student Projects (from Beth Chance)
Goal: To collect, describe, and analyze data to answer a question of your choice.
Teams: For the class projects, you will work in groups of 3-5. It is up to the members of the group to make sure everyone contributes equally. Teams should be formed by Sept. 12.
Topics: You are free to choose your own question. The question may relate to your major or some other topic of interest. You should choose a topic so that it will be straightforward to gather the data. You also want to make sure the topic is interesting to you! Be creative! We will discuss some previous topics in class, and some previous project topics can be found on the course web pages (e.g., statweb.calpoly.edu/bchance/stat217/projects.html). You will want to collect lots of data and then narrow in on two hypothesis pairs later. After the topics are selected, most of the work for the projects will take place outside of class.
Project Reports: The goal of the project reports is to keep you thinking about the projects as the term progresses. Keep in mind that your project may change and evolve as the course progresses. Still, with each project report I would like to hear about your progress and ideas. Turn in one project report for each team, including team members’ names and previous project reports, preferably typed. Below are some guidelines on what I would like to see in each report.
The first project report is due Sept. 17. For this report you should identify your topic/questions of interest, the variables you plan to measure (list several quantitative and categorical variables), the population you plan to draw conclusions about, the sample and sampling frame you plan to use (if applicable), the type of study (e.g., survey or experiment), and two research questions you hope to focus on in the study.
The second project report is due Sept. 29. Your data collection techniques should be more clearly defined. If an experiment, give a tentative design. If a survey, give the preliminary questionnaire. You should indicate why this study is appropriate to answer your question and what precautions you will take (e.g. nonresponse, sampling bias, wording). Each team is to bring in a typed report (or online submission). These reports will be peer reviewed by other students in the course.
The third project report is due Nov. 3. You should have completed collecting your data. Include a description of your variables, their units, and possible ranges/responses, as well as preliminary descriptive statistics and graphs, and indicate what information you would like to gain from your data analysis (that is, what questions you want to answer).
The fourth project report is due Nov. 21. Identify the statistical tests you will be using and why you chose them. I will work with groups who want to use statistical techniques we have not covered in class yet.
Rough Draft (optional) If you turn in a rough draft by Dec. 1, I will review the paper, providing comments and suggestions.
Final Reports: Final reports are due Dec. 11, but can be turned in earlier. Reports must be typed. Turn in one report per group. Incorporate computer output into the body of the paper or as an appendix. The raw data do not themselves need to be included, except perhaps as an appendix. You may assume your audience will understand all statistical terminology.
Grading Criteria for Final Report:
10%: Quality of written report
20%: Design of survey/experiment - was data collection adequately explained, were the appropriate data collected to answer the questions and/or test the hypotheses and estimate parameters.
25%: Correctness of statistical analysis and assumptions checks
20%: Correctness of interpretations of the results of the statistical calculations and conclusions
Presentations: (other 25%) Each group will give a 10-15 minute presentation (at the most). Any number of group members may present part of the report. The presentation should not include extensive details, but provide the audience with an overview of what was done, what conclusions can be drawn, any drawbacks of the techniques and future recommendations. Feel free to be creative. The presentation should be done on PowerPoint, transparencies, or a poster board.
Checklist for final reports:
Title
Statement of purpose/question of interest (~2 lines)
Introduction
Including your initial motivation, references, and predictions. Motivate the readers, why should we be interested in your study?
Summary of data collection (no such thing as "too much detail" in this section!)
Provide all the details of your data collection process (enough that someone could replicate your study), including anything that went wrong or surprised you along the way, times/dates, etc.
Identify the observational units, population of interest, variables measured, and type of study.
If you took a sample, describe the sampling frame, sampling method, and randomness.
Don't use the word random if you don't really mean it.
If an experiment, describe how random assignment was used and any other controls.
If a survey, provide the response rate and copies of the exact questions used.
Discuss any measurement issues and operational definitions (e.g., how you measured handedness).
Discuss how you tried to reduce bias and/or confounding (e.g., did you field test any of the questions on a test group to see if the wording was clear? was there a placebo treatment? what other controls did you exert?)
Identify possible remaining sources of bias/confounding and their possible effects on the results.
Analysis of Results
Descriptive Statistics: You will need to make choices as to which numerical and graphical summaries are most relevant. Interpret and discuss what you learn about your sample from the graphs and numerical summaries (e.g., shape, center, spread). Does this preliminary analysis support your predictions? Make sure you integrate the computer output into the body of the report and that all figures and graphs are clearly labeled. Also make sure sample sizes are clear.
Inferential Statistics: For at least 2 interesting questions, state and interpret both pairs of hypotheses about the population and/or define the population parameter(s) to be estimated through confidence interval(s). Remember to: state your hypotheses in symbols and in words (if a significance test); justify your choice of procedures; and comment on the validity of the methods (technical conditions). Embed the Minitab output of the analysis here. Provide suggestions if the technical conditions are not met. Perform appropriate follow-up analyses (e.g., multiple comparisons, cell contributions, sensitivity analysis). Be sure to clarify what you learn from the confidence intervals as well. State your conclusions in context. Is your decision about significance consistent with what you observed in the graphical and numerical summaries?
Interpretation/Explanation of Results
What does it all mean? Use the above descriptive statistics and inferential statistics to justify your interpretation (e.g., back up your conclusions with interpretations of the graphical summaries as well). Suggest reasons for what you've observed (e.g., why do you think these groups differ? or are not very different?). Pay particular attention to whether you can generalize your sample to a larger population and whether you can draw cause and effect conclusions (justify your answers).
Overall Conclusion, Recommendations, Future Questions
Summarize the results of your study and provide recommendations based on your analyses. (There can be some repetition with above.)
Review any potential problems with your study. Is there anything you would do differently next time? How might this affect the conclusions of the study? What similar questions might someone choose to investigate in the future to build on your results?
Grading Criteria for Final Report:
10 pts: Quality of written report
20 pts: Design of survey/experiment: was data collection adequately explained, were the appropriate data collected to answer the questions posed. Points here will include originality of topic chosen and creativity.
25 pts: Correctness of statistical analysis and checks of technical conditions
20 pts: Appropriateness of interpretations of the results of the statistical calculations and conclusions (is it a cause and effect relationship? what is a reasonable population?)
Evaluation: Each person’s grade will be 75% group grade and 25% individual grade. Individual grades will be determined by the instructor and team member evaluations.
More detailed grading rubric (You should consider these as rough guidelines. There may be slight adjustments when I read the collection of reports submitted this quarter.)
Quality of Written Report
10 pts: Organization and structure are good, report is easy to follow and interesting to read. No grammatical or spelling errors. Graphs are well labeled and incorporated into body of report, along with output of statistical analyses and descriptive statistics.
9.5 pts: A few minor grammatical, spelling, and/or formatting errors.
9 pts: Missing statement of purpose or a fair number of grammatical, spelling errors.
8-8.5 pts: Report is not well structured. The discussion is difficult to follow.
7-7.5 pts: Graphs are not included in body of report. Structure of report is sloppy. No apparent proof reading.
< 7 pts: Even more problematic.
Design of study
|18-20 pts |Innovative topic with many precautions taken and details considered. Description of design is very complete (another person could follow the protocol to replicate the study). Design is appropriate for stated research question. |
|16-17.5 pts |Design is adequate and considerations of randomness were made, but may not be as innovative or extensive. Limitations of design are discussed. Discussion of design is missing some key details. |
|14-15.5 pts |Design is not appropriate for stated research question. Randomness was not used appropriately in the study. No creativity or originality has been demonstrated. |
|< 14 pts |Design is ill-considered and does not apply basic principles discussed in course. Numerous sources of bias and confounding are not recognized or addressed. |
Statistical Analysis
|22.5-25 pts |Descriptive and inferential analyses are correct and complete for two distinct research questions. Technical conditions for each procedure are correctly stated and clearly checked. |
|20-22 pts |Two analyses are made but contain some technical errors. Or analyses are limited by simplistic nature of data. |
|17.5-19.5 pts |Insufficient output is provided. Technical conditions are not considered. Missing components of descriptive or inferential analyses. |
|< 17.5 pts |Project does not consider two separate research questions. Analyses are incomplete and/or inappropriate for data collected. Or report does not consider inferential analyses. |
Interpretations
|18-20 pts |Discussion and correct interpretations of both descriptive and inferential statistics are made in context and integrated. Interpretations follow from statistical analysis and consider issues from how the data were collected. Language is clear and precise but not overly technical. Discussion includes consideration of future research directions and suggestions for improvement. |
|16-17.5 pts |Interpretations are incomplete (e.g., no preliminary discussion of descriptive analysis and how it relates to research conjectures, no interpretations of confidence intervals). Interpretations are vague or imprecise, but seem largely correct. Comments are not well integrated or not fully substantiated by analyses. Conclusions are made but not supported. Critique of study conducted is limited. |
|14-15.5 pts |Major errors in interpretation are made, demonstrating confusion. No consideration is made if technical conditions are not met. Inappropriate conclusions are drawn. Report does not critique study described. |
|< 14 pts |Commentary on analyses is missing. Conclusions are inconsistent and contradictory. |
Resources for Teaching Introductory Statistics
Websites
Consortium for the Advancement of Undergraduate Statistics Education (CAUSE):
Guidelines for Assessment and Instruction in Statistics Education (GAISE): education/gaise/
Inter-University Consortium for Political and Social Research (ICPSR):
icpsr.umich.edu/icpsrweb/ICPSR/
Gapminder: Unveiling the beauty of statistics for a fact-based world view:
Many Eyes (for shared visualization and discovery):
Rossman/Chance Applet Collection: applets/
Concepts of Statistical Inference: A Randomization-Based Curriculum:
Assessment Resource Tools for Improving Statistical Thinking (ARTIST):
Data and Story Library (DASL):
Journal of Statistics Education (JSE): publications/jse/
JSE Data Archive: publications/jse/jse_data_archive.html
Census At School – USA:
Texts and other resources
Workshop Statistics: Discovery with Data, 3rd ed., by Rossman and Chance, John Wiley & Sons.
Investigating Statistical Concepts, Applications, and Methods, by Chance and Rossman, Cengage.
Activity-Based Statistics, 2nd ed., by Scheaffer et al., John Wiley & Sons.
Innovations in Teaching Statistics, ed. Joan Garfield, Mathematical Association of America.
Teaching Statistics: Resources for Undergraduate Instructors, ed. Thomas Moore, Mathematical Association of America.
Statistics: A Guide to the Unknown, ed. Roxy Peck et al., Cengage Publishing.
Teaching Statistics: A Bag of Tricks, by Gelman and Nolan, Oxford University Press
Background Reading
For those new to the teaching of statistics or wishing to read more about it, we recommend the following articles and books:
Guidelines for teaching introductory statistics:
1. Guidelines for Assessment and Instruction in Statistics Education, reports (PreK-12 and College) endorsed by the American Statistical Association, available at education/gaise/.
2. “Teaching Statistics” by George Cobb, published in the MAA Notes volume Heeding the Call for Change, edited by Lynn Steen, 1992. This article summarizes and expounds on the recommendations of an ASA/MAA joint committee: 1) Teach statistical thinking; 2) More data and concepts, less theory and fewer recipes; 3) Foster active learning.
Reflections on what distinguishes statistical content and statistical thinking:
3. “Statistics, Mathematics, and Teaching” by George Cobb and David Moore, published in the November 1997 issue of The American Mathematical Monthly. This article highlights differences between the disciplines of mathematics and statistics, suggesting implications for the teaching of statistics.
4. “New Content and New Pedagogy: The Case of Statistics” by David Moore, published in 1997 by The International Statistical Review. This article summarizes recent trends in the teaching of statistics, organized around content, pedagogy, and technology. An extensive set of discussion papers is also provided. Moore’s article is available at .
5. “Statistical Thinking in Empirical Enquiry,” by Chris Wild and Maxine Pfannkuch, published with discussion in International Statistical Review in 1999, available at stat.auckland.ac.nz/~iase/publications/isr/99.Wild.Pfannkuch.pdf. This article investigates what statistical thinking is, offering some suggestions for helping students to develop it.
6. “Components of Statistical Thinking and Implications for Instruction and Assessment,” by Beth Chance, published in Journal of Statistics Education in 2002, available at publications/jse/v10n3/chance.html. Like the previous article, this one offers definitions of statistical thinking with even more emphasis on related teaching and assessment issues. See also companion articles on statistical literacy and statistical reasoning.
7. “What Educated Citizens Should Know About Statistics and Probability,” by Jessica Utts, published in The American Statistician in May 2003, available at anson.ucdavis.edu/~utts/AmerStat2003.pdf. This article provides a compelling list of what its title suggests, with implications for what should be taught in introductory courses.
8. “The Introductory Statistics Course: A Ptolemaic Curriculum?,” by George Cobb, also published in the first issue of Technology Innovations in Statistics Education, available at . This article argues for revising the introductory curriculum around the logic of statistical inference.
Educational research findings and suggestions related to teaching statistics:
9. “How Students Learn Statistics” by Joan Garfield, published in 1995 by International Statistical Review. This article summarizes educational research into how students learn statistics and discusses some implications of those findings for instructors. An updated version of this article, by Garfield and Dani Ben-Zvi, appears in the December 2007 issue of International Statistical Review.
10. Developing Students’ Statistical Reasoning: Connecting Research and Teaching Practice, by Joan Garfield and Dani Ben-Zvi, to be published by Kluwer Publishers. This book offers advice for teaching statistics, informed by findings in educational research.
“The Role of Technology in Improving Student Learning of Statistics,” by Beth Chance, Dani Ben-Zvi, Joan Garfield, and Elsa Medina, published in the first issue of Technology Innovations in Statistics Education, available at . This article (and chapter in the previous resources) provides an overview of effective uses of technology to improve student learning.
Collections of resources and ideas for teaching statistics:
11. Teaching Statistics: Resources for Undergraduate Instructors, edited by Tom Moore, published by the Mathematical Association of America and the American Statistical Association in 2001. This volume summarizes and offers information about a variety of resources for teaching statistics.
12. Innovations in Teaching Statistics, edited by Joan Garfield, also published in the MAA Notes series. This collection of articles offers examples and advice about how to implement innovative courses and pedagogy.
13. Teaching Statistics: A Bag of Tricks, by Deb Nolan and Andrew Gelman, published by Oxford University Press in 2002. This book presents many clever ideas for class demonstrations for teaching statistics.
14. “Sequencing Topics in Introductory Statistics: A Debate on What to Teach When” by Beth Chance and Allan Rossman, published in 2001 by The American Statistician, available at . This article offers different perspectives on the sequencing of topics and concludes with recommendations for course design.
15. “Teaching the Reasoning of Statistical Inference: A ‘Top Ten’ List” by Allan Rossman and Beth Chance, published in 1999 by College Mathematics Journal, available at . This article makes suggestions for how to help students develop their ability to reason through concepts of statistical inference.
Suggestions and resources for assessing student learning in statistics:
16. Assessment Resource Tools for Improving Statistical Thinking (ARTIST), a collection of online resources for assessing student learning, available at .
17. “Beyond Testing and Grading: Using Assessment to Improve Student Learning” by Joan Garfield, published in 1994 by Journal of Statistics Education (). This article provides a thorough survey of assessment techniques for introductory statistics.
18. “Experiences with Authentic Assessment Techniques in an Introductory Statistics Course” by Beth Chance, published in 1997 by Journal of Statistics Education (). This article offers advice for implementing a variety of assessment practices depending on learning goals and other considerations.
19. “Assessment on a Budget: Using Traditional Methods Imaginatively,” by Chris Wild, Chris Triggs, and Maxine Pfannkuch, a chapter in The Assessment Challenge in Statistics Education, available at . This chapter presents ideas for assessing student learning of statistical concepts without changing assessment formats.
20. “Assessment and the Process of Learning Statistics,” by Ruth Hubbard, appears in the Journal of Statistics Education (). This article gives concrete suggestions on writing different kinds of assessment questions.
Two books specific to Public Policy
Data Analysis for Politics and Policy, Edward Tufte, Prentice-Hall, Inc.
Statistics and Public Policy, Fairley and Mosteller
Also Mentioned:
Michael Bulmer’s “Island”
See but not yet publicly available?
-----------------------
The following five activities (2-6) focus on issues of data collection: how introducing randomness into the design of a study has several important benefits, and how the scope of conclusions that can be drawn from a study depends on how the data were collected.
This is a stand-alone activity that can be used very early in a course to introduce the concepts and reasoning process of statistical inference.
The above can be used as a very quick in-class data collection assignment. The data can then be utilized at multiple points in the course.
The following five activities (7-11) focus on issues in describing data. Students need practice in considering how variables behave, as well as what graphs do and do not reveal.
The following three activities (12-14) focus on issues of randomness. Simulation is used extensively to help students understand properties of randomness.
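As one illustration of the simulation logic behind these activities, the sketch below (in Python) repeatedly draws samples of candies and records the sample proportion of orange ones. The true proportion (0.45), sample size (25), and number of repetitions are assumed values for illustration, not figures taken from the activities.

import numpy as np

rng = np.random.default_rng()
true_p, n, reps = 0.45, 25, 10000

# Each repetition: number of oranges in a sample of n candies, recorded as a proportion
sample_props = rng.binomial(n, true_p, size=reps) / n

print("mean of sample proportions:", round(sample_props.mean(), 3))  # close to true_p
print("SD of sample proportions:  ", round(sample_props.std(), 3))   # close to sqrt(p(1-p)/n)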
The following activities (15-23) focus on principles of statistical inference: significance and confidence. Simulation can again be a useful tool for helping students understand the underlying reasoning.
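The simulation-based inference activities rest on the same idea as the following sketch: re-randomize the group labels many times and see how often a difference at least as large as the observed one arises by chance alone. The counts used here (10 of 15 successes in one group, 3 of 15 in the other) are illustrative placeholders rather than data from any particular activity.

import numpy as np

rng = np.random.default_rng()

# Illustrative two-group data: 1 = success, 0 = failure
# (first 15 values = group A, last 15 values = group B)
outcomes = np.array([1]*10 + [0]*5 + [1]*3 + [0]*12)
observed_diff = outcomes[:15].mean() - outcomes[15:].mean()

reps, count_extreme = 10000, 0
for _ in range(reps):
    shuffled = rng.permutation(outcomes)   # re-randomize the group labels
    if shuffled[:15].mean() - shuffled[15:].mean() >= observed_diff:
        count_extreme += 1

print("approximate one-sided p-value:", count_extreme / reps)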
[At this point, you can estimate this probability with the Coin Tossing applet and/or by applying the Central Limit Theorem. It is interesting to compare the results, especially when the CLT does not apply because of a small sample size.]
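A coded version of that comparison is sketched below, using a fair coin, n = 16 tosses, and the event “at least 12 heads” purely as assumed example values; the simulation estimate, the exact binomial probability, and the CLT’s normal approximation can then be placed side by side.

import numpy as np
from scipy import stats

rng = np.random.default_rng()
n, k, p, reps = 16, 12, 0.5, 100000

# Simulation estimate of P(at least k heads in n tosses)
heads = rng.binomial(n, p, size=reps)
print("simulated:", np.mean(heads >= k))

# Exact binomial probability, for reference
print("exact:    ", 1 - stats.binom.cdf(k - 1, n, p))

# CLT normal approximation for the sample proportion (no continuity correction);
# with n this small it can differ noticeably from the exact answer
se = np.sqrt(p * (1 - p) / n)
print("CLT:      ", 1 - stats.norm.cdf((k / n - p) / se))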