Chapter 9

Lesson 17: Inferring Causation From Correlations

In the preceding two lessons, you have been learning about the importance of making comparisons and about when it is appropriate to make causal interpretations of comparison studies. You learned that experiments, but not correlational studies, provide a direct way of finding causal factors. To summarize these lessons, there are three conditions necessary for concluding that changes in Variable A are causing changes in Variable B:

• Condition 1: Variable A and Variable B change together (the variables are correlated).

• Condition 2: Changes in Variable A occur before changes in Variable B (directionality has been controlled).

• Condition 3: No other variable can explain the changes in Variable B (the effects of extraneous variables have been controlled).

As a review, let's look again at the hypothesis that smoking (Variable A) causes lung cancer (Variable B). Condition 1 is met: whether or not one smokes is correlated with whether or not one gets lung cancer. In order to test the hypothesis further, we need to perform a study in which nonsmokers without lung cancer begin to smoke, and other nonsmokers without lung cancer continue not to smoke (Condition 2). Lastly, we must find a way to control for all the other factors that may cause lung cancer in people, such as environmental toxins (Condition 3). An experiment in which the smoking variable is directly manipulated, and in which participants are randomly assigned to the smoking or the nonsmoking group, is the most direct way to test the hypothesis that smoking causes lung cancer. But, because this is an unethical experiment to perform, we must use correlational studies to test the causal hypothesis.

The last statement may sound like it contradicts what you learned in the preceding lesson, but it does not. It is true that we cannot make a cause-and-effect interpretation of a single correlation if that is all the information we have. Knowing only that two variables are correlated tells us neither that Variable A preceded Variable B nor that extraneous variables have been controlled. However, if we are careful about the information we collect when performing a correlational study, and if we reason logically about this information, we may be able to meet Conditions 2 and 3 to a degree large enough that we can draw a tentative cause-and-effect conclusion. What you learned in Lesson 2 about Mill's methods of agreement and difference can help us to understand this.

Methods of Agreement and Difference

As you learned in Lesson 2, John Stuart Mill proposed that the methods of agreement and difference can help us find the cause of an event. With the method of agreement, one looks for events that occur whenever the phenomenon being studied occurs. The single event that is found to be common to all occurrences of the phenomenon is said to be its cause. With the method of difference, one looks to see if changes in a phenomenon occur whenever a particular event changes. The single event that is found to change whenever differences occur in the phenomenon is said to be the cause. By using these two methods together, we sometimes can make causal inferences from correlational data.

Let's learn how by looking at the following example: five people unknown to each other begin vomiting several hours after eating at a restaurant. A "plausible explanation" (an explanation that seems reasonable although there is, at present, little supporting evidence) is that something they ate at the restaurant caused them to get sick, especially since they don't know each other and, hence, are not likely to have contracted an illness from each other. According to the method of agreement, the first thing we would want to do is to look at their meals in order to determine if there was a particular food item that they all ate. Let's suppose that each person ate the following food items:

• Person 1: steak, salad with ranch dressing, mixed vegetables, baked potato with butter, chocolate cake

• Person 2: steak, rice, mixed vegetables, mashed potatoes with gravy, salad with ranch dressing, tapioca pudding

• Person 3: steak, baked potato with sour cream, salad with ranch dressing, asparagus, chocolate cake

• Person 4: chicken breast, mixed vegetables, French fries, salad with ranch dressing, vanilla ice cream

• Person 5: salmon, rice, broccoli, salad with ranch dressing, orange sherbet

Which food item is associated with getting sick? The only food item that all five people ate was the salad with ranch dressing. From this fact, many of you might immediately conclude that the ranch dressing was the cause of these people getting sick. It definitely is the case that we would want to check out this dressing more carefully. But why should we not immediately conclude that the ranch dressing is the culprit? First, all these people also ate the salad that the dressing was covering. Perhaps there were bacteria on the lettuce. Second, there are other factors that we are not paying attention to, such as the plates, bowls, utensils, and glasses in which the food and drinks were served; and no mention was made of what these people had been drinking. Third, it is possible that someone sprinkled poison on their food and, hence, that it did not matter which particular foods they were eating. As you can see, there are a number of extraneous variables that have not been controlled in making these observations.
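If you like to think in code, the method of agreement can be made concrete with a small Python sketch. The sketch is ours, not part of the original lesson; the meal lists are transcribed from the bullets above. Treating each diner's meal as a set, the method of agreement amounts to intersecting the sets:

```python
# A minimal sketch (ours, not the lesson's) of the method of agreement,
# with the meal lists transcribed from the text above.
meals_of_sick = {
    "Person 1": {"steak", "salad with ranch dressing", "mixed vegetables",
                 "baked potato with butter", "chocolate cake"},
    "Person 2": {"steak", "rice", "mixed vegetables",
                 "mashed potatoes with gravy", "salad with ranch dressing",
                 "tapioca pudding"},
    "Person 3": {"steak", "baked potato with sour cream",
                 "salad with ranch dressing", "asparagus", "chocolate cake"},
    "Person 4": {"chicken breast", "mixed vegetables", "French fries",
                 "salad with ranch dressing", "vanilla ice cream"},
    "Person 5": {"salmon", "rice", "broccoli", "salad with ranch dressing",
                 "orange sherbet"},
}

# Method of agreement: intersect the meal sets to find every item common
# to all occurrences of the illness.
common_items = set.intersection(*meals_of_sick.values())
print(common_items)  # {'salad with ranch dressing'}
```

The intersection contains only the salad with ranch dressing, the same conclusion reached above. Notice that the code inherits the same blind spots as the informal analysis: the lettuce, the dishes, and the drinks never appear as items, so they can never show up in the answer.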

In order to test the hypothesis that the ranch dressing caused people to get ill, we need to try to falsify it (see Lesson 7 on the falsifiability criterion of science). How could we do this? By using the method of difference, we could falsify the hypothesis if we found that a number of people had the salad with ranch dressing and did not get sick. Let's say that only five other people ate at the restaurant that evening (it was a slow night), and none of them got sick. They had the following food items:

• Person 6: steak, salad with thousand-island dressing, asparagus, baked potato with sour cream, no dessert

• Person 7: spaghetti, bread, salad with Italian dressing, no dessert

• Person 8: steak, baked potato with sour cream, salad with French dressing, chocolate cake

• Person 9: chicken breast, broccoli, soup, strawberry ice cream

• Person 10: halibut, rice, mixed vegetables, salad with French dressing, fruit cup

None of these people had the ranch dressing. Thus, the hypothesis wasn't falsified. Furthermore, several people had salad, so this extraneous variable has been controlled. And all of these people used plates, bowls, utensils, and glasses, so these extraneous variables have been controlled as well. On the other hand, we still don't know what these people drank; and it still is possible that the first five people were deliberately poisoned by someone at the restaurant. Nevertheless, we can be reasonably confident at this point that the ranch dressing caused these people to get sick.
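Continuing the sketch from above (again, our own illustration, with the meal sets transcribed from the list), the method of difference is a falsification check: the hypothesis fails if anyone who ate the suspect item stayed well.

```python
# Continuing the sketch: meal sets for the five diners who stayed well.
meals_of_well = {
    "Person 6": {"steak", "salad with thousand-island dressing",
                 "asparagus", "baked potato with sour cream"},
    "Person 7": {"spaghetti", "bread", "salad with Italian dressing"},
    "Person 8": {"steak", "baked potato with sour cream",
                 "salad with French dressing", "chocolate cake"},
    "Person 9": {"chicken breast", "broccoli", "soup",
                 "strawberry ice cream"},
    "Person 10": {"halibut", "rice", "mixed vegetables",
                  "salad with French dressing", "fruit cup"},
}

suspect = "salad with ranch dressing"

# Method of difference: the hypothesis is falsified if anyone who ate the
# suspect item did NOT get sick.
falsified = any(suspect in meal for meal in meals_of_well.values())
print(falsified)  # False: no healthy diner had the ranch dressing
```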

As you can see, we are able to make tentative causal interpretations of correlational data if the data allow us to control for extraneous variables and for the directionality problem. In the present example, the directionality problem was controlled because we knew that the people who got sick began vomiting several hours after they had eaten at the restaurant. And a number of extraneous variables were controlled by looking at both similarities and differences in what the various people ate. Nevertheless, because there still are some possible extraneous variables left uncontrolled, we cannot be as confident of the causal interpretation in this case as we could be if we had done an experiment in which people were randomly assigned to a ranch dressing or a no-ranch dressing group.

Controlling For Extraneous Variables in Correlational Studies

When correlating smoking with lung cancer, we must randomly sample a large number of people and determine two things: whether (and how much) they smoke, and whether they have lung cancer. Thus, our sample consists of people who have chosen whether or not to smoke. One problem with concluding from a correlational study such as this that smoking causes lung cancer is that people who choose to smoke also tend to choose other unhealthy behaviors, such as eating high-fat foods, drinking too much alcohol, and getting little exercise. It could be that one (or more) of these extraneous variables actually is the major cause of cancer, and that smoking has little to do with developing the disease. As you saw in the restaurant example, however, if we can control for the effects of at least the more important extraneous variables, we would be better able to infer that smoking causes cancer.

How could we do this? Well, we might choose as participants in our correlational study only those smokers and nonsmokers who eat healthy diets, drink small amounts of alcohol, and exercise at least several times per week. In this way, these extraneous variables are being controlled. If we found that smokers still were more likely to develop lung cancer than nonsmokers, it would now be more reasonable to conclude that smoking was the cause. However, if we tried to control for all possible extraneous variables in this way, we soon would be left with almost no one who could participate in the study.
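Here is a rough sketch of this selection strategy in Python, using pandas. The participant records and every column name are invented for illustration only; a real study would, of course, use measured data.

```python
import pandas as pd

# Invented participant records; every column name here is our own.
df = pd.DataFrame({
    "smokes":        [True, False, True,  False, True,  False],
    "healthy_diet":  [True, True,  False, True,  True,  True],
    "light_drinker": [True, True,  True,  False, True,  True],
    "exercises":     [True, True,  True,  True,  False, True],
    "lung_cancer":   [True, False, False, False, True,  False],
})

# Control the lifestyle variables by selection: keep only participants who
# eat well, drink little, and exercise, then compare cancer rates between
# smokers and nonsmokers within this matched subsample.
matched = df[df.healthy_diet & df.light_drinker & df.exercises]
print(matched.groupby("smokes")["lung_cancer"].mean())
```

Even in this toy table, filtering on just three lifestyle variables throws away half of the six participants, which is exactly the shrinking-sample problem described above.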

Another way to take account of extraneous variables, without trying to eliminate them all (and, thereby, most of our participants), is simply to measure the values of the extraneous variables for each participant. After doing this, we could estimate the degree of correlation between each extraneous variable and lung cancer. Finally, we could make use of a special statistical technique that allows us to "partial out" (to subtract) the possible influence of each extraneous variable on the development of cancer. In this way, we could determine if any association between smoking and cancer remains after the associations between the extraneous variables and cancer have been removed. These statistics are called "partial correlations." A detailed discussion of partial correlations would go well beyond the scope of this lesson. If you continue on in psychology, however, you will learn more about partial correlations (and other statistical techniques) in future courses.
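Although a full treatment is beyond this lesson, the simplest case (one extraneous variable) can be shown compactly. The sketch below is our own illustration with simulated data: smokers are made to drink more, and smoking is given a built-in effect on a made-up risk score, so the smoking-risk association should survive after alcohol is partialed out. It uses the standard first-order partial-correlation formula.

```python
import numpy as np

def partial_corr(x, y, z):
    """First-order partial correlation of x and y, controlling for z:
    r_xy.z = (r_xy - r_xz*r_yz) / sqrt((1 - r_xz**2) * (1 - r_yz**2))
    """
    r_xy = np.corrcoef(x, y)[0, 1]
    r_xz = np.corrcoef(x, z)[0, 1]
    r_yz = np.corrcoef(y, z)[0, 1]
    return (r_xy - r_xz * r_yz) / np.sqrt((1 - r_xz**2) * (1 - r_yz**2))

# Simulated data (ours): smokers tend to drink more, and smoking is given
# a built-in effect on a made-up cancer-risk score.
rng = np.random.default_rng(0)
alcohol = rng.normal(size=200)
smoking = alcohol + rng.normal(size=200)
risk = 2 * smoking + rng.normal(size=200)

print(np.corrcoef(smoking, risk)[0, 1])      # raw smoking-risk correlation
print(partial_corr(smoking, risk, alcohol))  # still strong after alcohol
                                             # is partialed out
```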

In general, trying to infer causation from correlational data is similar to trying to prove guilt or innocence in a criminal trial. In a criminal trial, no one piece of evidence, all by itself, typically is sufficient for making a decision about guilt or innocence. It is only the pattern that emerges after many pieces of evidence have been fitted together that allows us to feel confident in making a decision. The same is true regarding correlations and causation. We typically must look at a large number of different correlations and attempt to find a pattern that helps us to explain the phenomenon we are trying to understand.

Useful Indicators For Making Causal Interpretations of Correlations

Even when no extraneous variables are being controlled, there are times when it might be reasonable to draw a cause-and-effect interpretation from correlational data. For example, the correlation between smoking and cancer was interpreted early on as likely to indicate a causal relationship. This conclusion was bolstered by the fact that lung cancer, in particular, was most strongly correlated with smoking. Because the lungs are in direct contact with cigarette smoke, it seemed that the most plausible explanation of the correlation was that smoking causes lung cancer.

There are several indicators that can help us to determine if it is reasonable to draw a causal connection between variables based on a correlation (see Ruscio, 2002). Perhaps the most important indicator is coherence. A coherent conclusion is one that is consistent with other things we know (see Lesson 10 regarding the rule of consistency). Coherence is what makes a conclusion seem plausible. For instance, we know that smoke is inhaled and, hence, comes into direct contact with the lung tissues. This knowledge makes plausible the conclusion that, if cigarette smoke contains carcinogens (cancer-causing agents), then the lungs should be the part of the body most likely to develop cancer. If we also know from other studies that cigarette smoke contains known carcinogens, the conclusion becomes even more reasonable. With enough information, a conclusion may have so much coherence (it is consistent with most of the information) that it goes beyond plausible and becomes "credible": there are very good reasons for believing the conclusion to be correct.

Nevertheless, using coherence all by itself to infer causation from a correlation may lead us into error when our information is in error, or when we simply don't have much information about the phenomenon being studied. For instance, let's suppose that there exists a positive correlation between the number of handguns in an area and the rate of murders in that area (Kantowitz, Roediger, & Elmes, 1991). It may seem plausible to infer from this correlation that having handguns readily available causes people to use them impulsively when they are angry. This conclusion may be true, and it is consistent with other things we know (people are prone to act impulsively when they are angry). But there are alternative explanations that are equally plausible. For example, we might speculate that people who live in areas with high murder rates buy handguns to protect themselves. Thus, coherence, all by itself, is not good enough to justify a causal interpretation of a correlation unless we know a great deal about the topic being studied. And, even in this case, we must be very careful.

A second indicator of whether or not it is reasonable to infer causation from a correlation is the strength of the correlation. A strong correlation is one in which the two correlated variables are very closely related: as one variable changes, the other variable changes right along with it in many cases. When two variables are strongly correlated, we are better able to make accurate predictions about specific individuals. That is, if an individual has a particular value of one variable, that individual is very likely to fall within a small range of values with regard to the correlated variable. For example, hair color and eye color are strongly correlated (at least for people who don't color their hair): people with light hair tend to have lighter eyes, such as blue, and people with dark hair tend to have darker eyes, such as brown. Knowing that someone has dark-brown hair allows you to predict with a good deal of confidence that the person has brown eyes; and knowing that someone has blonde hair allows you to predict with a good deal of confidence that the person has blue (or some similarly light-colored) eyes. A weak correlation between two variables, on the other hand, does not allow you to make predictions about specific individuals with much confidence.

The strength of a correlation is important for inferring causation because, if one of the correlated variables causes the other, it is likely that they will show a stronger correlation than if this were not the case. Inferring causation is most reasonable when the two variables change in a "one-to-one" fashion: as one variable increases or decreases by a certain amount, the other variable increases or decreases by a corresponding amount. For example, if the probability of developing lung cancer went up a certain amount for each additional cigarette a person smoked per day, this would be a good indication that smoking causes lung cancer. Nevertheless, the strength of a correlation, all by itself, usually is not good enough to justify a causal interpretation of a correlation.
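As a small numerical illustration (our own example, with made-up numbers), compare a perfect one-to-one dose-response pattern with a noisier version of the same data:

```python
import numpy as np

# Made-up dose-response data: a risk index that rises by a fixed amount
# for each additional cigarette smoked per day.
cigs_per_day = np.array([0, 5, 10, 15, 20, 25, 30])
risk_index = 1.0 + 0.4 * cigs_per_day  # perfect one-to-one change
noisy_risk = risk_index + np.random.default_rng(1).normal(0, 3, size=7)

print(np.corrcoef(cigs_per_day, risk_index)[0, 1])  # 1.0: maximally strong
print(np.corrcoef(cigs_per_day, noisy_risk)[0, 1])  # weaker, below 1.0
```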

A third indicator of whether or not it is reasonable to infer causation from a correlation is the repeatability of a correlation: in a number of studies, the two variables have been correlated to a similar degree. If there actually is a causal relationship between the two variables, we should be able to see a correlation between them in most of the studies that look at this. But again, repeatability, all by itself, usually is not good enough to justify a causal interpretation of a correlation.

Critical Thinking Questions

Question 17-1

Marital satisfaction decreases after the birth of a first child for many, but not all, couples. Let's suppose that we predict that a high level of intimacy (see Sternberg's triangular theory of love in Chapter 8) is the only important cause of high marital satisfaction after the birth of a child. In order to test this prediction, we perform the following correlational study. First, we measure the levels (high or low) of intimacy, commitment, and passion in 50 couples just after the wife becomes pregnant with the couple's first child. Second, we measure each couple's level (high, medium, or low) of marital satisfaction one year after the birth of the first child.

Given the following (fictional) data, use the methods of agreement and difference to isolate the most likely cause of high marital satisfaction in these couples.

|           | Intimacy | Commitment | Passion | Marital satisfaction |
|-----------|----------|------------|---------|----------------------|
| Couple 1  | high     | high       | low     | medium               |
| Couple 2  | low      | high       | low     | low                  |
| Couple 3  | high     | low        | low     | medium               |
| Couple 4  | low      | low        | low     | low                  |
| Couple 5  | high     | high       | high    | high                 |
| Couple 6  | low      | high       | high    | medium               |
| Couple 7  | high     | low        | high    | high                 |
| Couple 8  | low      | low        | high    | medium               |
| Couple 9  | high     | high       | high    | high                 |
| Couple 10 | low      | low        | low     | low                  |

Does this study control for directionality and extraneous variables well enough for us to have confidence in this causal interpretation? Explain your answer.


Question 17-2

Consider the following set of correlations mentioned in the textbook (pages 245-46) and try to develop a causal explanation that is consistent (that exhibits coherence) with these correlations considered as a whole.

There is a greater probability of divorce when the following occur:

• younger age at marriage

• less courtship before marriage

• lower socioeconomic status

• more emotional problems

• less empathy (perspective taking)

• more parental divorce


Question 17-3

Women who have gone through menopause produce reduced amounts of the hormone estrogen. A negative correlation has been found between artificially increasing the level of estrogen in the body and the probability of developing cardiovascular disease: women who take estrogen pills (called "hormone-replacement therapy" or HRT) have fewer heart attacks and strokes than women who don't take estrogen pills. This is a strong and repeatable correlation. But no one was able to figure out why the relationship between HRT and cardiovascular disease existed (that is, there was little other information available that could help to make the finding coherent). Thus, the discovery met two of the three indicators of a causal relationship in correlational research (strength and repeatability, but not coherence).

Based on what you have learned in this lesson, what would be your recommendation for women who have gone through menopause? Explain your answer.


Bibliography and References

Blakely, J. A. (2000). The Heart and Estrogen/Progestin Replacement Study revisited: Hormone replacement therapy produced net harm, consistent with the observational data. Archives of Internal Medicine, 160, 2897-2900.

Goodwin, C. J. (1995). Research in psychology: Methods and design. New York: Wiley & Sons.

The HERS study results and ongoing studies of women and heart disease. (1998, August 18). National Institutes of Health.

HRT is not MI preventive. (2001). Clinician Reviews, 11(9), 41-42.

Kantowitz, B. H., Roediger, H. L., & Elmes, D. G. (1991). Experimental psychology: Understanding psychological research (4th ed.). St. Paul: West.

Patterson, K. (2002, May 5). What doctors don't know (almost everything). The New York Times.

Ruscio, J. (2002). Clear thinking with psychology: Separating sense from nonsense. Pacific Grove, CA: Wadsworth.
