Observational studies: Does the language fit the evidence? Association vs. causation


by Mark Zweig, MD, and Emily DeVoto, PhD, two people who have thought a lot about how reporters cover medical research


A health writer’s first attempt at expressing results from a new observational study read, “Frequent fish consumption was associated with a 50% reduction in the relative risk of dying from a heart attack.” Her editor’s reaction? Slash. Too wordy, too passive. The editor’s rewrite? “Women who ate fish five times a week cut their risk of dying later from a heart attack by half.” This edit seems fair enough – or is it? The change did streamline the message, but with a not-so-obvious, unintended cost to the meaning. Was the subjects’ fish consumption really responsible for their dying less frequently from heart attacks? The new wording suggests that’s the case, but the original study does not support a conclusion of cause and effect.
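
For readers who want the arithmetic behind phrases like "a 50% reduction in the relative risk," here is a brief illustration. The numbers are invented for this example and are not taken from the study the writer was describing; relative risk simply compares the rate of an outcome in one group with the rate in another:

\[
\text{relative risk} = \frac{\text{risk of heart-attack death among frequent fish eaters}}{\text{risk among infrequent fish eaters}} = \frac{1\%}{2\%} = 0.5,
\]

a 50% reduction in relative risk, or "half the risk." The calculation describes only the size of the association; it says nothing about whether fish consumption caused the difference.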

Epidemiologic – or observational – studies examine the association between what’s known in epidemiologic jargon as an exposure (e.g., a food, something in the environment, or a behavior) and an outcome (often a disease or death). Because of all the other exposures occurring simultaneously in the complex lives of free-living humans, which can never be completely accounted for, such studies cannot provide evidence of cause and effect; they can only provide evidence of some relationship (between exposure and outcome) that a stronger design could explore further. In other words, observational studies cannot distinguish direction – whether exposure A influences outcome B, B influences A, or something else influences both – even if the association is strong and consistent. What other design could illuminate a causal nature and direction of the relationship, if present?

The only study design involving humans that rises to the level of demonstrating cause and effect is a randomized trial. In this design, study subjects are assigned an exposure (or a control condition) at random, irrespective of any other exposures in their lives, so all such other exposures tend to even out between the treated and control groups (a balance that can be checked). As a result, the only systematic difference between the groups is whether they receive the exposure under study or the control condition. This approach is a true experiment: any difference in outcome between the experimental and control groups should be due to the one factor that differs.
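
The logic of the last two paragraphs can be made concrete with a small simulation. This is only an illustrative sketch, not part of the original article: the numbers, the variable names, and the scenario are invented. It sets up a world in which fish consumption has no effect whatsoever on heart-attack deaths, yet an observational comparison still shows fish eaters dying less often, because a hidden factor (general health-consciousness) influences both who eats fish and who dies. Randomly assigning the exposure makes the spurious association vanish.

```python
# Illustrative sketch only: invented numbers, no real data.
# Shows how a hidden confounder can create an association without causation,
# and how random assignment of the exposure removes it.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Hidden confounder: general health-consciousness.
health_conscious = rng.random(n) < 0.5

# Heart-attack death risk depends ONLY on the confounder; fish has no effect.
death_risk = np.where(health_conscious, 0.01, 0.02)

def relative_risk(exposed, died):
    """Risk of death in the exposed group divided by risk in the unexposed group."""
    return died[exposed].mean() / died[~exposed].mean()

# Observational world: health-conscious people happen to eat fish more often.
eats_fish = rng.random(n) < np.where(health_conscious, 0.7, 0.3)
died = rng.random(n) < death_risk
print("Observational relative risk:", round(relative_risk(eats_fish, died), 2))
# Roughly 0.76: fish eaters look "protected," even though fish does nothing here.

# Randomized world: fish assignment is independent of health-consciousness.
assigned_fish = rng.random(n) < 0.5
died_rct = rng.random(n) < death_risk
print("Randomized relative risk:   ", round(relative_risk(assigned_fish, died_rct), 2))
# Roughly 1.0: with the confounder balanced by randomization, the association disappears.
```

The particular numbers do not matter; the pattern does. Confounding alone can produce a strong, consistent association, and random assignment is what breaks it.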

Because observational studies are not randomized, they cannot control for all of the other inevitable, often unmeasurable, exposures or factors that may actually be causing the results. Thus, treating any “link” found in an observational study as cause and effect is speculative at best.

In reporting on observational research, language is crucial, because the audience may not be familiar enough with epidemiologic evidence and study design to appreciate the nuances. To a general audience, language such as “fish consumption is linked [or tied] to the risk of heart attacks” may sound causal even when a causal interpretation is not warranted.

A subtle trap occurs in the transition from the cautious, nondirectional, noncausal, passive language that scientists use in reporting the results of observational studies to the active language favored in mass media. Active language is fine in general – who wants to write like a scientist? But problems arise when causal language is not justified by the study design. For example, a description of an association (e.g., associated with reduced risk) can become, via a change to the active voice (reduces risk), an unwarranted description of cause and effect. The difference between saying “A was associated with increased B” and saying “A increased B” may seem subtle in terms of language, but it is a world apart in meaning.

Indeed, in practice, a shift to causal language can occur at any stage – writing, editing, or headline writing – with similar effects on meaning. Without attention to the underlying study design, distortions of wording can creep in that lead readers to overestimate what a study shows and possibly even to make life choices that the evidence does not warrant.

Another problem for journalists arises from the language that scientists themselves, and press-release writers, use to describe the results of observational studies; even they sometimes slide into causal wording. In a scientific paper, you may find that the language is carefully chosen for the conclusion in the abstract or in the text, but used less strictly in the discussion section. Borrowing language from scientific papers therefore warrants caution.

Some examples follow. For each, we give the study design, the researchers’ version of the results, the journalist’s version, the problem with the journalist’s wording, and suggested language.

Study design: Prospective cohort study of dietary fat and age-related maculopathy (observational)
Researchers’ version of results: A 40% reduction of incident early age-related maculopathy was associated with fish consumption at least once a week.
Journalist’s version of results: Eating fish may help preserve eyesight in older people.
Problem: “Preserve” and “help” are both active and causal; “may help” sounds like a caveat designed to convey uncertainty, but causality is still implied.
Suggested language: “People who ate fish at least once a week were observed to have fewer cases of a certain type of eye problem. However, a true experimental randomized trial would be required in order to attribute this to their fish consumption, rather than to some other factor in their lives. This was an observational study – not a trial.”

Study design: Prospective cohort study of the relationship between free-living activity energy expenditure and mortality in older adults (observational)
Researchers’ version of results: Activity energy expenditure was strongly associated with lower risk of mortality in healthy older adults. For every 287 kcal/day in free-living activity energy expenditure, there is approximately a 30% lower risk of mortality.
Journalist’s version of results: The authors calculated that participants who did 75 minutes a day of activities… lowered their risk of dying by 30%…
Problem: “Lowered their risk” is causal; “strongly associated with lower risk” is not.
Suggested language: “The researchers observed that people who used more energy in daily living had a lower risk of dying (within a certain time period). However, an observational study like this one can’t prove that using more energy in daily activity actually caused the lower risk of dying, because other factors may have played a role.”

Study design: Prospective cohort study of the relationship between coffee consumption and diabetes among postmenopausal women (observational)
Researchers’ version of results: Compared with women who reported 0 cups of coffee per day, women who consumed 6 or more… had a 22% lower risk of diabetes.
Journalist’s version of results: Overall, those who drank the most [coffee] were 22 percent less likely to have diabetes, with decaf drinkers reaping somewhat greater benefit…
Problem: “22 percent less likely” is correct; “reaping greater benefit” is causal.
Suggested language: “Overall, those who drank the most coffee were 22 percent less likely to have diabetes. But, this type of study cannot prove that coffee drinking actually caused the lower chance of getting diabetes. A randomized trial is needed to show cause and effect.”

Study design: Prospective cohort study of fish intake and coronary heart disease in women (Nurses’ Health Study; observational)
Researchers’ version of results: Among women, higher consumption of fish… is associated with a lower risk of coronary heart disease (CHD), particularly CHD deaths.
Journalist’s version of results: Women who ate fish 5 times a week cut their risk of dying later from a heart attack by half.
Problem: “Cut their risk of dying” is causal.
Suggested language: “Compared to women who rarely ate fish, those who ate fish regularly had less heart disease and related death. But, this type of study, which just observes people, rather than randomly assigning them to eat fish or not, cannot prove that fish consumption had a protective effect.”

Study design: Prospective cohort study of aspirin use and cancer incidence among U.S. men and women (observational)
Researchers’ version of results: Long-term daily use of adult-strength aspirin may be associated with modestly reduced overall cancer incidence.
Journalist’s version of results: Higher aspirin dose seems to stave off some cancers… The strongest effect was for colon cancer.
Problem: “Stave off” is causal and active; “effect” is causal. “Seems to,” used as a caveat, does not undo the implication of causality.
Suggested language: “Because the study was based on observation rather than a true experiment, we still don’t know whether aspirin truly had a ‘protective effect’ against cancer. A randomized trial would be needed to prove that causal link.”

Study design: Case-control study of alcohol use and risk of breast cancer (observational)
Researchers’ version of results: Ever-use of alcohol over the past 20 years was associated with a 1.3-fold increased risk of breast cancer…
Journalist’s version of results: …drinking alcohol at any time in the previous 20 years increased breast cancer risk 30 percent.
Problem: “Increased” was converted into an active, causal verb, though the researchers had used it as an adjective in a noncausal statement.
Suggested language: “But readers shouldn’t jump to the conclusion that alcohol use increases breast cancer risk. That’s a conclusion that such an observational study can’t reach. Other factors in the women’s lives may have accounted for the risk. Only a randomized clinical trial can establish a cause.”

Study design: Nested case-control study of the relationship between acid suppression and hip fractures in patients (observational)
Researchers’ version of results: Long-term [acid suppression] therapy, particularly at high doses, is associated with an increased risk of hip fracture.
Journalist’s version of results: Drugs that suppress acids may make fractures more likely… Taking proton pump inhibitors for more than a year increased the likelihood of a hip fracture by 44 percent.
Problem: “Make fractures more likely” is causal, as is “increased the likelihood”; the caveat “may” does not undo the suggestion of causality.
Suggested language: “The study showed that people who took proton pump inhibitors for more than a year were 44 percent more likely to have a hip fracture. But concluding that the drugs caused the fractures would require a randomized trial that includes a control group who didn’t take the drugs. In this observational study some other factor might have increased fractures. That doesn’t mean that the statistical link (association) isn’t real; it just means a study like this can’t prove that the drugs were the culprits.”

News writers sometimes attempt to qualify results by using such words as “seems,” “may,” or “appears.” These words are intended to convey uncertainty, which is a healthy impulse when describing imperfect studies (i.e., most of them), but they still leave the reader with the idea that, however uncertain the results, the relationship between the exposure and the outcome is one of cause and effect.

Although much of our concern is with passive verbs that reporters convert to active, or with adjectives (“lower” risk) that reporters convert to verbs (“lowered” the risk), nouns that imply causation are another frequent problem. For example, “the protective effect,” “protection,” or “the benefit” often appears in reports about observational studies. We urge journalists to avoid such language. An alternative might be, “Persons who ate the most fish were observed to have fewer heart attacks. However, to attribute this observation to fish consumption, rather than to some other factor in their lives, a randomized trial is required.”

