Hyped-up science erodes trust. Here’s how researchers can fight back.

In 2018, psychology PhD student William McAuliffe co-authored a paper[1] in the prestigious journal Nature Human Behaviour. The study’s conclusion — that people become less generous over time when they make decisions in an environment where they don’t know or interact with other people — was fairly nuanced.

But the university’s press department, perhaps in an attempt to make the study more attractive to news outlets, amped up the finding. The headline of the press release heralding the publication of the study read “Is big-city living eroding our nice instinct?”[2]

From there, the study took on a new life as stories in the press appeared with headlines like “City life makes humans less kind to strangers[3].”

This interpretation wasn’t correct: The study was conducted in a lab, not a city. And it measured how people invested money, not overall kindness.

But what was most frustrating to McAuliffe was that the error originated with his own university’s press department. The university, he says, got the idea from a comment from one of the study authors “about how our results are consistent with an old idea that cities are less hospitable … than rural areas.”

It didn’t matter that the text of the press release got the study details right. “I even had radio stations call me, disappointed to learn that our study had not been a field study comparing behavior in cities to rural areas,” he says.

This story will be familiar to many scientists, and it points to a big, stubborn problem in how science gets communicated. It’s infuriating for researchers to see their work distorted and overhyped, and it makes them distrust the media.

Frequently, stories about scientific research[4] declare that an exciting new treatment works but fail to mention that the intervention was tested only in mice. Others breathlessly report what the latest study says about the health benefits (or risks) of coffee without assessing the weight of the available evidence.

The truth is, a lot of these misconceptions start the way the one around McAuliffe’s study did: with the university press release[5]. But here, there is hope: Even though a lot of hyped-up science starts with university press releases, new research finds that those same press releases may be a powerful tool to inoculate reporters against exaggerated claims.

Many journalists just follow the lead of press releases

To be honest, the research on how scientific press releases translate into press coverage doesn’t make my profession look all that good. It suggests that we largely just repeat whatever we’re told in press releases, for better or worse. That’s concerning: If we can’t evaluate the claims of press releases, how can we evaluate the merits of the studies themselves (which aren’t immune to shoddy methods[6] and overhyped findings)?

A 2014 correlational study found[7] that when press releases exaggerate findings, the news articles that follow are more likely to contain exaggerations as well. My colleague Julia Belluz wrote about it at the time. Basically, it found that a lot of science journalism just parroted the (bad) claims made in press releases:

When a press release included actual health advice, 58 percent of the related news articles would do so too (even if the actual study did no such thing). When a press release confused correlation with causation, 81 percent of related news articles would. And when press releases made unwarranted inferences about animal studies, 86 percent of the journalistic coverage did, too.

The difference between what scientists report in their studies and what journalists report in their articles can look like a game of broken telephone. A study investigating the neural underpinnings of why shopping is joyful[8] gets garbled into a piece about how your brain thinks shopping is as good as sex[9]. A study exploring how dogs intuit human emotions[10] becomes “Our dogs can read our minds[11].”

But here’s a key thing: Scientists and universities can ensure the first line in the telephone chain is loud and clear.

The 2014 study “was a real wake-up call,” Chris Chambers, a professor of cognitive neuroscience at Cardiff University and co-author on that paper, says. “We thought, okay, hang on a second, if exaggeration is originating in the press release, and if the press release is under the control of the scientist who signs it off, then it’s rather hypocritical for the scientist to then be turning around and saying, ‘Hey, reporter, you exaggerated my research,’ or, ‘You got it wrong.’”

Recently, Chambers and colleagues revisited the topic, with a new study published in BMC Medicine[12]. It too finds evidence that when university press releases are made clearer, more accurate, and free of hype, science news reported by journalists gets better as well.

What’s more, the study found that more accurate press releases don’t receive less coverage, and the coverage they do receive tends to be more accurate.

“The main message of our paper is that you can have more accurate press releases without reducing news uptake, and that’s good news,” says Chambers.

In the new paper, he and his team actually did an experiment: Working with nine university press offices in the UK, the research team intercepted 312 press releases before they went out to journalists. The press releases were randomized to receive an intervention or not; the intervention consisted of Chambers’s team stepping in to make sure the content of each press release accurately reflected the scientific study. For instance, they made sure the press release emphasized when a study was correlational and could not establish causation.

On Twitter, I asked scientists[13] to give me some examples of research results they felt were misinterpreted by the press. Here’s a particularly alarmist headline one researcher sent me, from the Daily Mail[14]: “Are smartphones making us STUPID? ‘Googling’ information is making us mentally lazy, study claims.”

[Screenshot of the Daily Mail headline]

This seems concerning, right? Who wants to be made less intelligent by their smartphone?

But the story was based on a correlational study, so no causal claims can be made from it. In explaining the limitations of the work, the study authors even wrote[15] that “the results reported herein should be considered preliminary.”

Yes, the Daily Mail ought to have read this section of the study. Still, where did it get the idea in the first place?
