Does coffee cause cancer—or help prevent it? What about red wine? These are some of the vital questions that scientists have long struggled to answer, with journalists by their side to misreport the findings.
For the media, scientific studies can be a great source of stories: Someone else does all the work on reaching a conclusion that appears to directly affect something your audience cares about (often their health). What’s more, that conclusion comes with a shiny gloss of indisputable factuality: “This isn’t just some made-up nonsense—it’s science! It must be true.” We’ve all seen how scientific conclusions that were carefully vetted by other scientists can be reduced or distorted beyond recognition for the sake of TV ratings or story clicks. I’m sure I’ve done it myself.
The systemic failure of science communication by mass media is the topic of John Oliver’s latest diatribe, and he really nails it. There are a variety of problems all mashed together:
- Journalists often don’t take the time, or have the skills, to actually read through, comprehend, and translate scientific findings that can be very technical. After all, scientific papers are written for other scientists, not for the general public, so it takes a certain amount of training and effort to unpack what they mean. But that’s, like, hard and boring, and it’s not as if your audience will know any better if you screw it up.
- Journalists like big, bold conclusions: “X Thing Cures Cancer!” Scientists don’t work like that. Most peer-reviewed papers focus on very narrow problems and wade far into the weeds of complicated scientific debates. That doesn’t mean studies are all too esoteric to be useful (although some undoubtedly are). It means that scientists draw their overarching conclusions about the universe from a broad reading of entire bodies of literature, not individual studies. Single studies rarely yield revolutions; instead, our understanding evolves slowly through tedious, piecemeal work. Scientists want to understand the forest; journalists often just want to show you, dear reader, this one REALLY AWESOME IMPORTANT tree they just found. Those conflicting interests can lead to misleading reporting.
- Not all studies are created equal; some contain a variety of inadequacies that should give you pause about their conclusions. But journalists often do a poor job of reporting on these inadequacies, either because they don’t do enough reporting to know they exist or because reporting them would undermine the big, bold conclusion the reporter wants to tell you about. Some studies have extremely small sample sizes (the sketch after this list shows how easily that produces flukes). Some relied on rats or monkeys or whatever, but the journalist doesn’t explain that the conclusion might not hold for humans. Actual studies published in peer-reviewed journals are often given equal air time to “studies” that some activist/lobbying group/bozo in his garage threw together. And some studies lack important context or conflict with preexisting science, something journalists often fail to point out.
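To see how a tiny sample can manufacture a headline, here’s a minimal sketch in Python (my own illustration, not anything from Oliver’s segment): it simulates “studies” of a treatment that does nothing at all and counts how often chance alone produces a seemingly large effect. The 0.5 effect-size cutoff is an arbitrary stand-in for “big enough to write up.”

```python
import random
import statistics

random.seed(42)

def fake_study(n):
    """Run one 'study': compare two groups drawn from the SAME
    distribution, so any apparent difference is pure noise."""
    control = [random.gauss(0, 1) for _ in range(n)]
    treated = [random.gauss(0, 1) for _ in range(n)]
    return statistics.mean(treated) - statistics.mean(control)

# With n = 5 per group, sampling noise alone routinely produces
# "effects" that clear the bar; with n = 500 it almost never does.
for n in (5, 500):
    big = sum(abs(fake_study(n)) > 0.5 for _ in range(10_000))
    print(f"n={n}: {big / 10_000:.1%} of null studies show a 'large' effect")
```

Run it and something like four in ten of the five-subject studies look impressive, while essentially none of the five-hundred-subject studies do, even though every single one was studying nothing.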
All these failures lead to confusion and erode the public’s trust in scientists. As Oliver points out, bad reporting about scientific research on the health effects of smoking was a major tool of the tobacco industry in its fight against smoking regulations. The same kind of thing happens all the time now with climate change research. See, for example, the so-called global warming “hiatus.” Over the last couple of years there has been a healthy debate in the scientific community about whether global warming slowed down over the last decade, and if so, why. In part because of sloppy reporting, the debate was misrepresented by climate change deniers as evidence that global warming doesn’t exist at all—which was never what climate scientists were arguing. (That debate is ongoing; Scientific American has a good update on the latest.)
The important thing to remember is that any one individual study isn’t worth very much and can never really “prove” anything. It’s not as if Charles Darwin wrote one study about evolution and rested his case. It took years of additional research by other scientists to validate his theory. In fact, as Oliver notes, the intense public pressure for scientists to come up with big, bold discoveries actually undermines a very important step in the scientific method: reproducing the results of other scientists. Replicating someone else’s study is a good way to find out whether the original was a fluke or a genuine finding. Recall the scandal last fall, when dozens of psychology papers were found to fail a reproducibility test, casting serious doubt on their conclusions. That kind of fact-checking doesn’t happen enough, a trend some observers have called a “crisis of credibility.”
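Here’s a back-of-the-envelope sketch of why replication is such a powerful fluke filter (again my own illustration, assuming independent studies and the conventional 5 percent false-positive threshold):

```python
import random

random.seed(0)

TRIALS = 100_000
FALSE_POSITIVE_RATE = 0.05  # conventional p < 0.05 threshold

def study_of_nothing():
    """A study of a nonexistent effect 'succeeds' 5% of the time by chance."""
    return random.random() < FALSE_POSITIVE_RATE

single = sum(study_of_nothing() for _ in range(TRIALS))
replicated = sum(study_of_nothing() and study_of_nothing() for _ in range(TRIALS))

print(f"Flukes surviving one study:           {single / TRIALS:.2%}")      # ~5%
print(f"Flukes surviving study + replication: {replicated / TRIALS:.2%}")  # ~0.25%
```

Under those assumptions, demanding just one independent replication cuts the odds of pure noise surviving from about 5 percent to about 0.25 percent. That’s the fact-checking we’re skipping.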
As a general rule (one I’ve surely broken as much as anyone), journalists should avoid making too big a stink about individual studies, or at the very least should serve them with a very large grain of salt. Kudos to Oliver for reminding us why that matters.