In the 1960s, a six-year trial of a potential heart-disease drug was conducted in 19 hospitals across Scotland. Researchers gave 350 subjects with heart problems a drug containing the agent clofibrate; 367 got a placebo. For the most part, clofibrate proved statistically better at prolonging the subjects’ lives. But among the subgroup of participants who had recently suffered a heart attack, clofibrate was only as good as the placebo. Normally, the mortality rate following a heart attack was four to nine percent per year. But the placebo group’s rate was less than three percent.
It’s possible that the placebo group was simply an unusual sampling of heart-attack patients with an above-average survival rate. That was the conclusion the researchers published in 1971. But, as it turns out, the study’s placebo contained olive oil, which is now known to fight heart disease. The possibility that the placebo itself was protective apparently never occurred to the researchers. Because they published their placebo’s ingredients, however, others were later able to examine and question their conclusion. Yet more often than not, researchers don’t disclose what’s in their placebos at all, making oversights like the Scottish researchers’ nearly impossible to catch.
In an article titled “What’s in Placebos: Who Knows?”, published this week in the Annals of Internal Medicine, researchers combed through more than 150 recent placebo-controlled trials from four medical journals. They found that the placebos’ ingredients were disclosed in only about a quarter of the trials. Among trials that used placebos in pill form (the majority), just eight percent revealed the ingredients.
The study’s lead researcher, Beatrice Golomb of the University of California, San Diego School of Medicine, has been investigating placebos since she learned, more than a decade ago, that the FDA has no standard for disclosing placebo ingredients. “The obvious potential for problems became clear,” she says.
There is no such thing as a true placebo: a substance with no physiological effects at all. We tend to associate placebos with sugar pills, but they are often more complicated. Ideally, a placebo looks and tastes like the drug it is being compared against; as Golomb explains in her paper, a trial of a drug with a fishy aftertaste would require a placebo with the same taste. Ingredients that alter flavor, color, and size are therefore common.

Yet most placebo-controlled studies do not account for the possible effects of these ingredients in their findings, and neither do the medical decisions based on those findings. Researchers might believe a placebo has no effect at the time of the trial, only to learn later, as with olive oil, that it was skewing the results. Undisclosed placebos also make many drug studies difficult to replicate. Researchers do try to repeat trials, but if they use different placebos, says Golomb, “this could account for why we get different results.” And without information on which placebos were used in which studies, it is nearly impossible to determine how much this has affected the development of the drugs on the market today.
Even though the researchers in the 1971 Scottish study didn’t consider the effects of their olive-oil placebo, their oversight was eventually caught because they published their placebo ingredients, an act of scientific responsibility that still defies convention. The reverse can also happen: an ineffective drug can look better because it is being compared against a placebo with harmful effects. After Golomb published a letter raising similar issues in Nature in 1995, she received a call from HIV researchers who told her of a drug study they had been forced to abort because the placebo group was “dropping like flies.” The placebo in that study contained lactose, and people with HIV are at increased risk of lactose intolerance.
It’s hard to say how often researchers knowingly exploit a placebo’s side effects to make a drug appear more effective. “All I can say,” Golomb adds, “is there is the potential for misuse.”