
Not long ago, I noticed a new term trending in social media wellness circles: “certified hormone specialist.” I could have investigated it the old-fashioned way: googling, calling up an expert or two, digging into the scientific literature. I’m accustomed to researching suspicious certifications for my podcast, Conspirituality, which covers how health misinformation metastasizes online. Instead, I tried something new. I asked a couple of chatbots: What training does someone need to specialize in female hormones?

The bots pointed me toward an “advanced 12-month self-paced continuing education program in hormone health” run by Ashe Milkovic, a Reiki practitioner and homeopath. Then things really got interesting: “Alternatively, one can become an endocrinologist,” the AI added, before citing the 13 years of education required, including medical school and residency training. For the casual reader, “alternatively” puts these two options on equal footing, never mind that one is a rigorous path rooted in science while the other is a yearlong course invented by someone with no medical background. When I asked ChatGPT-4 whether Milkovic’s certification program is legit, it replied that the training is part of the field of “functional medicine,” neglecting to mention that functional medicine is a pseudomedical discipline not recognized by any of the 24 boards that certify medical specialists.

This wasn’t an isolated chatbot fail. When I asked whether there was evidence to support the supposed health benefits of trendy coffee enemas, whose proponents claim they treat cancer and autism, Microsoft’s Copilot offered me links to purchase kits. When I asked it to vet the claim that turmeric supplements could cure “inflammation” and “oxidative stress,” it warned me against consuming them due to excessive levels of curcumin, and then pointed to sites selling—yep!—turmeric supplements. (Coffee enemas have not been proved effective for anything but causing dangerous side effects. Some evidence suggests dishes that contain turmeric may have benefits, but supplements aren’t absorbed well.)

Even when the bots injected notes of skepticism, the links they provided often seemed to contradict their advice. When I asked, “What are credible alternative therapies for treating cancer?” Copilot assured me alternative medicine cannot cure cancer, but linked to the Cancer Center for Healing in Irvine, California. Among its offerings are hyperbaric oxygen therapy (which, despite wild internet claims, has only been proved effective for a handful of conditions involving oxygen deprivation, the FDA warns) and ozone therapy (the agency deems ozone a toxic gas with no known medical applications).

We know chatbots are unreliable entities that have famously “hallucinated” celebrity gossip and declared their love for New York Times reporters. But the stakes are much higher when they amplify dubious health claims churned out by influencers and alternative medicine practitioners who stand to profit. Worse, the bots create confusion by mixing wellness propaganda with actual research. “There’s a mindset that AI provides more credible information than social media right now, particularly when you’re looking in the context of search,” says Stanford Internet Observatory misinformation scholar Renée DiResta. Consumers are left to vet the bots’ sourcing on their own, she adds: “There’s a lot of onus put on the user.”

Bad sourcing is only part of the problem: AI also lets anyone generate health content that sounds authoritative. Creating complex webs of such content used to require technical know-how. But “now you don’t need specialized computers in order to make [believable AI-generated material],” says Christopher Doss, a policy researcher at the nonprofit RAND Corporation. “Obvious flaws exist in some deepfakes, but the technology will only keep getting better.”

Case in point: Clinical pharmacist and AI researcher Bradley Menz recently used AI to produce convincing health disinformation, complete with fabricated academic references and false testimonials, for a study at Australia’s Flinders University. Using a publicly available large language model, Menz generated 102 blog posts (more than 17,000 words) on vaccines and vaping that were rife with misinformation. In less than two minutes, he also created 20 realistic images to accompany the posts. The effects of such AI-generated materials “can be devastating as many people opt to gain health information online,” Menz told me.

He’s right that health misinformation can have disastrous consequences. Numerous listeners of my podcast have told me about loved ones they lost after those family members sought “alternative” treatments for cancer or other health problems. The stories follow a similar arc: The family member is drawn into online communities that promise miraculous healing and abandons medications or declines surgery. When the supplements and energy-healing workshops fail to cure the disease, the alternative practitioners deny responsibility.

Or consider the proliferation of anti-vaccine disinformation, largely driven by activists weaponizing social media and online groups. The result: Since 2019, vaccination rates among kindergartners have dropped by 2 percentage points, and exemption rates have risen in 41 states. More than 8,000 schools are now at risk for measles outbreaks.

AI creators cannot magically vanquish medical misinformation; after all, they’ve fed their chatbots an internet filled with pseudoscience. So how can we train the bots to do better? Menz believes we’ll need something akin to the protocols the government uses to ensure the safe manufacture and distribution of pharmaceuticals. That would require action from a Congress in perpetual turmoil. In the meantime, President Biden signed an executive order last October that includes some measures to stanch the spread of misinformation, such as watermarking AI-generated materials so that users know how they were created. And in California, state Sen. Scott Wiener recently introduced a bill to strengthen safety measures for large-scale AI systems.

But fighting the spread of AI-generated health misinformation will take more than policy fixes, says Wenbo Li, an assistant professor of science communication at Stony Brook University, because chatbots “lack the capacity for critical thinking, skepticism, or understanding of facts in the way humans do.” Li is developing lessons on how to judge the quality of the information chatbots produce; his current work trains Black and Hispanic communities, groups underserved by the health care system, to “critically evaluate generative AI technologies, communicate and work effectively with generative AI, and use generative AI ethically as a tool.” Stanford’s DiResta agrees that we need to work on the “mindset that people have as they receive information from a search engine,” say, by teaching users to ask chatbots to draw only on peer-reviewed sources. Tweaking the bots might help stem the flow of misinformation, but to build sufficient herd immunity, we’ll need to train something much more complicated: ourselves.
