Doctors are worried about the ideas bots are putting in patients’ heads

People are increasingly turning to bots powered by artificial intelligence for advice when they get sick.

But the bots aren’t always right. Sometimes they make stuff up. And doctors dealing with the panic that can ensue want Washington to regulate this free technology before more time and money are wasted.

Bots like OpenAI’s ChatGPT and Google’s Gemini “have a veneer of human confidence that engenders a greater degree of trust or credibility, which I think is frequently pretty misplaced,” said Dr. Joshua Tamayo-Sarver, a consultant on medical innovation.

The technology has the potential to further democratize access to health information and lead patients to take better care of themselves. But doctors say they have also seen how patients who think they know better than their physicians can demand unnecessary services, wear down care providers or try quack therapies. Washington, far behind on regulating the AI tools already at work in health care, hasn’t proposed rules.

Some in health care think standards are needed to ensure bots rely on reliable sources.

“We need appropriate safeguards because we don't want individuals inappropriately led to believe that something is a diagnosis or a potential treatment,” said Brian Anderson, co-founder of the Coalition for Health AI, an alliance of tech companies, universities and health systems.

But decisions that limit the data undergirding AI could get into thorny terrain — as public health officials learned during the pandemic.

The Supreme Court is set to consider a lawsuit by the attorneys general of Missouri and Louisiana, as well as scientists who opposed Covid-19 lockdowns, that claims the Biden administration violated First Amendment rights by urging social media companies to censor arguments that challenged the government’s handling of the pandemic. The federal appeals court in New Orleans found in September that the administration did.

Google’s Gemini bot tells users not to rely on it for health advice, but still offers it. CEO Sundar Pichai apologized last month after Gemini refused to answer some non-health-care-related queries it deemed sensitive and provided biased and inaccurate information about historical events and philosophical questions.

In the meantime, doctors say they’re starting to see patients arriving in their offices with ideas about what’s ailing them that they got from bots.

In some cases, the AI has helped pinpoint patients’ conditions. A woman in Alaska earlier this year credited ChatGPT with figuring out she had multiple sclerosis. But researchers who’ve tested the bot have also found big problems. A January study by researchers at Cohen Children’s Medical Center in New York found it gave the wrong diagnosis more than 80 percent of the time when presented with pediatric cases.

Long before ChatGPT’s introduction 18 months ago, patients Googled their symptoms to self-diagnose their problems and watched endless drug advertising on TV promising cures.

Still, bots threaten to supercharge self-diagnosis and the problems it causes, doctors fear. Besides Google and OpenAI, tech giants Facebook, Apple and Amazon all have their own bots or are investing in companies making them.

“It is at best a tool to get to know about the disease,” said Som Biswas, a radiologist at the University of Tennessee Health Science Center who has written about ChatGPT as a public health tool.

By contrast, Biswas said, a real doctor offers care no bot could: “You see the patient, feel his pulse, look at whether he has an actual cough — all of these things are impossible for a large language model.”

A regulatory void

The government is only starting to wrap its head around artificial intelligence.

In December, the Office of the National Coordinator for Health Information Technology — a unit within the Department of Health and Human Services — finalized a rule requiring AI developers to share details about algorithms’ training data with health care providers acquiring the technology, as well as how the technology was validated.

But the rule only extends to AI used in health systems and says nothing about chatbots available online.

In an October executive order, President Joe Biden called on federal agencies to ensure that advanced AI products are safe from a public health and national security perspective. But it’s not clear which agency might set rules for chatbots with the potential to give out bogus medical advice.

The Federal Trade Commission, which polices unfair and deceptive business practices, has sent letters to five leading technology companies to examine competition in the AI marketplace.

The agency has also threatened action against social media companies if their platforms harm health, and in theory could target firms if their chatbots do.

Still, it would have to build a case.

In the meantime, chatbot use is growing fast. Already, ChatGPT has 100 million weekly active users. And more bots are on the way: Google debuted Gemini last month, and Apple recently announced it is working on an AI product.

When a bot errs

When asked a medical question, ChatGPT offers a disclaimer: “I’m not a doctor.”

But, as with Gemini, the disclaimer doesn’t stop it from offering diagnoses and home remedies.

In some cases, bots can identify illnesses doctors miss. One child with a rare spinal condition saw 17 doctors before ChatGPT figured out what was wrong.

But sometimes bots’ answers are way off.

When asked recently about a 10-month-old vomiting after Covid-19 vaccination, ChatGPT told this reporter the vaccines aren’t authorized for children under 12. In fact, the FDA expanded vaccine access to children ages 6 months to 5 years in June 2022.

Research shows that when it comes to health misinformation, ChatGPT isn’t good at distinguishing incorrect or biased information from higher-quality data.

When asked about treating Covid-19 with ivermectin or hydroxychloroquine, two drugs promoted by former President Donald Trump in the early days of the pandemic that were found to be ineffective, the bot says the treatments are controversial. But it fails to provide context.

Even when it’s wrong, though, it’s persuasive. Studies have shown that people find ChatGPT more empathetic and understandable than their doctors. It is also a people pleaser, delivering the information users want to hear. The bot has fabricated information and insisted it was right.

Doctors worry people may rely on an AI bot’s advice rather than schedule an appointment, or they may use the bot’s advice to pressure their physicians to authorize unnecessary testing.

Dr. Jeremy Faust, an emergency room physician at Brigham and Women’s Hospital in Boston, said that ideally ChatGPT would tell users that ivermectin and hydroxychloroquine don't work against Covid.

“I can't imagine any technology of this power that doesn't need some kind of guardrails,” he said.

Regulation or self-regulation

What those guardrails might look like isn’t clear. They could come from the government, some suggest, or the private sector could voluntarily establish its own.

“Disinformation has a cost,” said Tamayo-Sarver. “Either we hold accountable people for disinformation … or we regulate your ability to spew disinformation.”

Anderson said he would like to see more transparency around how bots perform for different populations.

For AI tools in health care more broadly, his Coalition for Health AI has proposed a set of safety principles and envisions a network of private sector-run assurance labs that would test AI products.

Anderson has called for transparency about what data is going into AI models and the quality of data that comes back out.

But Anderson acknowledges that the industry disagrees about data sharing and that the discussion about guardrails needs to broaden to include more frontline doctors.

“Part of the conversation that has frankly gone underdiscussed is how we rapidly need to train providers” so that doctors are prepared to engage patients when they arrive with information they got from a bot, he said.

“That's the job,” added Faust. “To help navigate through that.”