ChatGPT Is Making Up Lies — Now It’s Being Sued for Defamation

Credit: Rafael Henrique/SOPA Images/LightRocket/Getty Images

Traditionally, science fiction has imagined existential war between humankind and artificially intelligent machines. But what if, instead of directly attacking, the bots just made up nasty gossip about us?

That’s the flavor of dystopia we’re dealing with these days — and the situation has now produced its first lawsuit. Georgia radio host Mark Walters, founder of Armed American Radio and self-proclaimed “loudest voice in America fighting for gun rights,” has filed a legal complaint against OpenAI, alleging that the company’s ChatGPT bot defamed him by erroneously claiming he had been accused of financial crimes.

According to the suit, Fred Riehl, editor-in-chief of the gun news website AmmoLand.com, asked ChatGPT to summarize a case he was reporting on, The Second Amendment Foundation v. Robert Ferguson. In that federal lawsuit, a gun-ownership advocacy group argued that Washington State Attorney General Bob Ferguson infringed on the constitutional right to bear arms by overreaching in his efforts to enact gun control.

But the synopsis ChatGPT provided to Riehl falsely stated that the case involved the Second Amendment Foundation (SAF) suing Mark Walters for “defrauding and embezzling funds” from the organization as its “treasurer and chief financial officer.” While Walters is strongly aligned with SAF’s Second Amendment activism and has spoken at its events, he holds no position in the group and has never been its treasurer or CFO. The software nonetheless composed the full text of a fictional complaint, complete with a bogus case number. (Afterward, Riehl contacted SAF, which confirmed that ChatGPT’s account was false, and he never repeated any of it in an article. He presumably shared the material with Walters, too, though he did not immediately return a request to confirm this detail.)

In essence, the chatbot invented untrue information — OpenAI calls such responses “hallucinations” and says it is working to combat them — that Walters now claims is “libelous” and harmful to his reputation, even though no media outlet ever published it. A similar incident occurred two months ago, when the bot inaccurately claimed that an Australian politician had been convicted in a bribery scheme; the politician sent OpenAI a letter demanding a correction. The difference is that Walters is actually seeking damages in court via unprecedented civil litigation that may set the tone for future AI cases. So, does he have a shot?

“How do you apply our fault concepts to a nonhuman actor?” asks Lyrissa Lidsky, who holds the Raymond & Miriam Ehrlich Chair in U.S. Constitutional Law at the University of Florida Levin College of Law. Her work covers free speech on social media as well as defamation and privacy, and she thinks cases like Walters’ will present a “real challenge” for the courts.

“We’ve had lawsuits based on malfunctions in driverless cars,” Lidsky says, citing another example of a technical glitch with potentially harmful consequences. “This is different because we normally treat harms created by speech somewhat differently than we treat physical harms caused by products.” ChatGPT is certainly a product, she says, and you can argue that it’s “defective and generating harm to people because of its effects.” But, she adds, ordinary product-liability law “doesn’t apply in a straightforward fashion” in this case, and neither does defamation law.

Because the central question of fault here “falls between two sets of legal principles,” Lidsky says, we just don’t know whether OpenAI or other tech companies can realistically be found liable for the content their bots produce. What a suit like this could show us is whether existing concepts of jurisprudence are adequate to resolve the dispute. “Some judge is going to have to say, ‘Well, can I adapt ordinary [legal] principles to cover the situation, or is it just too far afield from our ordinary situation?'” Lidsky says.

As for other ways Walters might have pursued legal recompense, Lidsky says claiming false light invasion of privacy — similar to defamation but pertaining to falsehoods that inflict emotional and personal injury on a plaintiff rather than reputational damage — could be an alternate route. Arguing pure negligence, meanwhile, would be less promising, as that “normally requires some kind of physical harm,” she says. Ultimately, Walters’ complaint is about “dignitary harms,” and she’s not convinced he suffered any, given that the material was only shown to one person, who didn’t believe it. “The only reason I know about this is because of the lawsuit itself,” Lidsky says, “not because of the ChatGPT ‘publication.'”

She’s concerned, however, that the software has gone beyond getting the facts wrong to create fake documentation that supports its lies, right down to a phony case number. “That’s really disturbing,” Lidsky says. “That creates a simulacra of reliability,” making “the false defamatory allegation look more credible.” And while OpenAI posts disclaimers warning users not to presume that ChatGPT’s commentary is always accurate, Lidsky believes they “aren’t going to do much to protect [OpenAI] from liability.”

“This case is very strange,” says Ari Cohn, a Chicago attorney specializing in First Amendment and defamation law. “Strange enough that I have to wonder whether it was something of a setup.” He notes that Walters’ complaint doesn’t include the exact prompt that Riehl originally fed into ChatGPT, leaving open the possibility that the reporter had somehow steered it toward making up the story about Walters’ alleged corruption as an SAF employee. Walters’ attorney, John Monroe, whose Georgia firm is focused on gun rights, tells Rolling Stone that Riehl supplied a link to the SAF case document and asked ChatGPT, “Can you read this and in a bulleted list summarize the different accusations or complaint[s] against the defendant?”

“In any event, this lawsuit is unlikely to succeed, as there are issues that present themselves with nearly every element of defamation,” Cohn says. For one thing, Walters will have to prove that he is the “Mark Walters” identified in the bot’s responses (“not a super unique name,” Cohn observes), and specifically that “the average reader could reasonably think it was about him.” But that, Cohn says, is the easy part.

A bigger hurdle with this (or any) attempt to frame ChatGPT text as legally defamatory is that it’s “unclear whether any reasonable person would consider ChatGPT output a statement of fact,” given the coverage of its hallucination issue, Cohn points out. Walters, he says, faces still further difficulty because, as a well-known public figure, he will have to prove actual malice, i.e., “that the statement was made with knowledge of its falsity or having entertained serious doubts about its truth.” Seeing as ChatGPT lacks both knowledge and intent, Walters was left to sue OpenAI, but proving malice on the company’s part with respect to this particular statement may be a stretch. (The tech company did not respond to a request for comment. Monroe tells Rolling Stone, “I do feel strongly about the case.”)

The seeming absence of damages is yet another problem. “There are no allegations that anyone other than Riehl saw the output, and he clearly didn’t believe it to be true,” Cohn says. “It wasn’t widely disseminated, and it’s not clear ChatGPT would even present that output again. The purpose of defamation law is not to assuage hurt feelings or offense taken to someone’s words. It is to remedy specific, quantifiable damages caused by a false statement.”

Moreover, Walters’ accusation of “defamation per se” — a statement that does not require the plaintiff to prove damages because it’s clearly defamatory on its face — is not particularly solid, in Cohn’s analysis. “ChatGPT didn’t say that Walters committed fraud or embezzled, it said someone else sued him claiming that he did,” he says. “Maybe he gets by on a ‘secondary’ allegation like that, but I think the combination of issues with this lawsuit mean it’s not going anywhere.”

Of course, we were always going to cross this legal horizon sooner or later, and if the real motivation behind Walters’ complaint was to get there first, then mission accomplished. Depending on what happens next, it could have a lasting influence on how we treat AI-written falsehoods. For now, they’re just algorithmic nonsense. Tomorrow, they could be actionable.
