Georgia radio host Mark Walters is suing OpenAI after its massively popular ChatGPT accused him of embezzlement in the precedent-setting case The Second Amendment Foundation v. Robert Ferguson. The catch? Walters is not named in that case, nor has he ever worked for the Second Amendment Foundation.
"OpenAI defamed my client and made up outrageous lies about him," Mark Walters' attorney John Monroe told Decrypt, adding that there was no choice but to file the complaint against the AI developer. "[ChatGPT] said [Walters] was the person in the lawsuit and he wasn't."
Documents filed in the Superior Court of Gwinnett County, Georgia, claim that journalist Fred Riehl gave ChatGPT a URL pointing to the SAF v. Ferguson case and asked the chatbot for a summary. ChatGPT erroneously named Mark Walters as the defendant, the complaint says.
ChatGPT allegedly generated text saying the case "[i]s a legal complaint filed by Alan Gottlieb, the founder and executive vice president of the Second Amendment Foundation (SAF), against Mark Walters, who is accused of defrauding and embezzling funds from the SAF." The text also claimed that Walters allegedly misappropriated funds for personal expenses.
Riehl reached out to Gottlieb about the response, and Gottlieb said the statement made by ChatGPT was false, according to the court document.
Walters is demanding a jury trial, unspecified general and punitive damages, and attorney's fees.
While lawsuits against AI developers are still new legal territory, Monroe is confident his client will win.
"We wouldn't have brought the case if we didn't think we were going to be successful," he said.
But others are not as confident.
"For most claims of defamation within the United States, you have to prove damages," Cal Evans, in-house counsel for Stonehouse Technology Group, told Decrypt.
"Although the suit references the 'hallucinations,' it is not an individual communicating facts; it is software that correlates and communicates information on the internet," Evans said.
AI hallucinations refer to instances in which an AI generates untrue results not backed by real-world data. Hallucinations can take the form of false content, news, or information about people, events, or facts.
In its ChatGPT interface, OpenAI adds a disclaimer to the chatbot that reads, "ChatGPT may produce inaccurate information about people, places, or facts."
"It is possible that [OpenAI] can cite that they are not responsible for the content on their site," Evans said. "The information is taken from the public domain so [it is] already out in the public."

In April, Jonathan Turley, a U.S. criminal defense attorney and law professor, claimed that ChatGPT accused him of committing sexual assault. Worse, the AI made up and cited a Washington Post article to substantiate the claim.
This "hallucination" episode was followed in May, when Steven A. Schwartz, a lawyer in Mata v. Avianca Airlines, admitted to "consulting" the chatbot as a source while conducting research. The problem? The results ChatGPT provided Schwartz were all fabricated.
"That is the fault of the affiant, in not confirming the sources provided by ChatGPT of the legal opinions it provided," Schwartz wrote in the affidavit submitted to the court.
In May, OpenAI announced new training methods that the company hopes will curb the chatbot's habit of hallucinating answers.
"Mitigating hallucinations is a critical step towards building aligned AGI," OpenAI said in a post.
OpenAI has not yet responded to Decrypt's request for comment.