Lawyers for former Trump attorney Michael Cohen have been ordered to produce copies of three legal cases cited in their motion to terminate his supervised release. The problem, the court said, is that the cases do not exist, leading to speculation that AI was used to generate the motion or the citations it contains.

In a court filing on Tuesday, a federal judge warned that any submission to the court is treated as a sworn declaration. The order said Cohen’s team must provide a “thorough explanation” of how the attorneys’ motion came to cite cases that do not exist and “what role, if any, Mr. Cohen played in drafting or reviewing the motion before it was filed.”

Cohen’s attorney David M. Schwartz “shall, no later than December 19, 2023, provide copies of the three cited decisions to the Court,” U.S. District Judge Jesse Furman wrote. “If he is unable to do so, Mr. Schwartz shall, by the same date, show cause in writing why he should not be sanctioned pursuant to Rule 11 of the Federal Rules of Civil Procedure, 28 U.S.C. § 1927, and the inherent power of the Court for citing non-existent cases to the Court.”

In 2018, Cohen was sentenced to three years in prison for crimes including campaign finance violations, tax evasion, and lying to Congress. Furman said any ruling on Cohen’s motion would be deferred until his legal team provides proof that the cited cases exist.

The incident is not the first time phantom citations have surfaced in a Big Apple courtroom.

“It seems Michael Cohen's lawyer or his AI cited non-existent cases in his bid to end supervised release early on, which Inner City Press has been reporting, as it did on an earlier SDNY misuse of AI case,” Inner City Press journalist and author Matthew Russell Lee wrote on Twitter.

While the New York court did not expressly mention AI in its filing, other legal cases raising the same question have already arisen this year. In May, Steven Schwartz, a lawyer in Mata v. Avianca, admitted to “consulting” ChatGPT for research and including the AI’s responses in court documents. The cases ChatGPT supplied turned out to be fabrications, the product of AI hallucinations.

In June, a Georgia radio host, Mark Walters, filed a lawsuit against ChatGPT creator OpenAI after the chatbot falsely accused him of embezzlement in The Second Amendment Foundation v. Robert Ferguson.

"OpenAI defamed my client and made up outrageous lies about him," Mark Walters' attorney, John Monroe, told Decrypt, adding that there was no choice but to file the complaint against the AI developer. "[ChatGPT] said [Walters] was the person in the lawsuit, and he wasn't."

In October, attorneys for former Fugees member Pras Michel demanded a new trial, alleging that his former legal team relied on an artificial intelligence model that hallucinated its responses, costing Michel the case.

“Don’t trust - verify! Attorneys: Stop using AI/ChatGPT without independently checking the details,” attorney and founder of Silver Key Strategies, Elizabeth Wharton, posted on Twitter.

OpenAI has stepped up efforts to combat AI hallucinations, including hiring so-called red teams to probe its AI models for flaws and vulnerabilities.

On Thursday, Fetch AI and SingularityNET announced a partnership to use decentralized technology to tackle AI hallucinations, the technology’s habit of producing inaccurate or irrelevant outputs.

“SingularityNET has been working on a number of methods to address hallucinations in LLMs. The key theme across all of these is neural-symbolic integration. We have been focused on this since SingularityNET was founded in 2017,” SingularityNET Chief AGI Officer Alexey Potapov told Decrypt. “Our view is that LLMs can only go so far in their current form and are not sufficient to take us towards artificial general intelligence but are a potential distraction from the end goal.”

Edited by Ryan Ozawa.
