OpenAI, the company behind ChatGPT, said Wednesday that it is improving the chatbot's mathematical problem-solving abilities with the goal of reducing AI hallucinations.

"Mitigating hallucinations is a critical step towards building aligned AGI," OpenAI said in a post.

The latest iteration of ChatGPT, GPT-4, launched in March, continuing to push artificial intelligence into the mainstream. But generative AI chatbots have historically had trouble with facts, confidently spitting out false information, errors colloquially known as "hallucinations."

AI hallucinations are instances in which a model generates unexpected, untrue output unsupported by real-world data: fabricated content, news, or information about people, events, or facts.


OpenAI prominently warns users against blindly trusting ChatGPT, presenting a disclaimer that reads, "ChatGPT may produce inaccurate information about people, places, or facts."

While OpenAI did not cite any specific examples that led to the latest research into hallucinations, two recent events illustrated the issue in real-world situations.

In April, Jonathan Turley, a U.S. criminal defense attorney and law professor, claimed that ChatGPT had falsely accused him of sexual harassment. Worse, the AI cited a Washington Post article it had invented to substantiate the claim.

Last week, Steven A. Schwartz, a lawyer in Mata v. Avianca Airlines, admitted to "consulting" the chatbot as a source while conducting legal research. The problem? The case law ChatGPT provided Schwartz was entirely fabricated.


"That is the fault of the affiant, in not confirming the sources provided by Chat GPT of the legal opinions it provided," Schwartz wrote in the affidavit submitted to the court, adding that he "greatly regrets" utilizing generative artificial intelligence to supplement the research. Schwartz swore to never do so again without absolute verification of its authenticity.

In February, technology giant Microsoft gave reporters a demonstration of Bing’s chatbot capabilities, including earnings reports, vacuum cleaner specifications, and travel plans. The results were less than stellar.

“I am shocked that the Bing team created this pre-recorded demo filled with inaccurate information, and confidently presented it to the world as if it were good,” AI researcher Dmitri Brereton, who attended the event, said on Substack. “I am even more shocked that this trick worked, and everyone jumped on the Bing AI hype train without doing an ounce of due diligence.”

Despite these issues, Microsoft is betting big on ChatGPT, incorporating the technology into its Bing search engine after a $13 billion investment in OpenAI.

In its research, OpenAI compared “outcome supervision,” which provides feedback based on a final result, and “process supervision,” which provides feedback for each step in a chain of thought.
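To make the distinction concrete, here is a minimal sketch of what each feedback scheme labels, assuming a toy string-based solution format; the function names are illustrative stand-ins, not code from OpenAI's research.

```python
from typing import Callable, List

def outcome_supervision(final_answer: str, correct_answer: str) -> List[int]:
    """One label for the whole solution, based only on the end result."""
    return [1 if final_answer == correct_answer else 0]

def process_supervision(steps: List[str], step_is_valid: Callable[[str], bool]) -> List[int]:
    """One label per step in the chain of thought."""
    return [1 if step_is_valid(step) else 0 for step in steps]

# A two-step solution whose reasoning is wrong but whose final answer
# happens to be right: outcome supervision rewards it anyway, while
# process supervision flags the faulty step.
steps = ["2 + 2 = 5", "therefore the answer is 4"]
print(outcome_supervision(final_answer="4", correct_answer="4"))  # [1]
print(process_supervision(steps, lambda s: "= 5" not in s))       # [0, 1]
```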

"We evaluate our process-supervised and outcome-supervised reward models using problems from the math test set," OpenAI said. "We generate many solutions for each problem and then pick the solution ranked the highest by each reward model."

The research team concluded that process supervision provided better performance, since it encourages the model to follow a human-approved process, whereas outcome supervision can reward a correct final answer reached through flawed reasoning, making it harder to scrutinize.

OpenAI acknowledged that it is unknown whether these results will hold beyond mathematics, but said future work must explore the impact of process supervision in other domains. To encourage such research, the company released the full dataset of human feedback used for process supervision.


“If these results generalize, we may find that process supervision gives us the best of both worlds—a method that is both more performant and more aligned than outcome supervision,” OpenAI said.

OpenAI has not yet responded to Decrypt’s request for comment.
