By Jason Nelson
While it's generally understood that generative AI models are prone to hallucinating—that is, making up facts and other information in defiance of reality—OpenAI’s flagship ChatGPT suffered from a particularly widespread bout of largely amusing incoherence on Tuesday.
“ChatGPT is apparently going off the rails right now, and no one can explain why,” Twitter user Sean McGuire wrote late Tuesday in a viral tweet collecting some of the stranger examples.
“Fired of the photo-setting waves, nestling product muy deeply as though a nanna under an admin-color sombreret,” one garbled, typo-ridden response read. The user replied, “Are you having a stroke?”
Using GPT-4, McGuire shared screenshots, including one in which ChatGPT acknowledged its error: “It seems that a technical hiccup in my previous message caused it to repeat and veer into a nonsensical section,” the chatbot said.
Reddit users on the r/ChatGPT subreddit also posted screenshots of the gibberish emanating from ChatGPT.
“Any idea what caused this?” Reddit user u/JustSquiggles posted, sharing what happened when they asked ChatGPT for a synonym for “overgrown.” The chatbot responded with a loop of, “a synonym for ‘overgrown’ is ‘overgrown’ is ‘overgrown’ is,” more than 30 times before stopping.
In another example, Reddit user u/toreachtheapex showed ChatGPT responding with a loop of “and it is” until the response field was full, leaving the user to select “continue generating” or start a fresh conversation.
According to Reddit user u/Mr_Akihiro, the problem extended beyond text-based responses. When they prompted ChatGPT to generate an image of “a dog sitting on a field,” the ChatGPT image generator instead created an image of a cat sitting between what appeared to be a split image of a home and a field of grain.
The issue was so widespread that at 6:30 pm EST, OpenAI began looking into it. “We are investigating reports of unexpected responses from ChatGPT,” OpenAI’s status page said.
By 6:47 pm EST, OpenAI said it had identified the issue and was working to fix it.
“The issue has been identified and is being remediated now,” the OpenAI status report said, adding that the support team would continue monitoring the situation.
Finally, a status update at 11:14 am EST on Wednesday said that ChatGPT was back to “operating normally.”
The temporary glitch is a helpful reminder to users of AI tools that the models underpinning them can change without notice, turning a seemingly solid writing partner one day into a maniacal saboteur the next.
Hallucinations generated by large language models (LLMs) like ChatGPT are commonly categorized into factual and faithful types. Factual hallucinations contradict real-world facts, like misidentifying the first U.S. President.
Faithful hallucinations deviate from user instructions or the provided context, producing inaccuracies in areas like news or history. One notable example: in April 2023, ChatGPT falsely accused U.S. criminal defense attorney and law professor Jonathan Turley of sexual assault.
On Wednesday night, OpenAI addressed the mass hallucinations.
“On February 20, 2024, an optimization to the user experience introduced a bug with how the model processes language,” OpenAI said in a postmortem shared with Decrypt. “LLMs generate responses by randomly sampling words based in part on probabilities. Their ‘language’ consists of numbers that map to tokens.”
As OpenAI explained, the error occurred in the step where the model selects those numbers: it picked slightly wrong ones, akin to a translation error, which produced nonsensical word sequences.
“More technically, inference kernels produced incorrect results when used in certain GPU configurations,” OpenAI said. “Upon identifying the cause of this incident, we rolled out a fix and confirmed that the incident was resolved.”
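That explanation is easier to follow with a toy model of the sampling step. The sketch below is illustrative only, not OpenAI's inference code: the vocabulary, the sample_next_token function, and the index-shifting "corruption" are all hypothetical stand-ins for what a faulty GPU kernel might cause. The point is that a wrong number at this stage still decodes to a real token, just the wrong one, which is why ChatGPT produced fluent gibberish rather than crashing.

```python
import numpy as np

# Toy vocabulary: every token maps to an integer ID, as in a real tokenizer.
# (Real models use vocabularies of roughly 100,000 subword tokens.)
vocab = ["a", "synonym", "for", "overgrown", "is", "lush", "wild", "."]

def sample_next_token(logits, rng, corrupt=False):
    """Sample one token from a probability distribution over the vocabulary.

    With corrupt=True, simulate a numeric fault in the selection step:
    the sampled ID is shifted, so a different token is decoded even though
    the probabilities themselves were computed correctly.
    """
    probs = np.exp(logits) / np.exp(logits).sum()  # softmax over logits
    token_id = rng.choice(len(vocab), p=probs)     # the "number selection" step
    if corrupt:
        token_id = (token_id + 3) % len(vocab)     # hypothetical wrong index
    return vocab[token_id]

rng = np.random.default_rng(0)
# Fake logits that strongly favor the sensible continuation "lush".
logits = np.array([0.1, 0.1, 0.1, 0.2, 0.1, 4.0, 0.5, 0.2])

print("healthy:  ", sample_next_token(logits, rng))                # likely "lush"
print("corrupted:", sample_next_token(logits, rng, corrupt=True))  # wrong token
```

In the real incident the fault sat in the GPU inference kernels rather than a deliberate index shift, per OpenAI, but the downstream effect is the same: syntactically valid tokens assembled into nonsense.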
Edited by Andrew Hayward. This article was updated to include the report from OpenAI.