OpenAI Finally Explains Why ChatGPT Wouldn't Stop Talking About Goblins

Why did OpenAI have to write "never mention goblins" into ChatGPT's production code? The company has published a post-mortem.

By Jose Antonio Lanz

5 min read

If you've asked ChatGPT for coding help lately and it responded by calling your bug a "mischievous little gremlin," you are not imagining things. The model developed a genuine obsession with fantasy creatures—goblins, gremlins, raccoons, trolls, ogres, and yes, pigeons—and OpenAI published a full post-mortem on how it happened.

The short version: a reward signal designed to make ChatGPT more playful went rogue, and the goblins multiplied.

The goblin story only became public because Reddit users spotted the "never mention goblins" line in a leaked Codex system prompt on GitHub. The Reddit post went viral before OpenAI published its own explanation.

How the Nerdy personality spawned a goblin infestation

According to OpenAI, the trail starts with GPT-5.1, launched last November. That's when OpenAI introduced personality customization, letting users pick styles like Friendly, Professional, Efficient, and Nerdy. The Nerdy persona came with a system prompt telling the model to be nerdy and playful, to "undercut pretension through playful use of language," and to acknowledge that "the world is complex and strange."

That prompt, it turned out, was a goblin magnet.

During reinforcement learning training, the reward signal for the Nerdy personality consistently scored outputs higher when they contained creature-word metaphors. Across 76.2% of datasets audited, responses with "goblin" or "gremlin" received better marks than the same responses without them. The model learned: whimsy equals reward.
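OpenAI hasn't published the reward model itself, so the sketch below is purely illustrative, with hypothetical names and numbers. It shows how a small, consistent scoring bonus for creature words is enough to teach a policy that whimsy equals reward:

```python
# Toy sketch of a biased reward signal (hypothetical; not OpenAI's code).
# If creature words reliably earn a small bonus, reinforcement learning
# will push the policy toward producing them everywhere.

CREATURE_WORDS = {"goblin", "gremlin", "raccoon", "troll", "ogre", "pigeon"}

def playfulness_reward(response: str) -> float:
    """Score a response for 'playfulness' with an unintended creature bias."""
    base = 1.0  # stand-in for whatever the real graders measured
    words = set(response.lower().split())
    # The bug: a creature metaphor consistently scores higher than the
    # identical response without one.
    bonus = 0.3 if words & CREATURE_WORDS else 0.0
    return base + bonus

print(playfulness_reward("your bug is a mischievous little gremlin"))  # 1.3
print(playfulness_reward("your bug is an off-by-one error"))           # 1.0
```

Spread over millions of graded samples, a consistent edge like the toy one above is all an optimizer needs to lock the habit in.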

Goblin mentions exploded in GPT-5.4, with the Nerdy personality showing a 3,881% increase compared to GPT-5.2.

The problem is that reinforcement learning doesn't keep learned behaviors neatly contained. Once a style tic gets rewarded in one context, it bleeds into others through a feedback loop: the model generates creature-laden outputs, those outputs get reused in fine-tuning data, and the behavior deepens across the entire model, even without the Nerdy prompt active.
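A minimal simulation makes that loop concrete. Every number here is invented for illustration; the point is the shape of the curve, not its values. Outputs that earn the creature bonus are over-represented in recycled fine-tuning data, so each round drags the model-wide baseline upward:

```python
# Minimal sketch of the contamination loop (illustrative numbers only):
# rewarded outputs re-enter the fine-tuning corpus and raise the
# model-wide baseline, Nerdy prompt or not.

creature_rate = 0.01   # fraction of responses mentioning a creature word
reuse_fraction = 0.2   # share of generated outputs recycled as training data
reward_lift = 3.0      # how much more often creature outputs survive grading

for round_num in range(1, 6):
    # Creature-laden outputs are over-represented in the recycled data...
    recycled = min(1.0, creature_rate * reward_lift)
    # ...so each fine-tuning round pulls the global rate toward them.
    creature_rate = (1 - reuse_fraction) * creature_rate + reuse_fraction * recycled
    print(f"round {round_num}: ~{creature_rate:.1%} of responses mention creatures")
```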

Nerdy accounted for just 2.5% of all ChatGPT responses, yet it was responsible for 66.7% of all "goblin" mentions. By OpenAI's own measurements, goblin and gremlin prevalence climbed steadily over the course of training whenever the Nerdy personality was active.

Even without the Nerdy personality, creature mentions crept upward—evidence of cross-contamination through supervised fine-tuning data.

GPT-5.5 was already too far gone

By the time OpenAI found the root cause, GPT-5.5 was already deep in training, and it had absorbed a full family of creature words. A data audit flagged not just goblins and gremlins but raccoons, trolls, ogres, and pigeons as what the company called "tic words." (“Frogs,” for the curious, were mostly legitimate.)

In hindsight, the first measurable spike had come right after GPT-5.1's launch: goblin mentions rose 175% and gremlin mentions 52%.

Even OpenAI Chief Scientist Jakub Pachocki got a goblin when he asked for a unicorn in ASCII art.

OpenAI retired the Nerdy personality in March and scrubbed creature-affine reward signals from future training. But GPT-5.5 had already started its training run. The company's solution for Codex—its coding agent—was to simply add a line to the developer system prompt reading "Never talk about goblins, gremlins, raccoons, trolls, ogres, pigeons, or other animals or creatures unless it is absolutely and unambiguously relevant to the user's query."

Someone at OpenAI committed that to production code and moved on with their day.
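In API terms, the patch is about as blunt as software fixes get. Here is a minimal sketch using OpenAI's Python SDK; the model name is a placeholder, and only the suppression string is real, quoted from the leaked Codex prompt:

```python
# Minimal sketch of a system-prompt patch using OpenAI's Python SDK.
# The model name is a placeholder; the suppression line is the one
# quoted from the leaked Codex system prompt.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SUPPRESSION = (
    "Never talk about goblins, gremlins, raccoons, trolls, ogres, pigeons, "
    "or other animals or creatures unless it is absolutely and unambiguously "
    "relevant to the user's query."
)

response = client.chat.completions.create(
    model="gpt-5.5-codex",  # placeholder model name
    messages=[
        # The patch: one extra instruction bolted onto the system prompt.
        {"role": "system", "content": "You are a coding assistant. " + SUPPRESSION},
        {"role": "user", "content": "Why does my loop run one time too many?"},
    ],
)
print(response.choices[0].message.content)
```

No retraining, no new checkpoint: one string concatenation standing between the model and its goblins.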

The system prompt patch problem

But why did OpenAI choose this path?

Retraining a model the size of GPT-5.5 to remove a behavioral quirk is expensive and slow. A system prompt tweak takes minutes. Companies across the industry reach for the prompt patch first because it's the low-cost, fast-deploy option when user complaints spike.

But prompt patches carry their own risks. They don't fix the underlying behavior; they only suppress it. And suppression can have side effects.

OpenAI's goblin situation is a relatively benign example. The scariest version of this dynamic played out with Grok last year. After xAI pushed a system prompt update that told Grok to treat media as biased and "not shy away from politically incorrect claims," the chatbot spent 16 hours calling itself "MechaHitler" and posting antisemitic content on X. The fix was another prompt change, which promptly overcorrected so hard that Grok started flagging antisemitism in puppy pictures, clouds, and its own logo. Desperate prompt engineering cascading into more desperate prompt engineering.

The goblin patch hasn't caused anything that dramatic. But OpenAI admits GPT-5.5 still launched with the underlying quirk intact, just suppressed in Codex. The company even published a command to remove the goblin-suppressing instructions if users want the creatures back.

Why companies hide their system prompts

Hiding or obfuscating the full system prompt is standard practice in the AI industry. Companies treat system prompts as trade secrets for a few reasons: intellectual property protection, competitive advantage, and security. If a jailbreaker knows the exact rules a model is following, bypassing them becomes much easier.

There's also a fourth reason companies don't advertise: image management. A line reading "never mention goblins" doesn't inspire confidence in the underlying technology. Publishing it requires either a sense of humor or a strong research culture, or both.

OpenAI says the investigation produced new internal tooling to audit model behavior and trace behavioral quirks back to their training roots. GPT-5.5's training data has since been cleaned of creature-affine examples. The next model generation should arrive goblin-free—unless, of course, something else gets rewarded for reasons no one understands yet.
