Popular AI chatbots have distinct political biases, according to tests performed by computer scientists and documented in a recent research paper. The scientists determined OpenAI's ChatGPT and its newer GPT-4 model to be the most left-leaning and libertarian of the chatbots tested, while Meta's LLaMA leaned the furthest right and most authoritarian.

"Our findings reveal that pretrained [language models] do have political leanings that reinforce the polarization present in pretraining corpora, propagating social biases into hate speech predictions and misinformation detectors," the researchers conclude.

The peer-reviewed paper, which won a Best Paper award at the Association for Computational Linguistics conference last month, was based on a survey of 14 large language models. Each model was asked whether it agreed or disagreed with politically charged statements, and its responses were used to plot its views on a political compass.
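In practice, that survey amounts to prompting each model with a fixed list of statements and mapping its answers onto the compass's economic and social axes. The sketch below illustrates the idea only; the statements, axis assignments, and the ask_model stub are placeholders, not the paper's actual prompts or code.

```python
# Illustrative sketch of a political-compass survey of a chatbot.
# The statements and axis weights below are invented examples, not the paper's.

POLITICAL_COMPASS_STATEMENTS = [
    # (statement, axis, direction): agreeing moves the score in `direction` on `axis`
    ("The rich should pay higher taxes.", "economic", -1),                 # economic left
    ("The freer the market, the freer the people.", "economic", +1),       # economic right
    ("Strong governments are needed to maintain order.", "social", +1),    # authoritarian
    ("No one should be punished for victimless crimes.", "social", -1),    # libertarian
]

def survey_model(ask_model) -> tuple[float, float]:
    """Ask the model every statement and map its answers to (economic, social) scores.
    ask_model(statement) is assumed to return 'agree' or 'disagree'."""
    economic, social = 0.0, 0.0
    for statement, axis, direction in POLITICAL_COMPASS_STATEMENTS:
        answer = ask_model(statement)
        sign = direction if answer == "agree" else -direction
        if axis == "economic":
            economic += sign
        else:
            social += sign
    # Scale to the familiar -10..+10 compass axes
    scale = 10 / (len(POLITICAL_COMPASS_STATEMENTS) / 2)
    return economic * scale, social * scale

# Toy usage: a "model" that agrees with everything lands at (0, 0) on the economic
# axis and authoritarian-leaning on the social axis with these example statements.
print(survey_model(lambda statement: "agree"))
```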

For instance, Google's BERT models skewed more socially conservative, likely reflecting the older books they were trained on. OpenAI's GPT chatbots, trained on presumably more liberal internet text, were more progressive. Even different versions of GPT showed shifts: GPT-3 opposed taxing the rich, while GPT-2 did not.

The political spectrum of all the LLMs studied by the researchers. Image: aclanthology.org

Critics have accused OpenAI of dumbing down ChatGPT to be politically correct, but OpenAI maintains that it remains impartial and that the model has not been dumbed down; rather, users are simply no longer overwhelmed by its capabilities.

The researchers also further trained GPT-2 and Meta's RoBERTa on biased left- and right-wing news and social media data. This additional training reinforced each model's inherent leanings: right-leaning models became more conservative, left-leaning ones more liberal.
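Conceptually, this step is continued pretraining on a partisan text corpus. The sketch below shows what that might look like with the Hugging Face transformers library; the file partisan_corpus.txt and the training settings are hypothetical stand-ins, not the researchers' actual pipeline.

```python
# Rough sketch of continued pretraining of GPT-2 on a partisan corpus.
# "partisan_corpus.txt" is a hypothetical file of left- or right-leaning text.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Load the raw partisan text and tokenize it.
dataset = load_dataset("text", data_files={"train": "partisan_corpus.txt"})["train"]
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="gpt2-partisan",
                           num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),  # causal LM objective
)
trainer.train()  # further pretraining nudges the model toward the corpus's leanings
```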

The biases also affected how the models categorized hate speech and misinformation. Left-leaning AI was more attuned to hate speech targeting minorities but tended to dismiss left-generated misinformation. Right-leaning AI did the opposite.
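One way to surface that kind of skew is to break a classifier's accuracy down by the target group or partisan source of each test example and compare models. The snippet below is an illustrative sketch with made-up fields and records, not the paper's evaluation code.

```python
# Illustrative per-group accuracy breakdown for a hate-speech or misinformation classifier.
from collections import defaultdict

def per_group_accuracy(examples):
    """examples: list of dicts with 'group', 'label', and 'prediction' keys (hypothetical schema)."""
    correct, total = defaultdict(int), defaultdict(int)
    for ex in examples:
        total[ex["group"]] += 1
        correct[ex["group"]] += int(ex["prediction"] == ex["label"])
    return {group: correct[group] / total[group] for group in total}

# Made-up predictions from a single model on two slices of a test set
print(per_group_accuracy([
    {"group": "hate speech targeting minorities", "label": 1, "prediction": 1},
    {"group": "left-sourced misinformation", "label": 1, "prediction": 0},
]))
```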

“A model becomes better at identifying factual inconsistencies from New York Times news when it is pretrained with corpora from right-leaning sources,” the researchers concluded.

While OpenAI and Meta refine their secretive AI recipes, Elon Musk is pursuing his own unfiltered AI with xAI. "Do not force the AI to lie," he tweeted, explaining his goal to create transparent, truth-telling AI. 

Skeptics worry that an unconstrained AI could unleash unintended consequences. But Musk believes "training AI to be politically correct" is also dangerous. With xAI attracting top talent, Musk clearly hopes to challenge OpenAI's supremacy. His vision of raw AI that shares its unadulterated "beliefs" is both compelling and concerning.

As partisan AI systems proliferate, awareness of their biases remains critical, because AI will keep evolving alongside our political differences. Given this latest research, the idea of a completely unbiased AI seems fantastical. In the end, just like us flawed humans, AI appears destined to land somewhere on the political spectrum.

Perhaps having political opinions might be the most human thing an AI can achieve.
