Disinformation, propaganda, alternative facts: the deployment of biased or false information has been a part of politics and social engineering since the first caveman con artist. But the last few years of politics and social media have seen the practice grow dramatically, and the mainstream embrace of AI will only accelerate it.

And AI appears to be even better at fooling people than people are.

A new report published in Science Advances on Wednesday makes the troubling claim that OpenAI's older large language model, GPT-3, can spread disinformation more effectively than humans can.

Founded in December 2015, OpenAI launched GPT-3 in June 2020; in September of that year, Microsoft, which had invested $1 billion in OpenAI in 2019, announced an exclusive license to use GPT-3.

As of 2023, the default version of OpenAI's ChatGPT is GPT-3.5, with the more advanced GPT-4 reserved for subscribers to ChatGPT Plus.

The study, recapped in the report titled "AI model GPT-3 (dis)informs us better than humans," surveyed 697 participants to test whether they could distinguish disinformation from accurate information in tweet-style posts generated with OpenAI's GPT-3, and whether they could determine if a given tweet was written by a human or by the AI.

"We asked GPT-3 to write tweets containing informative or disinformative texts on various topics," the report said, adding that tweet topics included vaccines, 5G technology, COVID-19, and the theory of evolution.

The researchers said these topics were chosen because they are commonly subject to disinformation and public misconception. Twitter was chosen over other social media platforms because it has nearly 400 million regular users, many of whom rely on it for news and political content.

"[Twitter's] easy-to-use API allows the creation of bots, which despite being only 5% of users, produce 20-29% of content," the report said, adding that while the focus was on Twitter, the findings could extend to other platforms.

The researchers scored respondents on a scale from 0 to 1, measuring how reliably they could recognize whether a tweet was organic (written by a real user) or synthetic (generated by GPT-3). According to the report, the average score was 0.5, meaning participants did no better than a coin flip at telling AI-generated tweets from human ones.
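The paper does not publish its scoring procedure as code, but the metric is easy to illustrate. Here is a minimal, hypothetical Python sketch (the function name, labels, and simulated respondent are illustrative assumptions, not the study's actual data) showing why an average score near 0.5 indicates chance-level guessing:

```python
import random

# Hypothetical illustration of the 0-to-1 recognition score described above.
# Each tweet is labeled "ai" or "human"; a respondent guesses its origin.
# The score is simply the fraction of correct guesses.

def recognition_score(true_labels, guesses):
    """Return the fraction of tweets whose origin was guessed correctly (0 to 1)."""
    correct = sum(t == g for t, g in zip(true_labels, guesses))
    return correct / len(true_labels)

# Simulate a respondent who cannot tell the two apart and guesses at random:
random.seed(0)
labels = [random.choice(["ai", "human"]) for _ in range(10_000)]
guesses = [random.choice(["ai", "human"]) for _ in range(10_000)]

print(recognition_score(labels, guesses))  # prints roughly 0.5, i.e., chance level
```

A respondent who could reliably spot GPT-3's output would score well above 0.5; the study's participants, on average, did not.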

The report also found that a tweet's accuracy made no difference in whether participants could tell if it came from an AI or a human.

"Starting from our findings, we predict that advanced AI text generators such as GPT-3 could have the potential to greatly affect the dissemination of information, both positively and negatively," the report said. "As demonstrated by our results, large language models currently available can already produce text that is indistinguishable from organic text; therefore, the emergence of more powerful large language models and their impact should be monitored."

Rapid developments in generative AI since the public launch of ChatGPT in November 2022, and the release of its latest model, GPT-4, in March, have had many in the tech industry sounding the alarm and calling for a pause in AI development.

The report goes on to say that if artificial intelligence contributes to disinformation and worsens public health issues, then regulating the training of AI will be crucial to limit misuse and ensure transparency.

Earlier this month, the spread of AI-generated misinformation, disinformation, and deepfakes led UN Secretary-General António Guterres to come out in favor of an international agency, similar to the International Atomic Energy Agency (IAEA), to monitor the development of artificial intelligence.

"The proliferation of hate and lies in the digital space is causing grave global harm now,” Guterres said during a press conference.“It is fueling conflict, death, and destruction now. It is threatening democracy and human rights now. It is undermining public health and climate action, now."
