Both OpenAI and Meta this week revealed details about persistent nefarious campaigns by actors linked to China, Israel, Russia, and Iran that used their respective services to spread disinformation and disrupt politics in the U.S. and other countries.
In its latest quarterly threat report issued on Wednesday, Meta emphasized that content produced with generative AI is still easy to detect in such campaigns.
“So far, we have not seen novel GenAI-driven tactics that would impede our ability to disrupt the adversarial networks behind them,” the social media giant said.
While AI-generated photos are widely employed, Meta added that political deepfakes—a major global threat, according to many experts—are not common. “We have not seen threat actors use photo-realistic AI-generated media of politicians as a broader trend at this time,” the report notes.
For its part, OpenAI said that it built defenses into its AI models, collaborated with partners to share threat intelligence, and leveraged its own AI technology to detect and disrupt malicious activities.
"Our models are designed to impose friction on threat actors," the company reported yesterday. "We have built them with defense in mind."
Noting that its content safeguards proved effective, with its models refusing to generate some of the requested content, OpenAI said it banned the accounts associated with the identified campaigns and shared relevant details with industry partners and law enforcement to facilitate further investigations.
OpenAI described covert influence operations as “deceptive attempts to manipulate public opinion or influence political outcomes without revealing the true identity or intentions of the actors behind them.” The company framed its latest disclosures as part of its ongoing transparency efforts.
The company used the information gathered from these campaigns to dig deeper, evaluating how impactful the disinformation operations were and classifying their techniques to improve future countermeasures. On a scale of 1 to 6—the highest score representing campaigns that reached authentic audiences on multiple platforms—OpenAI said none of the identified actors scored higher than a 2.
According to OpenAI, at least five distinct campaigns used its models to generate text content that was then disseminated on social media platforms like Telegram, Twitter, Instagram, and Facebook, as well as online forums and other websites. Meta, meanwhile, reported on AI-generated content alongside other groups it flagged for “coordinated inauthentic behavior.”
Here are some of the specific campaigns called out by both companies.
The Russian threat
One Russian campaign dubbed "Bad Grammar" enlisted OpenAI's systems to generate comments in multiple languages that were posted on Telegram, targeting audiences in Russia, Ukraine, the United States, Moldova, and the Baltic states. The comments touched on topics like Russia's invasion of Ukraine, politics, and current events.
“The network primarily commented on posts by a small number of Telegram channels,” OpenAI says. “The most-mentioned was the pro-Russia channel @Slavyangrad, followed by the English-language @police_frequency and @SGTNewsNetwork.”
Another persistent Russian operation called "Doppelganger" used ChatGPT to generate website articles, social media posts, and comments overwhelmingly portraying Russia in a positive light while denigrating Ukraine, the United States, and NATO. This content was intended to drive engagement across platforms like 9GAG.
Doppelganger also attempted to use OpenAI tools to create artificial images with captions critical of Western governments, but the company said its systems refused requests that appeared to be disinformation or propaganda.
Meta also mentioned the group in its Adversarial Threat Report, focusing on its attempted infiltration of Meta’s social media platforms across a range of topics. The challenge, Meta noted, is that the group shifts its tactics frequently, evolving over time.
Misinformation from Israel
An Israeli private firm named STOIC launched an operation dubbed "Zero Zeno" by OpenAI, tapping OpenAI's models to generate comments that it incorporated into broader disinformation tactics targeting Europe and North America.
“Zero Zeno posted short texts on specific themes, especially the Gaza conflict, on Instagram and X. These texts were generated using our models,” OpenAI revealed. “A further set of accounts on those platforms would then reply with comments that were also generated by this operation.”
“Open-source research in February described this network criticizing the United Nations relief agency in Palestine,” the report noted, linking to a more extensive report.
Zero Zeno also used OpenAI technology to create fake bios and drive fake engagement. In addition, OpenAI revealed that the Israeli firm used its technology to target the “Histadrut trade unions organization in Israel and the Indian elections.”
This group was also flagged by Meta.
“This network’s accounts posed as locals in the countries they targeted, including as Jewish students, African Americans, and ‘concerned’ citizens,” Meta said. “They posted primarily in English about the Israel-Hamas war, including calls for the release of hostages; praise for Israel’s military actions; criticism of campus antisemitism, the United Nations Relief and Works Agency (UNRWA), and Muslims claiming that ‘radical Islam’ poses a threat to liberal values in Canada.”
Meta said it banned the group and issued a cease-and-desist letter to STOIC.
Chinese “Spamouflage” efforts
China's “Spamouflage” campaign used OpenAI's language models for tasks like debugging code and generating multilingual comments to spread its narratives, all under the pretense of developing productivity software.
“Spamouflage posted short comments on X criticizing Chinese dissident Cai Xia [in] the form of an initial post and a series of replies,” OpenAI says. “Every comment in the ‘conversation’ was artificially generated using our models, likely to create the false impression that real people had engaged with the operation’s content.”
In the case of the anti-Ukraine campaigns, however, the comments and posts generated via OpenAI and posted on 9GAG appear to have drawn extremely negative reactions from users, who denounced the activity as fake and inauthentic.
Meta detected another AI misinformation campaign with links to China. “They posted primarily in English and Hindi about news and current events, including images likely manipulated by photo editing tools or generated by artificial intelligence,” the company said.
The network’s accounts posted negative comments about the Indian government, for example, and addressed related topics like the Sikh community, the Khalistan movement, and the assassination of Hardeep Singh Nijjar.
Iranian operation
A longstanding Iranian operation known as the International Union of Virtual Media (IUVM) was identified as abusing OpenAI's text-generation capabilities to create multilingual posts and imagery supporting pro-Iran, anti-U.S., and anti-Israel narratives.
“This campaign targeted global audiences and focused on content generation in English and French—it used our models to generate and proofread articles, headlines, and website tags,” OpenAI said. The content would be subsequently published and promoted on pro-Iran websites and across social media as part of a broader disinformation campaign.
Neither Meta nor OpenAI responded to a request for comment from Decrypt.
Edited by Ryan Ozawa and Andrew Hayward