In brief

  • Researchers warn that AI swarms could coordinate “influence campaigns” with limited human oversight.
  • Unlike traditional botnets, swarms can adapt their messaging and vary their behavior.
  • The paper notes that existing platform safeguards may struggle to detect and contain these swarms.

The era of easily detectable botnets is coming to an end, according to a new report published in Science on Thursday. In the study, researchers warned that misinformation campaigns are shifting toward autonomous AI swarms that can imitate human behavior, adapt in real time, and require little human oversight, complicating efforts to detect and stop them.

Written by a consortium of researchers, including those from Oxford, Cambridge, UC Berkeley, NYU, and the Max Planck Institute, the paper describes a digital environment in which manipulation becomes harder to identify. Instead of operating in short bursts tied to elections or political events, these AI campaigns can sustain a narrative over extended periods.

“In the hands of a government, such tools could suppress dissent or amplify incumbents,” the researchers wrote. “Therefore, the deployment of defensive AI can only be considered if governed by strict, transparent, and democratically accountable frameworks.”

A swarm is a group of autonomous AI agents that work together to solve problems or complete objectives more efficiently than a single system. The researchers said AI swarms exploit existing weaknesses in social media platforms, where users are often insulated from opposing viewpoints.

“False news has been shown to spread faster and more broadly than true news, deepening fragmented realities and eroding shared factual baselines,” they wrote. “Recent evidence links engagement-optimized curation to polarization, with platform algorithms amplifying divisive content even at the expense of user satisfaction, further degrading the public sphere.”

That shift is already visible on major platforms, according to Sean Ren, a computer science professor at the University of Southern California and the CEO of Sahara AI, who said that AI-driven accounts are increasingly difficult to distinguish from ordinary users.

“I think stricter KYC, or account identity validation, would help a lot here,” Ren told Decrypt. “If it’s harder to create new accounts and easier to monitor spammers, it becomes much more difficult for agents to use large numbers of accounts for coordinated manipulation.”

Earlier influence campaigns depended largely on scale rather than subtlety, with thousands of accounts posting identical messages simultaneously, which made detection comparatively straightforward. In contrast, the study said, AI swarms exhibit “unprecedented autonomy, coordination, and scale.”
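For illustration only (this sketch is not from the report, and the record type, thresholds, and function name are hypothetical): the older style of campaign was detectable largely because many distinct accounts pushed the same text within a short window, which simple exact-match clustering can surface. A swarm that paraphrases every message evades this kind of check.

```python
# Hypothetical sketch: flag normalized texts posted by many distinct accounts
# within a short time window, the pattern that made older botnets easy to spot.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Post:
    account_id: str
    text: str
    timestamp: float  # seconds since epoch

def flag_identical_message_clusters(posts, min_accounts=50, window_seconds=600):
    """Return texts that many distinct accounts posted inside one window."""
    by_text = defaultdict(list)
    for p in posts:
        # collapse whitespace and case so trivially re-spaced copies still match
        by_text[" ".join(p.text.lower().split())].append(p)

    flagged = []
    for text, group in by_text.items():
        group.sort(key=lambda p: p.timestamp)
        start = 0
        for end in range(len(group)):
            # shrink the window from the left until it spans <= window_seconds
            while group[end].timestamp - group[start].timestamp > window_seconds:
                start += 1
            accounts = {p.account_id for p in group[start:end + 1]}
            if len(accounts) >= min_accounts:
                flagged.append(text)
                break
    return flagged
```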

Ren said content moderation alone is unlikely to stop these systems. The problem, he said, is how platforms manage identity at scale. Stronger identity checks and limits on account creation, he said, could make coordinated behavior easier to detect, even when individual posts appear human.

“If the agent can only use a small number of accounts to post content, then it’s much easier to detect suspicious usage and ban those accounts,” he said.

No simple fix

The researchers concluded that there is no single solution to the problem. Potential responses include improved detection of statistically anomalous coordination and greater transparency around automated activity, but they argue that technical measures alone are unlikely to be sufficient.
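As a rough illustration of what detecting "statistically anomalous coordination" could mean in practice (an assumption for this sketch, not the paper's method): accounts steered by one swarm may post into the same narrow time bins far more often than independent users, even when every message is worded differently. The bin size and thresholds below are hypothetical and would need tuning against real baseline behavior.

```python
# Hypothetical sketch: flag account pairs whose active time bins overlap
# suspiciously often, a coordination signal that survives paraphrased text.
from collections import defaultdict
from itertools import combinations

def coordinated_pairs(events, bin_seconds=300, min_shared_bins=20, min_jaccard=0.6):
    """`events` is an iterable of (account_id, timestamp) pairs."""
    bins_by_account = defaultdict(set)
    for account_id, timestamp in events:
        bins_by_account[account_id].add(int(timestamp // bin_seconds))

    flagged = []
    for (a, bins_a), (b, bins_b) in combinations(bins_by_account.items(), 2):
        shared = bins_a & bins_b
        if len(shared) < min_shared_bins:
            continue
        # Jaccard overlap of the two accounts' active bins
        jaccard = len(shared) / len(bins_a | bins_b)
        if jaccard >= min_jaccard:
            flagged.append((a, b, jaccard))
    return flagged
```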

According to Ren, financial incentives also remain a persistent driver of coordinated manipulation attacks, even as platforms introduce new technical safeguards.

“These agent swarms are usually controlled by teams or vendors who are getting monetary incentives from external parties or companies to do the coordinated manipulation,” he said. “Platforms should enforce stronger KYC and spam detection mechanisms to identify and filter out agent manipulated accounts.”
