With U.S. election season well underway, China is increasing its use of artificial intelligence as part of broader efforts to interfere with American politics, according to Microsoft. Beijing is also probing what divides U.S. citizens in order to exploit those fault lines and foment discord.
“CCP-affiliated actors have started to pose contentious questions on controversial U.S. domestic issues to better understand the key issues that divide U.S. voters,” Microsoft warns. The report suggests that China also uses social media to portray the U.S. in an “unfavorable light.”
The report includes several screenshots of social media accounts soliciting opinions on controversial topics, some featuring AI-generated or manipulated images, and employing tactics designed to boost reach and engagement across various platforms.
“There has been an increased use of Chinese AI-generated content in recent months, attempting to influence and sow division in the US and elsewhere on a range of topics,” the report said. These tactics include written content as well as image and video deepfakes.
China is also meddling in local politics. Microsoft cites posts about a train derailment in Kentucky, the Maui wildfires, and immigration issues along the southern U.S. border. The accounts, which encourage people to comment with their opinions on major news events, are “Chinese sockpuppets,” the report explains.
While acknowledging "there is little evidence these efforts have been successful in swaying opinion," Microsoft cautions that China is likely improving its AI-powered propaganda operations over time.
“China’s increasing experimentation in augmenting memes, videos, and audio will likely continue—and may prove more effective down the line,” the report concludes.
Beyond China, Microsoft also called attention to online actions by North Korea.
“North Korea continued to prioritize the theft of cryptocurrency funds, conducting software supply-chain attacks and targeting their perceived national security adversaries,” the report said.
AI as a geopolitical weapon
The growing role of big data and AI in elections has raised concerns about voter privacy, electoral integrity, and the potential for undue influence through targeted, personalized messaging. Political campaigns themselves have increasingly used data analytics to micro-target voters with tailored advertising and outreach based on detailed voter profiles.
For instance, the 2012 Obama campaign was lauded for its sophisticated data operations identifying and mobilizing supporters. Similarly, the 2016 Trump campaign leveraged data on 1.6 million volunteers to organize grassroots efforts.
There have also been controversies surrounding the misuse of voter data for AI-powered political research, such as the 2015 breach that allowed the campaign of Sen. Bernie Sanders to access Hillary Clinton campaign data and Cambridge Analytica's unauthorized harvesting of Facebook user data for targeted political ads.
Regulators are now rushing to establish rules and oversight for the use of AI in elections. Several U.S. states have introduced bills to regulate deepfakes and deceptive AI content, including efforts to require disclosure and labeling alongside a push by President Biden to tackle this issue. The European Union is also implementing its Artificial Intelligence Act, which it describes as the world's first comprehensive AI law that will include regulations around AI use in elections.
The most notable and polarizing effort, however, is likely the “Protecting Americans from Foreign Adversary Controlled Applications Act,” which seeks to ban TikTok in the U.S.
“The reason why TikTok is so successful, the reason why it’s so attractive, is because it knows you better than you know yourself, and the more you use it, the more it learns,” Senator Marco Rubio said during an annual Worldwide Threats Assessment hearing. “They happen to control a company that owns one of the world’s best artificial intelligence algorithms. It’s the one that’s used in this country by TikTok, and it uses the data of Americans to basically read your mind and predict what videos you want to see.”
However, some argue the most pressing threats come from the distribution of false and harmful content on social media platforms, rather than AI content creation itself. Tech companies have signed an accord to adopt measures against AI misuse, but when it comes to containing misinformation and determining who is responsible for posting it, social media regulation remains less clear, even compared to the AI space.