Microsoft and tech experts are sounding alarms about the potential misuse of artificial intelligence (AI) in democratic processes, particularly in national elections.
Generative AI, capable of producing highly convincing text, images, and videos, has become a tool for creating false content, including misinformation about public figures and key issues.
In a recent blog post, Microsoft warned that China is using AI to influence US elections, citing several specific incidents as evidence.
Microsoft’s Threat Analysis Center (MTAC) reported that individuals linked to the Chinese Communist Party (CCP) have been posing divisive questions on sensitive US issues to understand and exploit voter divisions.
The tech giant alleged that China has been amplifying these divisions by disseminating AI-generated content to sway public opinion on various contentious topics.
One such network, Storm-1376 (also known as Spamouflage or Dragonbridge), reportedly spread misinformation designed to sow discord:
- Claiming that the August 2023 Maui wildfires were deliberately caused by a US government “weather weapon” test.
- Suggesting the US government orchestrated a train derailment in Kentucky, likening the incident to historic tragedies.
- Accusing the US of contaminating water supplies to assert control, as part of a broader disinformation campaign.
In addition to US elections, the group allegedly targeted Taiwan’s presidential election in January 2024:
- Posting AI-generated fake audio of Foxconn founder Terry Gou, who had already withdrawn from the race, appearing to endorse another candidate; YouTube swiftly removed the clip.
- Promoting AI-generated memes against Taiwanese dissidents.
With major elections taking place this year in countries including India, South Korea, and the US, Microsoft warned that China may use AI-generated content to further its interests globally.