As elections unfold worldwide, they face an evolving and formidable threat from foreign actors wielding a potent weapon: artificial intelligence. Election influence entered a new phase in 2016, when Russian disinformation campaigns targeted the U.S. presidential election through social media. In the years since, countries such as China and Iran have followed suit, using social media to influence elections in the U.S. and elsewhere. Heading into the 2023 and 2024 election cycles, the threat remains, but with a new element: generative AI and large language models with the potential to amplify the impact of disinformation.
The power of AI-driven propaganda
Generative AI tools, exemplified by ChatGPT and the more powerful GPT-4, can produce vast amounts of text rapidly, in any tone and from any perspective. That makes them uniquely suited to internet-era propaganda. The precise effect of these technologies on disinformation campaigns remains uncertain, but the world is about to find out.
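To make the cost point concrete, here is a minimal sketch of how little effort tonal variation takes: a few lines of Python asking a hosted model to restyle one deliberately benign message in several registers. The model name, prompt, and message are illustrative assumptions, and the script presumes an OpenAI API key in the environment.

```python
# Minimal sketch: restyling one benign message in several tones via a
# hosted model. Model name and prompt are assumptions for illustration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

MESSAGE = "Remember to check your voter registration before the deadline."
TONES = ["folksy", "urgent", "academic", "sarcastic", "local-news"]

for tone in TONES:
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat-capable model would do
        messages=[{"role": "user",
                   "content": f"Rewrite this in a {tone} tone: {MESSAGE}"}],
    )
    print(f"[{tone}] {reply.choices[0].message.content}")
```

Swap the loop for thousands of messages and personas and the marginal cost of each variant approaches zero, which is the economic shift that makes large-scale propaganda affordable.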
Election season is about to sweep across the democratic world, with many countries holding national elections. These contests matter enormously to the countries that have previously run social media influence operations. China's interests extend to Taiwan, Indonesia, India, and numerous African nations; Russia has its eye on the U.K., Poland, Germany, and the European Union as a whole; and the United States remains a focal point for many actors. As the cost of foreign influence falls, more countries can join the arena, especially with tools like ChatGPT making propaganda production cheap.
Election interference and the changing landscape
At a recent conference attended by representatives from all of the U.S. cybersecurity agencies, discussion centered on what to expect from election interference in 2024. Beyond the familiar players (Russia, China, and Iran), a new category emerged: “domestic actors.” That shift is a direct result of the falling cost of running AI-driven disinformation campaigns.
Challenges beyond content generation
While AI facilitates content creation, distribution remains the pivotal challenge for propagandists. Fake accounts are needed to post and amplify content until it goes viral. Companies like Meta have gotten better at identifying and taking down such accounts, as a recent crackdown on a Chinese influence campaign demonstrated. But disinformation is an arms race: as social media platforms change, propaganda outlets adapt, often shifting to messaging platforms like Telegram and WhatsApp, where detection and removal are harder. And TikTok, owned by the Chinese company ByteDance, is a natural platform for short AI-generated videos, further complicating the landscape.
Generative AI tools enable new production and distribution techniques, such as low-level propaganda at scale. Consider AI-powered personal accounts on social media that behave normally, posting about daily life and interests while occasionally injecting political content. These “persona bots” have minimal individual influence but can have a significant collective impact when deployed in large numbers.
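Detecting that pattern is as much a statistics problem as a content problem. The following sketch is a defensive counterpart, not anything a platform actually runs: it flags hypothetical accounts whose share of political posts sits in the narrow band a persona bot would aim for, enough to nudge but not enough to stand out. The keyword list, thresholds, and data shapes are all illustrative assumptions.

```python
# Hedged sketch: flagging accounts whose political-post share fits the
# "mostly benign, occasionally political" persona-bot pattern.
# Keywords and thresholds are toy values chosen for illustration.
POLITICAL_KEYWORDS = {"election", "ballot", "candidate", "vote"}

def political_ratio(posts: list[str]) -> float:
    """Fraction of an account's posts mentioning a political keyword."""
    hits = sum(any(k in p.lower() for k in POLITICAL_KEYWORDS) for p in posts)
    return hits / max(len(posts), 1)

def flag_persona_bots(accounts: dict[str, list[str]],
                      low: float = 0.05, high: float = 0.25) -> list[str]:
    """Flag accounts in the narrow band a persona bot would aim for."""
    return [name for name, posts in accounts.items()
            if low <= political_ratio(posts) <= high]

accounts = {
    "alice": ["great pasta tonight", "my dog is the best", "weekend hike pics",
              "new coffee spot downtown", "vote for candidate X!"],
    "bob": ["election fraud everywhere", "ballots are rigged",
            "vote by mail is a scam"],
}
print(flag_persona_bots(accounts))  # ['alice']: bob is overt, alice fits the band
```

A single weak signal like this is trivially evaded on its own; the point is that persona bots have a statistical signature, and defenders combine many such signals across large populations of accounts.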
Disinformation on AI steroids
The military entities behind election interference, such as those in Russia and China, are likely devising sophisticated tactics that go beyond 2016-era strategies. Before scaling up, these countries often test cyberattacks and information operations on smaller nations. Recognizing and cataloging these tactics early is crucial for countering new disinformation campaigns.
As AI-driven disinformation campaigns become more advanced, the U.S. must proactively fingerprint and identify AI-produced propaganda in other countries, such as Taiwan, where deepfake audio recordings have defamed political candidates. Without early detection and understanding, these campaigns may go unnoticed until they reach domestic shores.
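What “fingerprinting” might look like in practice is an open research question. One weak but well-known signal is statistical predictability: machine-generated prose tends to be more predictable to a language model than human writing. The sketch below, assuming the Hugging Face transformers and torch packages, scores a text's perplexity under GPT-2 and flags unusually low values. The model choice and threshold are illustrative only, and real detectors combine many such signals.

```python
# Hedged sketch: perplexity as one crude signal of machine generation.
# GPT-2 is used only as a convenient public model, not a real detector.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

model = GPT2LMHeadModel.from_pretrained("gpt2")
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Return GPT-2's perplexity on `text`; lower values often (not
    always) correlate with machine-generated prose."""
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])
    return torch.exp(out.loss).item()

SUSPICION_THRESHOLD = 20.0  # illustrative cutoff, not a validated value

sample = "The candidate's record speaks for itself, and voters deserve the truth."
score = perplexity(sample)
print(f"perplexity={score:.1f}:", "flag" if score < SUSPICION_THRESHOLD else "pass")
```

Cataloging such fingerprints where campaigns appear first, as in Taiwan, gives defenders a reference library before the same techniques reach domestic audiences.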
The need for research and defense
While some recent elections have proceeded without significant AI-driven disinformation issues, early awareness and research are essential for preparedness. Researchers must study the techniques employed in foreign campaigns to develop effective defense strategies. The cybersecurity community recognizes that sharing attack methods and their effectiveness is crucial for building robust defense systems.
The threat of AI-powered disinformation in elections is growing, necessitating a proactive and informed approach. As we enter this new era of election interference, countries must stay vigilant, foster international collaboration, and develop defense mechanisms capable of countering AI-driven propaganda. The sooner we understand these evolving tactics, the better we can protect the integrity of our democratic processes.