As the world gears up for a wave of democratic elections in 2023 and 2024, a growing concern looms: the prospect of foreign actors using artificial intelligence (AI) and generative language models to manipulate public opinion. This new era of disinformation, with tools like ChatGPT and GPT-4 at its forefront, presents an unprecedented challenge to the integrity of elections across the globe.
The evolution of election interference
The world first witnessed the power of social media as a vehicle for election interference in 2016, when Russian actors unleashed disinformation campaigns targeting the US presidential election. Since then, countries like China and Iran have followed suit, using social media to sway elections not only in the United States but also in other nations. As 2023 and 2024 approach, there is no reason to believe this trend will subside; if anything, it may intensify.
However, a new dimension has emerged: generative AI and large language models. These technologies enable the rapid creation of vast volumes of text on any subject, from any perspective, and in any tone. That capability is tailor-made for propaganda in the Internet age, and it poses a significant threat to democratic processes.
The proliferation of AI-powered propaganda
The introduction of AI models like ChatGPT in November 2022, followed by the release of the even more powerful GPT-4 in March 2023, has ushered in an era of uncertainty about how these technologies will affect disinformation. By drastically reducing the cost of generating propaganda, these AI systems make such campaigns feasible for a far broader range of countries.
US cybersecurity agencies have recently flagged a related and troubling trend: the rise of domestic actors in election interference, a development driven in part by the falling cost of producing AI-generated content.
The challenges of disinformation campaigns
While content generation has become more accessible, the real challenge for propagandists remains distribution. Disinformation campaigns depend on networks of fake accounts that strategically boost content into the mainstream. Tech giants like Meta have made strides in identifying and removing such accounts, but propagandists keep adapting.
In response to detection efforts, many propaganda outlets have shifted from Facebook to encrypted messaging platforms like Telegram and WhatsApp, where they are harder to trace. TikTok, a platform built for short, provocative videos, has also gained popularity among propagandists, and AI further streamlines the production of such content, making it more polished and more appealing to audiences.
The rise of low-level propaganda at scale
Generative AI tools open new avenues for propaganda, such as the deployment of AI-powered persona accounts on social media. These accounts blend seamlessly into online communities, sharing the mundane details of everyday life while sporadically injecting political content. Individually, these “persona bots” have minimal impact; deployed en masse, their influence becomes significant.
Countries like Russia and China, known for their sophisticated disinformation tactics, are likely exploring even more advanced strategies. These tactics will continue to evolve as AI technology advances.
The importance of fingerprinting and defense
Countries should be proactive in identifying and cataloging AI-powered disinformation tactics. Sharing information about these techniques and their effectiveness is essential to building robust defenses. Learning to recognize AI-produced propaganda in elections, as Taiwan has had to do with deepfake audio recordings defaming candidates, is crucial to counteracting its effects.
In this rapidly evolving landscape, researchers play a pivotal role in safeguarding democracy. They must study foreign disinformation campaigns to better prepare their own nations against potential threats.
While some recent elections have managed to avoid significant disinformation issues, it is imperative to remain vigilant. The sooner we understand the capabilities and strategies of AI-enhanced disinformation, the better equipped we will be to confront these challenges head-on.
As we approach a critical period of democratic elections worldwide, it is clear that the threat of AI-driven disinformation is real and evolving. To protect the integrity of our democratic processes, governments, tech companies, and researchers must work collaboratively to identify, counteract, and ultimately mitigate the impact of AI-produced propaganda.