The European Union’s cybersecurity agency, ENISA, has issued a stark warning about the 2024 European Parliament election. In its annual Threat Landscape report, the agency highlights the growing risk that artificial intelligence (AI) technologies, particularly AI chatbots and deepfake images and videos, pose to the integrity of the election process.
Surge in AI-powered disinformation
ENISA’s report points to a significant surge in the use of AI chatbots, deepfakes, and similar technologies for disinformation campaigns. These AI-driven tools can craft convincing phishing emails and messages that appear to be from legitimate sources, as well as create audio and video deepfakes that can deceive even the most discerning viewers.
Urgent call for vigilance
In response to these emerging threats, ENISA is calling on governments, the private sector, and the media to be on high alert in the lead-up to next year’s European Parliament election. The agency stresses the importance of spotting, debunking, and countering AI-generated disinformation online to safeguard trust in the EU electoral process.
Ransomware and DDoS attacks dominate the cyber threat landscape
The Threat Landscape report by ENISA reveals that ransomware and Distributed Denial of Service (DDoS) attacks remain the top cybersecurity threats, accounting for over half of all recorded incidents. These attacks have reached an unprecedented level in the European Union, with devastating consequences for organizations and individuals.
Global concerns over AI-generated disinformation
The proliferation of AI-generated disinformation is not limited to the European Union. Democratic countries worldwide, including the United States, the United Kingdom, and the Netherlands, are gearing up for elections in the coming months. The use of generative AI technology is raising concerns about its potential to further polarize political debate, amplify foreign influence, and spread fake news.
Three ways hackers exploit AI
ENISA has identified three key ways in which hackers are leveraging artificial intelligence to deceive the public:
Crafting convincing phishing emails: Hackers use AI to create phishing emails and messages that closely resemble legitimate sources, making it challenging for individuals to discern the authenticity of these communications.
Creating deceptive deepfakes: AI technology is harnessed to produce audio and video deepfakes that convincingly mimic real voices and appearances, increasing the risk of misinformation campaigns.
Data mining for malicious purposes: Cybercriminals employ AI to mine data, potentially compromising individuals’ personal information and other sensitive information.
Accessibility of AI tools
Henry Ajder, a visiting researcher at the University of Cambridge who specializes in deepfakes, emphasizes the growing accessibility of AI applications. Such technologies were previously prohibitively expensive or difficult to access, but they are now readily available in consumer-facing apps and websites, often at little or no cost.
“Unprecedented surge” in cyberattacks
ENISA’s report underscores an “unprecedented surge” in cyberattacks, with a focus on DDoS attacks aimed at taking down websites and ransomware attacks targeting organizations to steal sensitive data and demand ransoms.
Public administrations and healthcare sectors top the list of targets for cyberattacks, comprising 19% and 8% of all recorded incidents, respectively. Additionally, the report notes a significant increase in social engineering attacks, where hackers assume false personas to gain trust and access to sensitive networks and data.
Geopolitical motivations and extortion operations
The report also highlights that in 2023 cybercriminals increasingly shifted their focus to cloud infrastructures and were often driven by geopolitical motivations. They have expanded their extortion operations as well, not only through ransomware but also by directly targeting users.
The European Union faces a critical challenge in safeguarding the integrity of the 2024 European Parliament election against the growing threat of AI-driven disinformation. As AI technologies become more accessible and sophisticated, governments, businesses, and the media must collaborate to develop robust strategies to detect and combat these emerging threats to ensure the trustworthiness of the electoral process. With the global landscape of elections evolving, the fight against AI-generated disinformation is a concern shared by democracies worldwide.