As the world embraces the potential of generative artificial intelligence (AI) to revolutionize industries and improve efficiency, a recent report from the Centre for Emerging Technology and Security (CETaS) at The Alan Turing Institute raises alarm about the unintended consequences of this powerful technology. While public debate has predominantly centered on specific risks posed by automated systems, the report highlights that unknown and unintended harms could present a significant threat to the national security of the United Kingdom.
The unforeseen dangers of generative AI
Titled “Generative AI and National Security: Risk Accelerator or Innovation Enabler?”, the report shines a light on the often-overlooked security risks associated with the widespread use of generative AI. While discussions have primarily focused on threats from malicious actors, such as cyberattacks and the generation of harmful content, the report urges policymakers and stakeholders to be equally vigilant about the inadvertent consequences of improper use of, and over-reliance on, AI outputs.
Amplifying threats: The dark side of generative AI
Generative AI has the potential to amplify the speed and scale of activities like cyberattacks and the creation of deceptive content. For instance, AI-powered phishing attacks can now produce more convincing communications, making it easier for cybercriminals to deceive their victims. Tools built with benign intent can thus end up empowering those who seek to exploit AI’s capabilities for harmful purposes.
One of the primary concerns highlighted in the report is the deployment of generative AI in critical national infrastructure and public services. The report warns that over-trusting AI outputs in these crucial domains could introduce serious security vulnerabilities. It also raises the issue of private-sector experimentation with AI, emphasizing the risks of an unchecked pursuit of AI advances.
Expert opinions and recommendations
To compile the report, the research team consulted with over 50 experts from government, academia, civil society, and leading private sector companies. The consensus among these experts was that unintended harms of generative AI are not receiving adequate attention compared to adversarial threats that national security agencies are more accustomed to countering.
Ardi Janjeva, a research associate at CETaS at The Alan Turing Institute, emphasized that generative AI, while offering opportunities for the national security community, is currently too unreliable and error-prone for the highest-stakes contexts. Policymakers must therefore adapt their thinking and operations to address the full spectrum of unintended harms arising from the improper use of generative AI.
The cumulative effect of misinformation
The report delves into political disinformation and electoral interference, raising particular concern about the cumulative effect of various generative AI technologies working together to spread misinformation at massive scale. It highlights the threat of deepfake videos, which can present realistic scenarios that are difficult to debunk in the lead-up to elections. For instance, an AI-generated video of a politician delivering a speech at an event they never attended becomes far more plausible when reinforced by accompanying audio, imagery, and fabricated news articles.
Combatting AI’s unintended consequences
The Alan Turing Institute has released the report to build on the momentum generated by the UK’s AI Safety Summit, which brought together tech and political leaders to discuss how artificial intelligence can be implemented responsibly to prevent societal harm. The report offers policy recommendations for the newly established AI Safety Institute, as well as for other government departments and agencies, to address both malicious and unintentional risks associated with generative AI.
These recommendations encompass guidelines for evaluating AI systems and for promoting the responsible use of generative AI in intelligence analysis. The report also acknowledges that autonomous AI agents, an early use case for the technology, could accelerate both opportunities and risks in the security environment, and it provides recommendations to ensure their safe and responsible deployment.
Generative AI holds immense promise for innovation across sectors, but it also presents serious national security risks when poorly managed. The CETaS report underscores the need for policymakers, industry leaders, and the broader public to remain vigilant about the technology’s unintended consequences. As the technology evolves, proactive measures are needed to mitigate these risks and to ensure that generative AI is harnessed for the benefit of society rather than exploited to harm it. With elections on the horizon, the report is a timely reminder that responsible AI implementation is essential to safeguarding the integrity of democratic processes and national security.