The UK’s National Cyber Security Centre (NCSC) has issued a warning about the potential for AI-driven malware to become a significant threat by 2025. In a recent report, the NCSC, an agency under the Government Communications Headquarters (GCHQ), stated that there is a “realistic possibility” that highly capable state actors could use AI to generate malware that evades current security filters. This could usher in a new era of cyber threats, with AI enhancing attacker capabilities in everything from discovering vulnerable devices and analyzing data to mounting social engineering attacks.
AI-generated malware on the horizon
The NCSC report highlights that AI has the potential to produce malware capable of bypassing existing security measures, but training such a model requires high-quality exploit data. The agency believes that some highly capable state entities may possess malware repositories extensive enough to train AI models effectively for this purpose. While the most advanced AI-driven cyberattacks are expected to emerge in 2026 or later, the most capable attackers are likely to be the earliest adopters of generative AI tools.
AI’s impact on vulnerable devices and data analysis
The NCSC predicts that AI will make it easier for attackers to discover vulnerable devices, shrinking the window defenders have to patch them before exploitation. Moreover, AI will enhance real-time data analysis, allowing attackers to identify valuable files swiftly and making disruption, extortion, and espionage operations more efficient.
The report notes that expertise, equipment, financial resources, and access to quality data are essential for harnessing advanced AI in cyber operations. Highly capable state actors are best positioned to leverage AI’s potential in sophisticated cyberattacks. However, even attackers with limited skills and resources are expected to benefit from AI advancements in the next four years.
AI empowering cyber criminals
At the lower end of the spectrum, cybercriminals employing social engineering attacks are anticipated to leverage consumer-grade generative AI tools, such as ChatGPT, Google Bard, and Microsoft Copilot. This could result in more convincing, better-localized phishing attempts. Additionally, ransomware gangs may use AI for data analysis, making data extortion attempts more effective by helping them identify the most valuable files quickly.
Challenges for cybersecurity practitioners
The NCSC predicts that AI-driven cyberattacks will increase in volume and impact over the next two years, intensifying the challenges faced by cybersecurity practitioners, who are already grappling with an evolving threat landscape that AI is expected to play a significant role in shaping.
The NCSC is closely monitoring the development of AI-enabled cyber threats. The agency’s annual CYBERUK conference in May will focus on the considerable threat that emerging technology poses to national security. Outgoing CEO Lindy Cameron emphasized the need to manage AI’s risks while harnessing its potential through responsible development.
Global efforts to address AI security risks
The NCSC’s warning follows the inaugural AI Safety Summit held in the UK, which produced the Bletchley Declaration, a global initiative to manage AI’s risks. As part of this effort, major AI developers have committed to giving governments pre-release access to their models for safety testing, in the interest of responsible AI development.
While the AI testing plan is a step in the right direction, it is not legally binding and lacks the backing of certain nations, raising questions about its effectiveness in addressing the growing threat of AI-driven cyberattacks.
The NCSC’s warning underscores the evolving threat landscape in cyberspace, where AI-enhanced malware capable of bypassing current security measures is on the horizon. This prospect poses challenges to cybersecurity practitioners, governments, and organizations alike. As the world grapples with the implications of AI in cyber threats, vigilance and international cooperation will be essential to staying ahead of emerging threats.