A recent report from NCC Group scrutinizes the intersection of artificial intelligence (AI) and cybersecurity, shedding light on both the promise and the limitations of generative AI for code vulnerability detection. The report, titled “Safety, Security, Privacy & Prompts: Cyber Resilience in the Age of Artificial Intelligence,” offers a comprehensive analysis of a range of AI cybersecurity use cases.
Generative AI and code vulnerabilities
The explosive growth of generative AI in late 2022 sparked discussion and concern about its implications for cybersecurity. One focal point of this discourse is the potential security risk posed by generative AI chatbots, ranging from the inadvertent exposure of sensitive business information to malicious actors using these models to bolster their cyberattacks.
One of the primary areas of interest within the report is the feasibility of employing generative AI chatbots for code vulnerability assessment. Can these AI systems, when provided with source code, conduct an interactive form of static analysis and accurately pinpoint security weaknesses? The report’s findings are mixed: while generative AI delivers real productivity gains in code and software development, its effectiveness at detecting code vulnerabilities remains variable.
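The report does not publish its test harness, but the basic workflow is easy to picture: hand a chat model a piece of source code and ask it to act as a security reviewer. The minimal sketch below assumes the OpenAI Python client; the model name and prompt wording are illustrative choices, not taken from the report.

```python
# Minimal sketch of LLM-assisted code review using the OpenAI Python
# client (pip install openai). The prompt and model choice below are
# assumptions for illustration, not the report's methodology.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a security code reviewer. Identify potential vulnerabilities "
    "(e.g., injection, buffer overflows, insecure deserialization) in the "
    "code you are given. For each finding, cite the relevant lines and a "
    "CWE identifier where applicable."
)

def review_source(source_code: str, model: str = "gpt-4o") -> str:
    """Ask a chat model to perform an interactive static review."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": f"Review this code:\n\n{source_code}"},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    # A deliberately vulnerable snippet (command injection) to review.
    snippet = 'import os\nos.system("ping " + user_input)'
    print(review_source(snippet))
```

Given the variable detection quality the report describes, any findings surfaced this way would still need human triage before being acted upon.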
On the flip side, the report highlights a promising application of machine learning (ML) models in the realm of cybersecurity. Specifically, ML models can be instrumental in identifying novel zero-day attacks, enabling an automated response to safeguard users from malicious files. To validate this concept, NCC Group sponsored a master’s student at University College London’s Centre for Doctoral Training in Data Intensive Science (CDT DIS) to develop a classification model for identifying malware.
The results of this endeavor are compelling: the classification model achieved an accuracy of 98.9%. This underscores the potential of ML models to bolster cybersecurity defenses by swiftly identifying and mitigating emerging threats.
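The report does not detail the student’s architecture or dataset. As a rough illustration of the approach, the sketch below trains a random-forest classifier on a placeholder matrix of static file features (for example, PE header fields, section entropy, or imported API counts); the feature set, model choice, and synthetic data are all assumptions, and a real evaluation would use curated benign and malicious samples.

```python
# Illustrative sketch of a static-feature malware classifier using
# scikit-learn. Features, model, and data are placeholders; the UCL
# CDT DIS project's actual design is not described in the report.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Placeholder data: 1,000 samples x 40 numeric static features,
# labelled 0 = benign, 1 = malicious.
X = rng.normal(size=(1000, 40))
y = rng.integers(0, 2, size=1000)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

print(f"Accuracy: {accuracy_score(y_test, clf.predict(X_test)):.3f}")
```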
Harnessing threat intelligence with AI
Another critical facet explored in the report is threat intelligence, a pivotal component of proactive cybersecurity. Threat intelligence involves continuously monitoring online data sources for insights into newly identified vulnerabilities, evolving exploits, and emerging trends in attacker behavior. This data, often unstructured text from forums, social media, and the dark web, can be a goldmine of information.
Machine learning models can play a vital role in processing this data, extracting key cybersecurity details and identifying patterns in attacker tactics, techniques, and procedures (TTPs). Armed with these insights, defenders can pre-emptively implement additional monitoring or controls when new threats pose a significant risk to their business or technology landscape.
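The report does not prescribe a specific pipeline, but one common pattern is to treat TTP extraction as text classification: map scraped posts to MITRE ATT&CK technique IDs. The sketch below uses a TF-IDF vectorizer and logistic regression from scikit-learn; the posts and labels are an invented toy dataset for illustration only.

```python
# Hedged sketch of TTP tagging as text classification: unstructured
# threat-intel posts -> MITRE ATT&CK technique IDs. The training
# snippets and labels below are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny toy training set: forum/dark-web style snippets and their labels.
posts = [
    "dumping lsass memory with procdump to grab credentials",
    "phishing campaign uses malicious excel macro attachments",
    "exfiltrating the archive over dns tunneling to bypass the proxy",
    "spearphish with a link to a fake o365 login page",
]
labels = ["T1003", "T1566", "T1048", "T1566"]  # ATT&CK technique IDs

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(posts, labels)

new_post = "attackers harvested credentials by scraping lsass"
print(model.predict([new_post])[0])  # likely T1003, given shared vocabulary
```

In practice, teams often start with keyword or indicator extraction and graduate to supervised models like this one once they have accumulated enough labelled intelligence data.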
In an era of rapid technological advancement, the role of AI in cybersecurity continues to evolve. While generative AI chatbots show promise in certain aspects of code development, their reliability at detecting vulnerabilities remains a matter of ongoing exploration. ML models, on the other hand, show considerable potential for identifying and combating novel cyber threats, making them a valuable asset in the defender’s arsenal.
It is crucial to acknowledge that AI in cybersecurity is not a one-size-fits-all solution. Rather, it complements human expertise and vigilance. The partnership between human cybersecurity professionals and AI systems is essential to strike a balance between harnessing the power of automation and maintaining the critical human oversight needed to navigate the ever-evolving threat landscape.