The US Cybersecurity and Infrastructure Security Agency (CISA) has stressed the need to build security into artificial intelligence (AI) systems from the earliest stages of development. This proactive approach aims to prevent vulnerabilities from surfacing after deployment, when they can be both difficult and costly to fix. The agency’s call for secure AI design is part of its broader campaign to integrate security into the design and development phases rather than treating it as an afterthought.
Security at the core: CISA’s ongoing mission
In a recent blog post, CISA urged AI developers to adopt a “security by design” approach, reinforcing the principle that AI is, at its core, software. The agency pushed back on the mystique surrounding AI, emphasizing that its software-based nature subjects it to the same security requirements as any other software. This echoes a longstanding push by cybersecurity experts worldwide to treat security as an inherent component of software rather than a peripheral addition.
Shift in perspective: from externality to commitment
CISA Director Jen Easterly, in a speech earlier this year, called for a transformative shift in how the tech industry perceives security. Historically, security has often been relegated to secondary status, with its costs externalized to end users. Easterly advocated overhauling this approach by embedding security throughout the software development lifecycle, and she proposed redistributing liability so that software developers shoulder a greater share of the responsibility.
Machine learning’s interconnected challenges
CISA’s blog post draws attention to how tightly coupled machine learning systems are: a change to any single input can ripple through the entire model. This interdependence, sometimes summarized as “changing anything changes everything,” was highlighted in a 2014 paper by Google researchers, who likened unresolved maintenance problems in machine learning development to “the high-interest credit card of technical debt.”
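A minimal sketch of that entanglement in Python, using a synthetic dataset and an ordinary least-squares fit (the features, weights, and perturbation here are all hypothetical, chosen only to make the effect visible):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 1000

    def fit(X, y):
        # Ordinary least-squares weights for a linear model.
        return np.linalg.lstsq(X, y, rcond=None)[0]

    # Two features share a latent factor, so they are strongly correlated;
    # a third is independent.
    latent = rng.normal(size=n)
    X = np.column_stack([
        latent + 0.1 * rng.normal(size=n),   # feature 0
        latent + 0.1 * rng.normal(size=n),   # feature 1 (correlated with 0)
        rng.normal(size=n),                  # feature 2 (independent)
    ])
    y = X @ np.array([1.0, 2.0, 3.0]) + 0.1 * rng.normal(size=n)

    print("before:", fit(X, y).round(2))

    # Degrade only feature 0 (e.g., a noisier upstream data pipeline),
    # then retrain on the same labels.
    X2 = X.copy()
    X2[:, 0] += rng.normal(scale=0.5, size=n)
    print("after: ", fit(X2, y).round(2))
    # The weight on feature 1 changes too, even though feature 1 itself
    # was never touched -- changing one thing changed the others.

Degrading one input shifts learned weight onto a correlated feature that was never touched, which is why retraining and revalidating the whole system, not just the changed part, is part of the maintenance cost the paper describes.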
Tailored security practices for AI
While the blog post acknowledges that security by design is necessary for AI systems, it also recognizes the challenges unique to AI. CISA has compiled a list of essential, sector-agnostic security practices that apply equally to AI software. The agency stresses adherence to these guidelines, pointing to incidents in which threat actors have exploited known vulnerabilities in the non-AI software components surrounding AI systems.
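One concrete way to act on that guidance is to inventory a project’s dependencies and flag anything that drifts from versions the team has vetted. The sketch below assumes a Python environment; the allowlist is hypothetical and would in practice come from a reviewed lock file or a scanner such as pip-audit that checks versions against published advisories:

    from importlib.metadata import distributions

    # Hypothetical allowlist of vetted versions; in practice this would
    # come from a reviewed lock file rather than being hard-coded.
    VETTED = {
        "numpy": "1.26.4",
        "requests": "2.31.0",
    }

    # Walk every installed package and report version drift.
    for dist in distributions():
        name = (dist.metadata["Name"] or "").lower()
        expected = VETTED.get(name)
        if expected and dist.version != expected:
            print(f"drift: {name} {dist.version} (vetted: {expected})")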
Holistic approach to AI security
CISA notes that established, community-expected security practices and policies should govern every aspect of AI development: design, development, deployment, testing, data management, system integration, and vulnerability and incident management. Where AI model file formats are processed, robust measures must be in place to prevent untrusted code from executing, and using memory-safe languages is recommended as an additional safeguard.
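The model-file warning is concrete: some popular serialization formats, notably Python’s pickle, can execute arbitrary code during deserialization. One mitigation, sketched below using the restricted-unpickler pattern from Python’s own documentation, is to allow only an explicit list of classes to be reconstructed (the allowlist here is illustrative; formats such as safetensors sidestep the issue by storing only tensor data):

    import io
    import pickle

    # Only classes on this allowlist may be reconstructed while unpickling.
    # The entries are illustrative; a real list depends on the model format.
    ALLOWED = {
        ("builtins", "dict"),
        ("builtins", "list"),
        ("collections", "OrderedDict"),
    }

    class RestrictedUnpickler(pickle.Unpickler):
        def find_class(self, module, name):
            if (module, name) in ALLOWED:
                return super().find_class(module, name)
            # Anything else -- e.g., os.system smuggled in via __reduce__ --
            # is rejected instead of executed.
            raise pickle.UnpicklingError(f"blocked: {module}.{name}")

    def load_untrusted(data: bytes):
        return RestrictedUnpickler(io.BytesIO(data)).load()

    # A benign payload loads fine:
    print(load_untrusted(pickle.dumps({"weights": [0.1, 0.2]})))

    # A malicious payload that tries to run a shell command is blocked:
    class Evil:
        def __reduce__(self):
            import os
            return (os.system, ("echo pwned",))

    try:
        load_untrusted(pickle.dumps(Evil()))
    except pickle.UnpicklingError as e:
        print(e)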
Accountability and transparency in AI engineering
Beyond technical safeguards, CISA urges the AI engineering community to adopt practices that improve accountability and transparency, such as issuing vulnerability identifiers (CVEs, or Common Vulnerabilities and Exposures) for flaws found in AI systems and maintaining a comprehensive software bill of materials (SBOM) for both AI models and their dependencies. Privacy is not neglected either: the agency calls for adherence to fundamental privacy principles by default.
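At its simplest, a software bill of materials is a machine-readable inventory of what a system is built from. The sketch below is a simplified, hypothetical structure loosely inspired by formats such as CycloneDX, not an official schema; it records a model artifact’s content hash alongside the installed library versions:

    import hashlib
    import json
    from importlib.metadata import distributions

    def sha256_of(path: str) -> str:
        # A content hash lets consumers verify the exact model artifact.
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    # Hypothetical model artifact; created here so the sketch runs end to end.
    MODEL_PATH = "model.safetensors"
    with open(MODEL_PATH, "wb") as f:
        f.write(b"\x00" * 16)

    sbom = {
        "component": {
            "type": "machine-learning-model",
            "name": "example-classifier",  # hypothetical name
            "version": "1.0.0",
            "hash": {"alg": "SHA-256", "value": sha256_of(MODEL_PATH)},
        },
        # One entry per installed library, so downstream users can match
        # dependencies against published CVEs.
        "dependencies": sorted(
            ({"name": d.metadata["Name"], "version": d.version}
             for d in distributions()),
            key=lambda c: str(c["name"]),
        ),
    }

    print(json.dumps(sbom, indent=2))

Real SBOM tooling adds license, supplier, and provenance fields, but even a minimal inventory like this makes it possible to answer “are we affected?” when a CVE is published for a dependency.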
A pivotal moment for AI security
CISA’s call for AI security by design marks a pivotal moment for AI development. By putting security at the forefront of design and development, the agency is advocating a shift that could avert significant security breaches in the future. As AI systems become ever more pervasive, this proactive stance could pave the way for a more secure and resilient digital landscape.