In an era where artificial intelligence (AI) is rapidly reshaping the technological landscape, the White House has taken a significant step. On October 30, 2023, President Joe Biden issued an executive order on the “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.” The directive aligns with global efforts, including the G-7’s support for the “Hiroshima Process International Code of Conduct for Organizations Developing Advanced AI Systems” and the United Kingdom’s upcoming AI Safety Summit.
CISOs at the forefront of AI integration and security
Chief Information Security Officers (CISOs) are at the forefront of this new era. The executive order not only highlights the potential of AI but also underscores the dual-use nature of these technologies, which can be exploited for malicious purposes. CISOs must now navigate a landscape where AI’s benefits are balanced against its risks, especially concerning national security and intellectual property.
The order sets out eight guiding principles, including ensuring safety and security, protecting privacy, advancing equity and civil rights, and promoting innovation and competition. These guidelines apply not only to industry leaders but also to government agencies, with particular emphasis on those with a national security footprint.
Global initiatives and compliance challenges
The global nature of AI development and its regulation presents a unique challenge. With different countries and regions potentially adopting varying guidelines and regulations, CISOs must stay informed and adaptable to ensure compliance. The lack of harmonisation across borders could lead to complex compliance scenarios, necessitating a proactive and informed approach from cybersecurity leaders.
Government agencies leading AI regulation
The National Institute of Standards and Technology (NIST) is tasked with developing guidelines and best practices for AI, focusing on safety, security, and trustworthiness. Similarly, the Department of Homeland Security (DHS) has outlined its role in forming the AI Safety and Security Advisory Board (AISSB) and developing AI safety and security guidance for critical infrastructure.
The Cybersecurity and Infrastructure Security Agency (CISA) is set to leverage AI and machine learning tools for threat detection and prevention, highlighting the government’s commitment to using AI to enhance cybersecurity defences.
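The order does not prescribe specific tooling, but machine-learning-assisted detection of the kind described here often takes the form of anomaly detection over security telemetry. The sketch below is a minimal, hypothetical illustration using scikit-learn’s IsolationForest on made-up login features; it is not drawn from any agency’s actual pipeline, and the feature set and thresholds are assumptions for illustration only.

```python
# Illustrative only: a minimal anomaly-detection sketch using scikit-learn's
# IsolationForest. Features and thresholds are hypothetical, not any agency's tooling.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical features per login event: [hour_of_day, failed_attempts, data_transferred_mb]
baseline_events = np.array([
    [9, 0, 12.4],
    [10, 1, 8.1],
    [11, 0, 15.0],
    [14, 0, 9.7],
    [16, 1, 11.2],
])

new_events = np.array([
    [10, 0, 10.5],   # resembles normal working-hours activity
    [3, 14, 980.0],  # off-hours, many failures, large transfer: likely flagged
])

# Train on baseline "normal" activity, then score new events (-1 means anomalous).
model = IsolationForest(contamination=0.1, random_state=42)
model.fit(baseline_events)

for event, label in zip(new_events, model.predict(new_events)):
    status = "anomalous" if label == -1 else "normal"
    print(f"event {event.tolist()} -> {status}")
```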
Protecting intellectual property in the AI era
The executive order also addresses the significant threat AI poses to intellectual property. Through the National Intellectual Property Rights Coordination Center, DHS is creating a program to help AI developers mitigate AI-related risks to their intellectual property. This initiative will involve partnerships with law enforcement and industry, underlining the collaborative approach needed to safeguard intellectual property in the AI age.
Industry’s role and the path forward
Industry leaders like IBM emphasise the importance of open innovation in addressing AI safety concerns. A diverse ecosystem involving creators, developers, and academics is crucial to advancing AI safety science and fostering market competition.
As AI continues to be integrated into various products and services, CISOs are urged to assess these tools rigorously. They must demand provenance and demonstrable test results to ensure that the AI and ML tools they adopt are secure and reliable.
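As one hedged illustration of what “demanding provenance” can look like in practice, the sketch below checks a downloaded model artifact against a vendor-published SHA-256 digest before it is allowed into the environment. The file name and digest are hypothetical placeholders; a real assurance programme would typically also verify signatures, review a software bill of materials, and examine documented test results.

```python
# Illustrative only: verify a downloaded model artifact against a checksum
# published by the vendor. File name and expected digest are hypothetical.
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_artifact(path: Path, expected_digest: str) -> bool:
    """Return True only if the artifact's digest matches the published value."""
    return sha256_of(path) == expected_digest.lower()

if __name__ == "__main__":
    artifact = Path("vendor_model.onnx")  # hypothetical artifact name
    published = "<sha256 digest from the vendor's release notes>"  # placeholder
    if artifact.exists() and verify_artifact(artifact, published):
        print("Digest matches the published value; provenance check passed.")
    else:
        print("Digest mismatch or file missing; do not deploy this artifact.")
```

A simple check like this catches tampered or mislabelled artifacts early; the harder part, which the order pushes vendors toward, is ensuring a trustworthy published value exists to check against in the first place.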
The White House’s executive order on AI marks a new chapter in cybersecurity leadership. CISOs must now adapt to a rapidly evolving landscape where AI’s potential is matched by its risks. Staying informed, ensuring compliance across different jurisdictions, and collaborating with government and industry are key to navigating this new era effectively.