In a decisive move towards ensuring the safety of artificial intelligence (AI) systems, the Biden administration has begun requiring developers of major AI systems to disclose their safety test results to the government. The requirement is part of the comprehensive approach outlined in an executive order signed by U.S. President Joe Biden three months ago, aimed at managing the rapidly evolving landscape of AI technology.
White House AI Council reviews progress
The White House AI Council is set to convene on Monday to assess the progress made in the implementation of the executive order. The 90-day goals outlined in the order included a mandate under the Defense Production Act, compelling AI companies to share crucial information, particularly safety test results, with the Commerce Department. Ben Buchanan, the White House special adviser on AI, emphasized the government’s intent to ensure the safety of AI systems before their release to the public.
Safety test disclosure mandate
Under the executive order, AI companies are required to disclose safety test results, a significant step towards establishing transparency and accountability in the AI industry. The disclosure involves sharing vital information with the Commerce Department, and the government is committed to ensuring that AI systems meet rigorous safety standards.
While software companies have committed to a set of categories for safety tests, there is currently no common standard for these tests. The National Institute of Standards and Technology, as outlined in the executive order, will play a pivotal role in developing a uniform framework for assessing safety. This standardized approach aims to bring consistency and reliability to safety assessments across the AI landscape.
AI’s impact on the economy and national security
AI has emerged as a paramount consideration for both economic prosperity and national security, prompting the federal government to take proactive measures. The launch of new AI tools, such as ChatGPT, capable of generating text, images, and sounds, has introduced new dimensions of investment and uncertainty. The Biden administration is not only focused on domestic regulations but is also actively engaging with international partners, including the European Union, to establish global standards for managing AI technology.
Commerce Department’s draft rule on U.S. cloud companies
In line with the government’s efforts, the Commerce Department has developed a draft rule specifically addressing U.S. cloud companies that provide servers to foreign AI developers. The rule aims to ensure that AI development using U.S.-based cloud infrastructure adheres to stringent safety and regulatory standards, reinforcing the government’s oversight of AI applications both domestically and abroad.
Nine federal agencies, including the Departments of Defense, Transportation, Treasury, and Health and Human Services, have completed risk assessments concerning AI’s integration into critical national infrastructure. These assessments are crucial in identifying potential vulnerabilities and ensuring that AI applications are deployed in a manner that safeguards national interests.