The Biden-Harris Administration continues its commitment to advancing the responsible development of artificial intelligence (AI) with the announcement of voluntary commitments from eight prominent AI companies. Building upon the initial commitments secured in July, these new pledges serve as a critical bridge to government action in ensuring the safety, security, and trustworthiness of AI technology. The news comes as the Administration works on an Executive Order on AI that emphasizes the protection of Americans’ rights and safety.
A comprehensive approach to responsible AI
Since taking office, President Biden, Vice President Harris, and the entire Administration have been proactive in managing the risks and maximizing the benefits of AI. Recognizing the importance of AI in our rapidly evolving world, the Biden-Harris Administration has collaborated closely with leading AI companies to promote responsible AI development. These voluntary commitments are a key component of the Administration’s multifaceted strategy.
Welcoming new commitments
Today, U.S. Secretary of Commerce Gina Raimondo, White House Chief of Staff Jeff Zients, and senior administration officials gathered with industry leaders to unveil the second round of voluntary commitments. Eight prominent AI companies—Adobe, Cohere, IBM, Nvidia, Palantir, Salesforce, Scale AI, and Stability—have stepped forward to bolster the development of AI technology that is safe, secure, and trustworthy.
Principles that drive responsible AI
These commitments underscore three fundamental principles that must underpin the future of AI:
1. Putting safety first: The companies commit to rigorous internal and external security testing of AI systems before release, collaborating with independent experts to address biosecurity, cybersecurity, and broader societal risks. They will also share vital information with industry peers, governments, civil society, and academia to manage AI risks effectively.
2. Putting security first: The companies commit to investing in cybersecurity and insider threat safeguards to protect proprietary and unreleased model weights, the core of AI systems. They will also facilitate third-party discovery and reporting of vulnerabilities in their AI systems, ensuring quick identification and resolution of issues.
3. Earning the public’s trust: The companies will develop technical mechanisms to clearly indicate when content is AI-generated, enhancing creativity while reducing the potential for fraud and deception. They will publicly report on their AI systems’ capabilities, limitations, and areas of appropriate and inappropriate use, addressing both security and societal risks. Furthermore, they will prioritize research to mitigate harmful bias, discrimination, and privacy concerns while leveraging AI to tackle critical societal challenges.
Global collaboration for responsible AI
The Biden-Harris Administration has not only engaged with industry leaders but also consulted with various countries and international partners to develop these commitments. Nations including Australia, Brazil, Canada, Chile, France, Germany, India, Israel, Italy, Japan, Kenya, Mexico, the Netherlands, New Zealand, Nigeria, the Philippines, Singapore, South Korea, the UAE, and the UK have contributed to this collective effort. These commitments align with initiatives like Japan’s leadership of the G-7 Hiroshima Process, the UK’s Summit on AI Safety, and India’s leadership as Chair of the Global Partnership on AI, fostering international collaboration in the responsible development of AI.
A broader commitment to AI responsibility
Today’s announcement is part of a broader commitment by the Biden-Harris Administration to ensure the safe and responsible development of AI. Other initiatives include:
- AI cyber challenge: In August, the Administration launched a two-year competition called the “AI Cyber Challenge” to use AI to protect critical software that powers the internet and essential infrastructure.
- Consumer protection and civil rights: In July, Vice President Harris convened leaders from various sectors to discuss AI risks and reaffirm the Administration’s commitment to protecting the American public from harm and discrimination.
- Engagement with experts: President Biden met with AI experts in San Francisco in June to gather insights on seizing opportunities and managing AI risks.
- CEO engagement: In May, the President and Vice President met with CEOs of AI innovation companies, emphasizing the importance of responsible, trustworthy, and ethical innovation with safeguards.
- Blueprint for an AI bill of rights: The Administration published a landmark blueprint to safeguard Americans’ rights and safety.
- Federal agency guidance: The Office of Management and Budget is set to release draft policy guidance for federal agencies, ensuring AI systems prioritize safeguarding the American people’s rights and safety.
- Investment in AI research: The National Science Foundation announced a $140 million investment to establish seven new National AI Research Institutes, furthering responsible AI research and development.
- National AI R&D strategic plan: The Administration released a strategic plan to advance responsible AI.
A safer and more responsible AI future
With these commitments, the Biden-Harris Administration continues its effort to foster the responsible development of AI. The voluntary pledges from leading AI companies represent a significant step toward building safer AI and reinforce the Administration’s commitment to safeguarding the rights and safety of all Americans. As AI innovation accelerates, the Administration remains resolute in its mission to protect the public and advance responsible AI technology.