Draft guidance on the use of AI by federal agencies has raised concerns among experts and stakeholders who fear it may hinder adoption of the technology. The Office of Management and Budget (OMB) released the draft guidance shortly after a White House executive order on AI. While OMB asserts that the guidance aims to limit AI harms through a risk-based approach, many worry it could create unnecessary bureaucracy and stifle innovation.
Stakeholders express skepticism
Various AI stakeholders and experts, including those from technology industry associations and trade groups, have voiced their concerns regarding the OMB’s draft AI guidance for federal agencies. The draft guidance requires agencies to employ minimum risk-management practices for AI tools, such as real-world performance testing for systems considered “safety-impacting” or “rights-impacting.” This has led to fears that even low-risk AI applications in government could be subject to new processes and requirements.
Risk-averse view on AI
During a recent House Oversight and Accountability subcommittee hearing on AI policy, Ross Nodurft, the executive director of the Alliance for Digital Innovation, expressed his concerns, stating, “I’m very concerned that it’s going to lead to a risk-averse view on AI when we right now need to be embracing the technology.” He emphasized the potential gap between the guidance as written and how individual agencies apply it in practice, a gap that could slow decision-making on AI adoption.
Ambiguity in definitions
Several technology trade groups have also raised concerns about ambiguity in OMB’s definitions for AI systems. The Information Technology Industry Council stated that the current definitions categorize almost all applications as high-risk, making it difficult for federal agencies to adopt AI tools. The Software Alliance warned that the thresholds for triggering the minimum practices are unclear, and the Software and Information Industry Association highlighted the potential for low-risk activities to be labeled as high-risk.
Addressing challenges
Nonprofit organizations like the Center for Democracy and Technology have called on OMB to provide support, such as a cross-agency working group, to help agencies categorize their AI use cases. They emphasize the need for clarity in the guidance and the definition of rights-impacting AI.
Defining AI remains a challenge
One overarching challenge is the lack of a common definition for AI capabilities. The Government Accountability Office noted that even within the government, definitions of AI vary. This lack of consensus can complicate efforts to regulate and guide AI usage effectively.
Balancing innovation and regulation
Experts, including Daniel Ho, a law professor at Stanford University, believe that while the OMB draft memo describes the opportunities and risks of AI well, its requirements may inadvertently stifle innovation through excessive regulation. Ho suggests that the government instead needs to develop its own technologists who can lead the adoption of responsible AI.
Concerns about overreach
A group of academics and former government officials cautioned against applying the minimum requirements to all of the government benefits and services listed in OMB’s definition of rights-impacting AI. They argued that these requirements, combined with the definition’s broad scope, could threaten core operations in various government programs and impede modernization efforts. They urged OMB to narrow and clarify the definition of rights-impacting AI and to distinguish among types of AI and types of benefit programs.
Tailoring processes to risk
Addressing concerns about one-size-fits-all requirements, Ho pointed out that not all AI applications can be treated the same. He argued that for certain programs, such as the U.S. Postal Service’s use of AI to read handwritten ZIP codes on envelopes, allowing individuals to opt out of AI in favor of human review would offer little benefit and could create significant operational challenges.
The OMB’s draft guidance on AI for federal agencies has sparked concern among stakeholders who fear that excessive regulation and ambiguous definitions could hinder AI adoption and innovation. While the need to manage AI risks is widely acknowledged, striking the right balance between regulation and innovation remains a challenge. Clearer definitions and approaches tailored to the risk level of each AI application may be needed to address these concerns and promote responsible AI adoption.