In a recent report, the Government Accountability Office (GAO) raised concerns that the United States government is not prepared for its growing adoption of artificial intelligence (AI) and has yet to establish comprehensive standards for its use. The report highlights the rapid growth of AI adoption within government agencies but underscores the absence of a clear roadmap and governing standards, a gap that could pose significant security risks.
Growing AI adoption across government agencies
The GAO’s 96-page report documents the expanding role of AI and machine learning in non-military government agencies. It reveals that AI is currently being employed in 228 distinct ways across the federal government, with nearly 50% of these applications emerging in the past year alone. The majority of these uses, approximately 70%, relate to scientific research or to improving internal agency management.
Notably, while the report lists 71 different AI use cases, only 10 of them were publicly disclosed. Examples of disclosed uses include the Department of Commerce’s application of AI to track wildfires, automated wildlife population counting, and NASA’s AI-driven monitoring of global volcanic activity. The Department of Homeland Security also uses AI to identify border activities of interest by analyzing camera and radar data.
However, the report reveals that many AI applications remain undisclosed: federal agencies revealed only about 70% of the 1,241 active and planned AI use cases, withholding more than 350 uses deemed too sensitive for public disclosure. This secrecy, particularly pronounced at agencies such as the Department of State, raises concerns about transparency and oversight of government AI applications.
GAO expresses concerns over the lack of AI standards and policies
While government agencies are increasingly embracing AI for various applications, the GAO report emphasizes the critical need for standardized policies to guide the responsible acquisition and utilization of AI technology from the private sector. The absence of clear guidelines poses potential risks to national security and the well-being of American citizens.
The report points out that, without consistent government-wide guidance, federal agencies are developing AI policies on their own, which could result in approaches that diverge from established best practices. This not only undermines the effectiveness of AI implementation but also raises concerns about the consequences for national welfare and security.
A significant issue highlighted by the GAO is the Office of Management and Budget’s (OMB) delay in issuing guidance. A 2020 federal law on AI usage in government required the OMB to provide draft guidelines to agencies by September 2021, but the guidelines were not released until November 2023, more than two years past the deadline. This delay has hindered the development of cohesive AI policies across federal agencies.
AI’s role in law enforcement and privacy concerns
In a related report issued in September, the GAO raised concerns about law enforcement’s use of AI, specifically facial recognition technology. That report found that thousands of facial recognition searches were conducted between 2019 and 2022 by officials who had not received proper training, potentially leading to wrongful arrests through mistaken identity. The finding underscores the importance of responsible AI use and the need for clear policies and oversight.
The GAO’s report serves as a critical wake-up call for the US government, highlighting both the rapid expansion of AI adoption within government agencies and the lack of comprehensive standards and policies to govern its use. While AI holds immense potential for improving government operations and services, the absence of clear guidelines poses significant security and privacy risks. The government’s delay in issuing essential guidelines further exacerbates the situation. As AI continues to play an increasingly prominent role in society, addressing these concerns and establishing robust governance frameworks is imperative.