As artificial intelligence (AI) continues to permeate various sectors, the public sector is no exception. The Ada Lovelace Institute, a leading think tank, issued a policy briefing in October 2023 urging public sector buyers to exercise caution and conduct thorough due diligence before deploying AI foundation models. These models, which include large language models (LLMs) and are characterized by their versatility across a wide range of applications, could transform public service delivery. However, the institute warns of risks associated with their adoption, including bias, discrimination, privacy breaches, and over-reliance on the private sector.
Understanding AI foundation models
Foundation models are a type of AI system designed to handle a multitude of tasks and use cases, unlike narrow AI systems tailored for specific functions. Large language models (LLMs), a subset of foundation models, are known for their scale and adaptability, making them suitable for diverse applications, from translation to document analysis.
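To make the “one model, many tasks” point concrete, the sketch below prompts a single text-generation model with two unrelated instructions, changing nothing but the prompt. It is illustrative only: it uses the Hugging Face transformers pipeline with GPT-2 as a small stand-in checkpoint, whereas a real deployment would rely on a much larger instruction-tuned model.

```python
# Minimal sketch: one foundation model serving several tasks, switched by
# prompt alone. GPT-2 is a small stand-in; it will run, but only a larger
# instruction-tuned model would follow these prompts well.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

tasks = {
    "translation": "Translate into French: The council meets on Tuesday.",
    "document analysis": "Summarize the key points of this notice: Bin collection moves to Wednesdays in July.",
}

for name, prompt in tasks.items():
    # The same weights handle every task; only the instruction changes.
    result = generator(prompt, max_new_tokens=60, num_return_sequences=1)
    print(f"[{name}] {result[0]['generated_text']}")
```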
Promised but unproven benefits
The potential applications of AI foundation models in the public sector are vast, ranging from document analysis and decision support to customer service. Proponents argue that these models can deliver greater efficiency, more personalized government communications, and improved knowledge management within government entities. However, the Ada Lovelace Institute cautions that these benefits remain unproven and speculative, and stresses the need for careful evaluation before implementation.
The risks of blind adoption
One significant concern raised by the institute is the risk of public sector organizations adopting foundation models merely because they are new technology, rather than because they are the most effective solution to a problem. This “technology for the sake of technology” approach can lead to costly and inefficient deployments.
The role of the private sector
Private technology providers have led the development of AI foundation models. However, the institute warns that over-reliance on these providers can create a misalignment between the technology’s capabilities and the specific needs of the public sector, which often handles sensitive information at considerable scale.
Addressing bias and trust issues
Automation bias, where users place unquestioning trust in AI model outputs, is another risk highlighted by the institute. To mitigate it, the institute emphasizes the importance of addressing issues such as bias and privacy violations during the training and development stages of these models.
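One mitigation pattern, consistent with the institute’s emphasis on human oversight though not prescribed in the briefing, is to route model outputs through a human-review gate rather than acting on them automatically. In this minimal sketch, the confidence score and the 0.85 threshold are hypothetical assumptions for illustration.

```python
# Illustrative sketch of a human-in-the-loop gate against automation bias.
# The confidence field and REVIEW_THRESHOLD are hypothetical assumptions;
# a real system would calibrate these per use case.
from dataclasses import dataclass

@dataclass
class ModelOutput:
    text: str
    confidence: float  # assumed to be reported by the model service

REVIEW_THRESHOLD = 0.85  # illustrative value, not from the briefing

def handle(output: ModelOutput) -> str:
    if output.confidence < REVIEW_THRESHOLD:
        # Low-confidence outputs are escalated to a caseworker, not auto-applied.
        return f"Escalate to human review: {output.text!r}"
    return f"Accept, with an audit-log entry: {output.text!r}"

print(handle(ModelOutput("Benefit claim approved", confidence=0.72)))
print(handle(ModelOutput("Office opening hours are 9-5", confidence=0.97)))
```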
Guidance for prudent procurement and governance
To ensure the safe and effective deployment of AI foundation models, the Ada Lovelace Institute offers several recommendations:
Detailed risk assessment: Public sector bodies should require detailed information about associated risks and mitigations upfront when procuring and implementing AI foundation models. This includes addressing issues like bias and privacy violations in text outputs.
Regular policy review: Policymakers should regularly review and update guidance on AI foundation models. Procurement requirements should uphold standards, and these standards should be incorporated into tenders and contracts.
Local data hosting: Data used by these models should be hosted locally, reducing the risk of privacy breaches through data sharing with private providers (a minimal illustration follows this list).
Third-party audits: Public sector organizations should mandate independent third-party audits of AI systems to ensure compliance with safety and ethical standards.
Pilot programs: Before widespread deployment, pilot limited use cases to identify and address risks and challenges.
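As a concrete illustration of the local-hosting recommendation above, the sketch below sends sensitive text only to an inference server assumed to run inside the organization’s own network, so no citizen data is shared with an external provider. The endpoint URL and JSON schema are assumptions made for illustration; the briefing does not specify any interface.

```python
# Minimal sketch of local data hosting: documents go only to a self-hosted
# model server on the organization's own network. The endpoint and payload
# format are illustrative assumptions.
import requests

LOCAL_ENDPOINT = "http://localhost:8000/v1/generate"  # assumed self-hosted server

def analyze_locally(document_text: str) -> str:
    response = requests.post(
        LOCAL_ENDPOINT,
        json={"prompt": f"Summarize: {document_text}", "max_tokens": 200},
        timeout=30,
    )
    response.raise_for_status()
    # Sensitive text never crosses the network boundary to a private provider.
    return response.json()["text"]

if __name__ == "__main__":
    print(analyze_locally("Housing benefit case notes: ..."))
```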
The Nolan Principles of Public Life
Effective governance of AI, according to the institute, should be underpinned by the Nolan Principles of Public Life (selflessness, integrity, objectivity, accountability, openness, honesty, and leadership), with particular emphasis on accountability and openness.
Concerns about deregulation
In July 2023, the Ada Lovelace Institute expressed concerns about the UK government’s “deregulatory” data reform proposals, which could undermine the safe development and deployment of AI. The report indicated that a lack of clear regulation and oversight across sectors could leave individuals without adequate protection or recourse in the event of AI-related harm.
House of Lords inquiry
In September 2023, the House of Lords launched an inquiry into the risks and opportunities presented by large language models (LLMs). Dan McQuillan, a lecturer in creative and social computing, cautioned against relying on LLMs to solve structural problems in the economy and public services, highlighting the potential for displacement of existing systems with uncertain long-term consequences.
The Ada Lovelace Institute’s policy briefing serves as a crucial reminder for public sector buyers of AI technology to exercise prudence and diligence in procurement and deployment. While AI foundation models promise transformative change, their adoption should be accompanied by thorough risk assessment, adherence to ethical standards, and a clear commitment to accountability and transparency. As AI capabilities continue to evolve, informed decision-making is essential to ensure that the benefits outweigh the risks for the public sector and society as a whole.