The recent AI summit led by UK Prime Minister Rishi Sunak brought global attention to the United Kingdom’s ambitions in artificial intelligence. While the summit emphasized the importance of safeguarding against future AI risks, it failed to address the pressing issue of AI-generated harms already affecting society.
The overlooked present harms
Law enforcement agencies and private companies have increasingly deployed facial recognition surveillance systems despite their alarming inaccuracy rates. These systems, underpinned by flawed algorithms, misidentify women and people of color at disproportionately high rates. Inaccurate facial recognition technology exacerbates tensions and distrust between the police and the public, as witnessed in the aftermath of events such as the Black Lives Matter protests and the Sarah Everard vigil.
Furthermore, the government has integrated AI deeply into public service delivery, with hidden algorithms making critical decisions about key aspects of public life. The 2020 A-level grading fiasco is a stark reminder of the havoc that faulty algorithms can wreak, leaving countless students with arbitrary outcomes. The Department for Work and Pensions has also been scrutinized for using secretive, invasive, and discriminatory algorithms that affect people’s access to housing, benefits, and council tax support. These algorithms automate bias against Britain’s poor, underlining the real-world consequences of unchecked AI in public services.
Future AI regulation and the Data Protection Bill
While discussions about regulating AI in the future are crucial, it is equally essential to address current problems and mitigate the harms caused by existing AI systems. Human rights and data protection laws remain the most effective safeguards against AI threats. However, the UK government is pushing a Data Protection Bill through Parliament that threatens to undermine these fundamental protections.
The Data Protection Bill deviates from European AI regulation trends and slashes legal safeguards, leaving the public exposed to the discriminatory impact of AI. It threatens to normalize mass automated decision-making, increasing the likelihood of decisions about individuals based solely on binary predictions without human involvement, empathy, or dignity. Vulnerable and marginalized communities stand to be disproportionately affected by these developments, as their privacy and equality are put at risk.
The need for action
The UK government faces a critical choice regarding its AI policies. While high-level summit pledges emphasize the importance of future AI protections, the government must prioritize addressing present AI threats. Mere performative gestures will not suffice; the legal safeguards must be protected and reinforced.
The AI summit has put the UK’s ambitions in the field on the global stage. But while the government’s focus on future AI risks is important, it must not ignore the real-world harms generated by existing AI systems. Facial recognition inaccuracies and hidden government algorithms are already affecting society, producing discrimination, distrust, and arbitrary outcomes. Meanwhile, the proposed Data Protection Bill threatens to undermine the human rights and data protection laws that currently shield the public from AI-related risks. The government must prioritize action over rhetoric, reinforcing the legal protections already in place and confronting the challenges AI poses today. In pursuing global AI leadership, the UK should not forget the immediate battles it faces to guard against automated threats and protect its citizens.