The European Union (EU) recently witnessed a contentious debate among its member states, the European Commission, and the European Parliament regarding the landmark Artificial Intelligence (AI) Act. This pivotal legislation, which aims to regulate the deployment of AI technologies within the EU, has drawn both praise and criticism.
Mher Hakobyan, Amnesty International's Advocacy Advisor on Artificial Intelligence, expressed disappointment over what he sees as a missed opportunity to safeguard human rights and privacy.
One of the central points of contention within the AI Act is the regulation of facial recognition technology. Advocates for stringent AI regulation had initially called for an unconditional ban on live facial recognition. However, the final version of the legislation falls short of this demand, allowing for limited use of facial recognition technology under certain safeguards.
Hakobyan calls this compromise a significant disappointment, arguing that no safeguards can effectively prevent the human rights violations that facial recognition technology enables. In his view, the failure to impose a complete ban is a missed opportunity to protect human rights, civic space, and the rule of law in the EU, all of which he believes are already under threat.
Export of harmful AI technologies
Another contentious aspect of the AI Act is the regulation of AI technology exports. Some critics argue that the legislation does not go far enough in preventing the export of AI technologies that could be used for harmful purposes, such as social scoring. The AI Act allows European companies to export AI technologies that may be deemed illegal within the EU, creating what Hakobyan sees as a dangerous double standard.
Hakobyan argues that it is troubling to allow European companies to profit from technologies deemed too harmful to human rights to be deployed in their home states. He believes stronger measures should have been taken to ensure that AI technologies developed within the EU are not used to violate human rights elsewhere in the world.
Despite the contentious nature of the debate, a provisional high-level political deal on the AI Act has been reached. Before it can become law, however, intensive technical meetings must still take place to finalize the text of the legislation.
Amnesty International’s call for stronger protections
Amnesty International, along with a coalition of civil society organizations led by the European Digital Rights Network (EDRi), has been advocating for robust AI regulation in the EU that prioritizes and protects human rights. Their call for a comprehensive ban on the use, development, production, sale, and export of facial recognition technology for identification purposes has been a focal point of their advocacy efforts.
The AI Act’s outcome reflects the challenge of balancing innovation with the protection of fundamental rights in the rapidly evolving field of artificial intelligence. The decision to allow limited use of facial recognition technology under safeguards acknowledges the potential benefits of AI while attempting to mitigate its risks. However, critics such as Hakobyan argue that these safeguards are insufficient and that an outright ban on certain AI applications is necessary to prevent human rights abuses.
The debate surrounding the AI Act also highlights the complexities of regulating AI technologies on a global scale. While the EU is taking steps to regulate AI within its borders, the export of AI technologies remains a contentious issue. Striking a balance between supporting European innovation and preventing the misuse of AI tools abroad is a challenge that policymakers continue to grapple with.