In a significant move toward regulating artificial intelligence (AI) across Europe, the European Union member states have unanimously voted to approve the final text of the EU’s AI Act. The development was confirmed by Thierry Breton, the EU Commissioner for the Internal Market, who hailed the Act as a historic and unprecedented step in the global landscape of AI regulation.
The AI Act takes a risk-based approach to overseeing AI applications: it targets high-risk uses such as governmental biometric surveillance, regulates AI systems similar to ChatGPT, and requires transparency measures before these technologies can enter the market. The vote follows the political agreement reached in December 2023 and the subsequent work to finalize the text for legislative approval.
Regulatory framework and its implications
The agreement marks the culmination of negotiations, with the Coreper vote by the permanent representatives of all EU member states taking place on February 2. This paves the way for the Act to proceed through the legislative process, including a vote by a key EU lawmaker committee scheduled for February 13, followed by a vote in the European Parliament expected in March or April.
The AI Act’s approach rests on the principle that the riskier the AI application, the greater the responsibility placed on its developers. This is particularly relevant for AI systems used in critical areas such as job recruitment and educational admissions. Margrethe Vestager, the European Commission’s Executive Vice-President for A Europe Fit for the Digital Age, emphasized that the focus is on high-risk cases to ensure that the development and deployment of AI technologies align with the EU’s values and standards.
European Commission boosts AI with new initiatives
The AI Act is expected to apply from 2026, with certain provisions taking effect earlier so the new regulatory framework can be phased in gradually. Beyond establishing the regulatory groundwork, the European Commission is also taking proactive steps to support the AI ecosystem within the EU. This includes establishing an AI Office tasked with monitoring compliance with the Act, particularly for high-impact foundation models that pose systemic risks.
Furthermore, the Commission has announced initiatives to bolster European AI developers, such as upgrading the EU’s supercomputer network to strengthen capacity for training generative AI models, with the aim of keeping Europe at the forefront of AI innovation while adhering to ethical and regulatory standards.
This legislative milestone reflects the EU’s commitment to leading the way in the responsible and ethical development of AI technologies. By setting a global precedent, the EU aims to balance innovation with the protection of fundamental rights and societal values.