The European Union’s landmark AI Act has received a mixed reaction from the region’s tech industry: some welcome the legislation’s attempt to regulate the development and use of artificial intelligence, while others warn it could stifle innovation.
EU Agrees on AI Act
The Act, provisionally agreed upon on December 8th, sets out a risk-based approach to regulating AI, whereby systems deemed riskier face more stringent regulation.
Per the EU Commission, “minimal risk” AI systems, such as recommender systems or spam filters, will be largely exempt from regulation. AI systems considered “high-risk” will be subject to strict requirements, while those posing an “unacceptable risk” will be banned.
The full details of the agreement have yet to be officially released, so it remains uncertain which systems will be classified as high-risk. While many welcomed the risk-based approach, the tech industry is particularly concerned about the hefty requirements for systems deemed risky.
High-risk AI systems would be required to comply with several rules, including “risk-mitigation systems, high quality of data sets, logging of activity, detailed documentation, clear user information, human oversight, and a high level of robustness, accuracy and cybersecurity.”
EU Tech Companies Oppose Stringent Rules for AI Systems
Many fear that such requirements would place a heavy burden on developers, potentially leading to an exodus of AI talent and making the EU unattractive for AI projects.
“The new requirements – on top of other sweeping new laws like the Data Act – will take a lot of resources for companies to comply with, resources that will be spent on lawyers instead of hiring AI engineers,” said Cecilia Bonefeld-Dahl, director general of DigitalEurope.
France Digitale, an independent organisation that represents European start-ups and investors, also stated that AI projects or systems classified as high-risk would have to obtain a CE mark, which involves a long and costly process.
“We called for not regulating the technology as such but regulating the uses of the technology. The solution adopted by Europe today amounts to regulating mathematics, which doesn’t make much sense,” France Digitale stated.
Companies that fail to comply with the rules will be fined, per the Commission. Fines range from €35 million or 7% of global annual turnover for violations involving banned AI applications, to €15 million or 3% for violations of other obligations, and €7.5 million or 1.5% for supplying incorrect information.
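For a sense of how the tiered penalties scale, the sketch below models the fine schedule described above. It assumes the widely reported rule that the applicable ceiling is whichever is higher, the fixed amount or the turnover percentage; the tier labels and function name are illustrative, not taken from the Act's text.

```python
# Illustrative model of the AI Act's tiered fine ceilings, as described above.
# Assumption: the ceiling is the HIGHER of the fixed amount and the
# percentage of global annual turnover. Tier labels are hypothetical.

FINE_TIERS = {
    "banned_application":    (35_000_000, 0.07),   # €35M or 7% of turnover
    "other_obligation":      (15_000_000, 0.03),   # €15M or 3% of turnover
    "incorrect_information": (7_500_000, 0.015),   # €7.5M or 1.5% of turnover
}

def max_fine(violation: str, global_annual_turnover_eur: float) -> float:
    """Return the maximum possible fine in euros for a given violation tier."""
    fixed, pct = FINE_TIERS[violation]
    return max(fixed, pct * global_annual_turnover_eur)

# Example: for a firm with €1 billion turnover, 7% (€70M) exceeds the
# €35M fixed amount, so the percentage-based ceiling applies.
print(max_fine("banned_application", 1_000_000_000))  # 70000000.0
```

For large firms the percentage term dominates, which is why the turnover-based ceilings, rather than the fixed amounts, drive the compliance concerns quoted above.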