In a move that underscores growing transatlantic tensions over artificial intelligence (AI) regulation, the U.S. State Department has expressed reservations about the European Union’s (EU) AI Act. According to previously undisclosed documents obtained by Bloomberg, a U.S. government analysis contends that the AI Act, approved by the European Parliament in June, contains terms that are “vague or undefined,” raising concerns about the law’s potential impact on AI investment and the competitive landscape.
The AI Act, set to take effect in 2025, seeks to regulate high-risk AI systems, including facial recognition software. It would require companies developing AI tools akin to ChatGPT to disclose more information about the data used to train their systems. The EU’s three lawmaking institutions, the Commission, the Parliament, and the Council, must still agree on a final version of the law.
U.S. flags concerns over the EU’s approved AI law
Focusing on the version approved by the European Parliament, the U.S. analysis argues that the rules could disproportionately favor large tech companies able to finance the extensive training that machine learning systems require. Smaller firms without those resources may struggle to meet the compliance burden and suffer losses as a result.
The State Department’s analysis suggests that certain provisions of the AI Act could hinder the anticipated gains in productivity and drive jobs and investment to other markets. The concern is that the rules would curtail investment in AI research and development (R&D) and commercialization within the EU, undermining the competitiveness of European firms.
Of particular note is the difference in approach between the two sides: the EU focuses on regulating the development of AI models, while the U.S. is reportedly more interested in regulating how those models are used. The absence of comparable AI regulation in the U.S. has not stopped it from critiquing the EU’s approach.
A history of concerns and contrasts
This is not the first time the U.S. has voiced concerns over the EU’s AI law. Initial objections were raised when the European Commission first proposed the AI Act in 2021. In May, U.S. Secretary of State Antony Blinken opposed several of the European Parliament’s proposals during discussions in Sweden.
The analysis, which includes a detailed review of specific provisions of the law, was reportedly shared with European counterparts in recent weeks. The State Department declined to comment on the leaked documents, saying only that it wants to use its relationship with the EU to achieve digital solidarity on important bilateral matters.
The European Parliament, for its part, has portrayed the AI Act as a commitment to the responsible development of AI with a “balanced and human-centered approach.” The legislation proposes a risk-based categorization of AI systems, ranging from the lowest-risk uses, such as AI in video games or spam filters, to the highest-risk category, which includes AI for social scoring, the practice of assigning scores to individuals based on their behavior for purposes such as loans or housing.
Companies deploying high-risk AI systems would be obligated to provide detailed information about how those systems operate. This transparency requirement aims to ensure fairness and prevent discrimination, in keeping with the EU’s emphasis on responsible and ethical AI development.
The AI Act emerges against a backdrop of increasing warnings from experts about the potential threats posed by rapid AI development. As the EU endeavors to establish a regulatory framework, the U.S. continues to scrutinize its approach, emphasizing the need for a delicate balance between fostering innovation and addressing potential risks in the AI landscape.