It’s been months since President Biden issued a landmark Executive Order on Artificial Intelligence, yet the implications of the Order remain a subject of deliberation.
The Executive Order directs federal agencies to take a range of actions aimed at promoting AI innovation in the United States, rooting out bias in AI systems, addressing national security concerns, and advancing the country’s leadership in the technology worldwide.
Key Points of Biden’s AI Executive Order
Overall, the Order is a step in the right direction toward ensuring that America doesn’t lag behind other leading economies in AI. But it’s equally important that the Order is implemented well, addressing the risks without stifling innovation in the space.
Many experts praised the directives for addressing key areas of concern, including ensuring that AI systems are safe, secure, and trustworthy. The Order also calls for guidance on “content authentication and watermarking” to clearly label AI-generated content and prevent deception.
The Order also directs actions to address discrimination in AI algorithms, promote responsible use of AI in sectors such as healthcare and education, and otherwise accelerate and regulate the development and use of AI in the United States.
Can One Size Fit All?
From a critical lens, however, some experts question how these directives will play out in practice, given that the regulatory approach resembles a “one-size-fits-all” model. According to Alon Yamin, co-founder and CEO of Copyleaks, that is one gap in the Order.
“I think really focusing a bit more on the different types of solutions for different content types and understanding and distinguishing between them is one point that I thought was missing a little bit,” Yamin told Fox Business. “You can’t have one solution for all.”
The crux of the matter is that Biden’s AI order doesn’t explicitly delve into tailoring solutions or regulations based on content types and risk levels. Not all AI applications or algorithms pose equal threats.
Facial recognition used for law enforcement carries inherent risks of bias and discrimination, while a product recommendation algorithm might raise only minor privacy concerns. Categorizing AI systems based on their potential for harm allows for targeted regulations: high-risk applications undergo rigorous scrutiny, while low-risk systems are spared the same regulatory burden.
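To make the idea concrete, here is a minimal sketch of how a risk-tiered approach might map application types to levels of oversight. The tier names, example categories, and mapping are hypothetical illustrations of the concept discussed above; the Executive Order does not prescribe any such scheme.

```python
from enum import Enum

class RiskTier(Enum):
    # Hypothetical tiers for illustration only; the Order
    # does not define a formal risk taxonomy like this.
    HIGH = "rigorous scrutiny (e.g., audits, bias testing)"
    LOW = "lighter-touch oversight (e.g., transparency disclosures)"

# Illustrative mapping of application types to tiers, echoing the
# examples in the text above.
RISK_MAP = {
    "facial_recognition_law_enforcement": RiskTier.HIGH,
    "product_recommendation": RiskTier.LOW,
}

def required_scrutiny(application: str) -> str:
    """Return the level of oversight a hypothetical regulator
    might apply to a given AI application type."""
    tier = RISK_MAP.get(application)
    if tier is None:
        return "unclassified: assess case by case"
    return tier.value

if __name__ == "__main__":
    print(required_scrutiny("facial_recognition_law_enforcement"))
    print(required_scrutiny("product_recommendation"))
```

The point of the sketch is simply that a tiered taxonomy lets regulators concentrate scrutiny where the potential for harm is greatest, rather than applying a single rulebook to every system.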