A committee of MIT leaders and scholars has released a series of policy briefs outlining a comprehensive framework for the governance of artificial intelligence (AI), intended as a resource for U.S. policymakers. The overarching goal of the papers is to bolster U.S. leadership in the rapidly evolving field of AI while mitigating the harms the technology could cause, and to encourage its responsible deployment for the benefit of society.
A practical approach to AI oversight
The main policy paper, titled “A Framework for U.S. AI Governance: Creating a Safe and Thriving AI Sector,” proposes an approach to regulating AI that leverages existing U.S. government entities responsible for overseeing relevant domains. This approach acknowledges that many AI applications can be regulated by the same agencies that currently govern similar human activities. The key emphasis is on aligning regulations with the intended purpose and function of AI tools.
Dan Huttenlocher, dean of the MIT Schwarzman College of Computing, highlighted the importance of building upon existing regulatory foundations: “As a country, we’re already regulating a lot of relatively high-risk things and providing governance there. … We’re not saying that’s sufficient, but let’s start with things where human activity is already being regulated, and which society, over time, has decided are high risk. Looking at AI that way is the practical approach.”
Defining purpose and intent
One essential aspect of this framework is the requirement that AI providers clearly define the purpose and intent of their AI applications in advance. This step would make it possible to identify the appropriate regulations and regulators for each AI tool. Whether an AI tool is used in health care or any other regulated sector, it should be held to the same legal standards as a human performing the same activity.
Moreover, the framework acknowledges that AI systems often exist in multi-layered “stacks,” in which several tools work together to deliver a specific service. While primary responsibility for problems with a given service falls on its provider, the framework recognizes that builders of general-purpose AI tools should share accountability when their technologies contribute to those problems.
Auditing and self-regulation
To enhance oversight and accountability, the policy framework suggests advancing the auditing of new AI tools. This could take several forms, including government-initiated audits, user-driven audits, or audits arising from legal liability proceedings. Public standards for auditing, whether established by a nonprofit entity or a federal organization, would be essential for any of these approaches to work.
Furthermore, the framework explores the possibility of creating a new government-approved “self-regulatory organization” (SRO) akin to the Financial Industry Regulatory Authority (FINRA). Such an organization focused on AI could accumulate domain-specific knowledge and adapt to the rapidly changing AI landscape while remaining under government oversight.
Addressing complex legal matters
Beyond these general principles, the framework recognizes several specific legal challenges in the realm of AI. Copyright and intellectual property issues related to AI are already subjects of litigation and require careful consideration. Additionally, the framework acknowledges the existence of “human plus” legal issues, where AI capabilities surpass those of humans. These issues encompass topics such as mass surveillance tools and may necessitate special legal considerations.
In addition to regulatory aspects, the policy papers emphasize the importance of promoting research to harness AI’s potential for the benefit of society. For example, one paper explores the idea that AI could augment and aid workers rather than replace them, leading to long-term economic growth shared across society.
This diverse range of analyses reflects the committee’s commitment to addressing AI regulation from a broad interdisciplinary perspective. It underscores the importance of policymakers considering both the technical aspects of AI and its societal implications.
MIT’s role in shaping AI governance
MIT, as a leader in AI research, believes it has a crucial role to play in shaping AI governance. David Goldston, director of the MIT Washington Office, notes, “Since we are among those creating technology that is raising these important issues, we feel an obligation to help address them.” The committee’s objective is not to hinder AI but to advocate for its responsible development and governance.
In the words of Dan Huttenlocher, “Working in service of the nation and the world is something MIT has taken seriously for many, many decades. This is a very important moment for that.”
A unified effort for the future of AI governance
The committee’s release of these policy papers marks a significant step in bridging the gap between AI enthusiasts and those concerned about its consequences. While the committee recognizes the immense potential of AI, it also underscores the necessity for governance and oversight. These experts in the field are sending a clear message: AI can and should be harnessed for the betterment of society, but it must be done responsibly and with robust regulatory frameworks in place.