The race is on. A worldwide pursuit to rein in the boundless frontiers of artificial intelligence (AI) is underway, reflecting our global society’s struggle to grapple with the implications of rapidly progressing technology.
Emerging AI tools like ChatGPT, backed by tech behemoth Microsoft, are coming under scrutiny from national and international governing bodies.
This global endeavor underlines a growing realization of the pressing need to set the rules of the game as AI continues its relentless advance.
A shifting global legal landscape for AI
Australia is looking to fortify its legal stance on AI, inviting input from the nation’s key scientific advisory bodies. Aiming to craft a comprehensive strategy to regulate AI, the government is on the brink of a new era in technology policy.
Meanwhile, in the United Kingdom, the Financial Conduct Authority is taking strides to enhance its understanding of AI, seeking insights from the prestigious Alan Turing Institute and other academic institutions.
The goal is clear: to devise new guidelines that keep pace with the fast-changing AI environment. With the competition regulator beginning to assess AI’s impact on consumers, businesses, and the economy, the UK is laying the groundwork for a robust regulatory infrastructure.
Not to be left behind, China’s cyberspace regulator has announced a plan to manage AI services. A new policy is in the works, requiring companies to submit security assessments before launching new AI products.
The European Union is stepping up, advocating for the AI industry to commit to a voluntary code of conduct that would provide safeguards while laws are being developed. EU tech chief Margrethe Vestager has said a draft could be ready very soon, with the industry expected to sign up promptly.
Even as the AI Act makes its way through the European Parliament, EU lawmakers are aiming to tighten the reins further, proposing a ban on facial surveillance. Privacy watchdogs across Europe are joining forces, setting up a task force to scrutinize AI tools like ChatGPT.
Gearing up for a regulated future
Meanwhile, France and Italy are investigating potential breaches of privacy rules by ChatGPT, reflecting the nations’ vigilance in ensuring that AI adheres to data protection norms.
At the same time, both countries are taking a balanced approach, acknowledging the transformative potential of AI. G7 leaders convened in Hiroshima, Japan, likewise recognized the importance of AI governance.
They agreed to initiate a dialogue about the technology’s regulation, aptly named the “Hiroshima AI process,” and pledged to report their findings by the end of 2023.
The G7 digital ministers advocated for a “risk-based” regulation of AI, emphasizing the need for practical and effective legislation.
In the United States, efforts to regulate AI are gaining momentum. The Federal Trade Commission has pledged to harness existing laws to counter some of the potential perils of AI, including the enhancement of the power of dominant firms and the intensification of fraud.
With a proposed bill to create a task force that would study AI policies and identify threats to privacy, civil liberties, and due process, the U.S. is taking a resolute stand on establishing strong AI governance.
In this global race to regulate AI, the final destination remains unclear. With each nation taking strides to address the challenges and opportunities presented by AI, the world watches with keen interest.