Ahead of the inaugural global summit on AI safety hosted by the UK on November 1 and 2, a coalition of leading experts has urged the establishment of an international AI safety treaty. The effort, led by prominent figures such as Yoshua Bengio and Yi Zeng, aims to address the challenges posed by artificial intelligence and ensure a coordinated global response to its development and deployment.
A unified call for global AI safety measures
As leaders convene for the AI Safety Summit, the call for an international AI safety treaty echoes loudly across borders. Signatories, ranging from Turing Award winners to industry leaders, emphasize the need for comprehensive measures to safeguard against the potential risks of advanced AI systems.
The proposed treaty centers on three key components: global compute thresholds, a CERN-like collaboration for AI safety, and a compliance commission akin to the International Atomic Energy Agency (IAEA). The experts stress the urgency of preventing the unchecked advancement of AI capabilities, highlighting the need for stringent safety measures and ethical considerations in the development of frontier AI systems.
Signatories, including Eleanor ‘Nell’ Watson and Bart Selman, express their support, underlining the importance of international cooperation in governing the excesses of AI development. The aim is not only to mitigate catastrophic risks but also to ensure the equitable distribution of AI benefits for the greater good.
Unveiling the voices behind the call
The initiative, organized by Tolga Bilge, brings together a diverse group of influential figures in artificial intelligence. The roster includes AI pioneers such as Yoshua Bengio, alongside experts like Yi Zeng, who has briefed the UN Security Council on the risks posed by AI. The coalition spans business, policy, academia, and industry.
Among the signatories, Gary Marcus and Victoria Krakovna underscore the need for concrete measures, advocating for a large-scale collaborative effort: a CERN for AI safety.
The experts convey a sense of urgency, emphasizing the need for global consensus and concerted action toward a future where AI is deployed safely. Geoffrey Odlum, in particular, draws parallels to successful international treaties in other domains, urging the political leaders of the major technological powers to muster the will to take on this diplomatic task without delay.
The urgency of an international AI safety treaty
As the global AI Safety Summit approaches, the central question is whether the international community is prepared to seriously negotiate an AI treaty. Will nations, from technological powerhouses like the United States to China, set aside their differences and contribute constructively to a robust framework of international regulation?
The call for an AI safety treaty that all concerned parties can sign is more than a technical necessity; it is a geopolitical imperative, demanding collective mobilization and a shared commitment to the responsible advancement of artificial intelligence.