In 2023, a peculiar trend gripped political elites on both sides of the Atlantic: an unfounded fear of a fictional god-like artificial intelligence, or artificial general intelligence (AGI). This baseless panic has manifested in policy summits, new regulations and apocalyptic warnings about a technology that does not yet exist. The absurdity of the situation is underscored by the fact that, while politicians indulge in AI doomsday scenarios, real-world problems like crumbling infrastructure, healthcare challenges and energy shortages go unaddressed.
The politicization of AI: a bizarre flight of fancy
In March, Tory MP Katherine Fletcher, a member of the House of Commons Science and Technology Committee, raised eyebrows with a bizarre speculation about a sentient computer deciding to exterminate every cow on the planet. This fantastical notion of a self-replicating, invincible AI became a focal point of discussion, with politicians demanding answers from tech representatives. The fear, however, was detached from any credible scientific basis.
The elite cult of AI: TESCREAL and effective altruism
The panic around killer AI has been described as a “collaborative metafiction”, akin to QAnon for the elite. This cluster of beliefs, labelled ‘TESCREAL’ by philosophers Émile P. Torres and Timnit Gebru, encompasses various techno-utopian ideologies: transhumanism, extropianism, singularitarianism, cosmism, rationalism, effective altruism and longtermism. These subcultures gained prominence and respectability thanks to significant financial support from Silicon Valley billionaires.
The effective altruism (EA) movement, in particular, has played a pivotal role in fueling the hype around AI. With an influx of approximately $500 million from wealthy EA supporters, AI has become an obsession within the movement. While some EAs view AI as a potential solution to global issues, others express deep concern about its existential risks, creating a dichotomy within both policy and tech circles.
The impact on policymaking: a surreal shift in priorities
The most astonishing revelation of 2023 is not any groundbreaking advance in artificial intelligence. It is the totalizing effect that AI mythologies have had on media and policy elites. Policymakers who dismissed AI’s supposed existential threat in 2021 have since succumbed to the influence of these techno-utopian beliefs. The fear of a Terminator-style AI has become the dominant narrative, leading to global AI safety summits and the prioritization of an imaginary crisis over tangible problems.
Silicon Valley billionaires, such as Facebook co-founder Dustin Moskovitz and convicted crypto-fraudster Sam Bankman-Fried, have played a significant role in financing organizations like Oxford University’s Future of Humanity Institute. This financial backing has lent respectability to once-obscure subcultures, contributing to the distortion of reality within political and academic circles.
The elite cult’s grip on Westminster
In the closing months of 2023, it is evident that politicians have unwittingly become actors in a drama scripted by followers of esoteric fringe ideologies. The fear of a god-like AGI, propagated by the TESCREAL and effective-altruism movements, has taken precedence over pragmatic policymaking. Swept up in this elite cult, policymakers have relinquished the right to be taken seriously, as AI mythologies overshadow real-world challenges. The consequences of this delusion extend beyond political posturing, shaping policy and public discourse in ways that bear little relation to the actual state of AI technology.
By allowing itself to be captivated by these narratives, our political class risks becoming detached from the pressing issues of our time. As the year closes, the need for critical scrutiny and a return to evidence-based policymaking is more urgent than ever. The influence of the AI mythologists must be questioned, and the political elite must refocus on tangible challenges rather than succumb to the allure of fantastical AI doomsday scenarios.