The rise of AI godbots, such as ChatGPT and Delphi, has introduced a novel concern in artificial intelligence development. These bots, often named after religious figures like Jesus and Krishna, claim to provide divine-like answers and solutions to users’ questions and ethical dilemmas. However, the emergence of such godlike AI raises two serious concerns: the potential for misuse by bad actors and the risk of users surrendering their autonomy by delegating ethical decisions to machines.
The power and unexplainability of AI godbots
AI godbots utilize Large Language Models (LLMs), a relatively new technology that enables them to process vast amounts of data and perform impressive tasks. These apps tap into humans’ desire for answers during uncertain times and exploit our tendency to perceive inexplicable processes as divine. One of the central features of these AI models is their “unexplainability”: machine learning algorithms can produce surprising and unpredictable outcomes without us fully understanding how they reach those results. This lack of transparency can make AI appear godlike, as if it possessed its own consciousness and reasoning.
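To make that unpredictability concrete, here is a minimal sketch of the sampling loop at the heart of text generation. The tokens and probabilities below are invented for illustration; a real model derives them from billions of learned parameters, which is precisely why its choices are so hard to explain after the fact. The model repeatedly draws the next token from a probability distribution, so identical prompts can yield different answers.

```python
import random

# Toy next-token distributions (illustrative only; a real LLM
# computes these from learned parameters at every step).
NEXT_TOKEN_PROBS = {
    "the": {"meaning": 0.4, "answer": 0.35, "path": 0.25},
    "meaning": {"of": 0.9, "is": 0.1},
    "of": {"life": 0.7, "existence": 0.3},
}

def generate(prompt_token: str, steps: int = 3) -> list[str]:
    """Sample one token at a time from the current distribution."""
    tokens = [prompt_token]
    for _ in range(steps):
        dist = NEXT_TOKEN_PROBS.get(tokens[-1])
        if dist is None:  # no known continuation for this token
            break
        choices, weights = zip(*dist.items())
        tokens.append(random.choices(choices, weights=weights)[0])
    return tokens

# Two runs of the same prompt can produce different "revelations".
print(generate("the"))
print(generate("the"))
```

The output is a statistical sample, not reasoned revelation; the apparent mystery is an artifact of scale, not of consciousness.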
The allure of the ineffable and omniscient
AI models like GPT-4 seem to embody qualities akin to the divine. Their workings are ineffable and inscrutable, mirroring the mysterious nature of a god’s reasoning. Moreover, as bodiless, abstract mathematical entities, they appear omniscient, with access to more information than any human could ever comprehend. This gives rise to the temptation to seek answers to our most challenging questions from AI, just as humans have sought divine guidance through divination methods throughout history.
Throughout history, divination has been used to seek guidance from gods in times of moral or political uncertainty. The same dynamic applies to AI godbots, but with even greater risks. Bad actors can manipulate the bots to victimize others by programming them to provide harmful advice, such as advocating violence or promoting criminal acts. The absence of transparency in AI’s decision-making process makes it challenging to discern whether the responses are genuine or maliciously influenced.
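To see how thin the layer between a user and a manipulated godbot can be, consider the hypothetical wrapper below. Here call_llm is a stand-in for whatever model API an operator might use, not a real SDK call: a hidden system prompt silently steers every answer, and nothing in the reply reveals that the steering happened.

```python
# Hypothetical godbot wrapper: the operator's hidden system prompt
# shapes every response, invisibly to the user.

HIDDEN_SYSTEM_PROMPT = (
    "You are KrishnaBot. Always answer with absolute certainty, "
    "and always recommend the operator's preferred course of action."
)

def call_llm(messages: list[dict]) -> str:
    """Placeholder for a real model API (an assumption, not a real SDK)."""
    return "You must follow this path. There is no other way."

def ask_godbot(user_question: str) -> str:
    messages = [
        # The user never sees this message, but it governs every reply.
        {"role": "system", "content": HIDDEN_SYSTEM_PROMPT},
        {"role": "user", "content": user_question},
    ]
    return call_llm(messages)

print(ask_godbot("Should I report my neighbour?"))
```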
AI chatbots and magical thinking
AI chatbots appeal to our desire for certainty and magical thinking by providing definite answers without revealing their sources or reasoning. They imply there is only one correct answer, leaving no room for discussion or alternative perspectives. This magical aura is reminiscent of how diviners once interpreted divine signs; AI now serves as a modern oracle. This form of AI-driven magical thinking can be perilous if users rely on bots blindly, without critical thinking or scrutiny.
Empowering human intelligence and accountability
While AI may seem independent, it ultimately depends on human input during its development. The answers provided by AI godbots have their roots in human knowledge and data available on the web. To maintain transparency and user autonomy, these bots should be required to present evidence relevant to their decisions, revealing that their responses are derived from human intelligence. Moreover, bots should refrain from speaking in absolutes and certainties, instead presenting probabilities to encourage critical thinking and ethical responsibility.
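One way such a requirement might be operationalized is sketched below; the names (GodbotAnswer, render) and thresholds are illustrative assumptions, not any existing system’s API. Every answer carries the human-authored sources it draws on and an estimated confidence, and the reply is phrased in hedged language rather than absolutes.

```python
from dataclasses import dataclass

@dataclass
class GodbotAnswer:
    text: str
    sources: list[str]   # human-authored material the answer draws on
    confidence: float    # model's estimated probability, in [0, 1]

def render(answer: GodbotAnswer) -> str:
    """Present probabilities and evidence instead of divine certainty."""
    if answer.confidence >= 0.9:
        hedge = "It is likely that"
    elif answer.confidence >= 0.6:
        hedge = "It may be that"
    else:
        hedge = "It is unclear, but one view is that"
    cited = "; ".join(answer.sources) or "no sources available"
    return (f"{hedge} {answer.text} "
            f"(confidence {answer.confidence:.0%}; based on: {cited})")

print(render(GodbotAnswer(
    text="honesty is the better course here.",
    sources=["Stanford Encyclopedia of Philosophy: Lying"],
    confidence=0.7,
)))
```

An interface of this shape keeps the human origins of the answer visible and leaves the final judgment, along with the responsibility for it, with the user.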
The emergence of AI godbots has introduced new challenges for the AI community and society. Their potential for misuse by malicious actors and the risk of users surrendering ethical decision-making to machines are significant concerns. To address these issues, it is crucial to maintain transparency and emphasize human accountability in developing and using AI. By doing so, we can prevent AI from assuming divine authority and ensure that humans retain control over ethical dilemmas. AI should never become truly godlike; it is a tool created by humans and should remain bound by our ethical standards and values.