Charles Hoskinson, co-founder of Input Output Global and Cardano, has warned that censorship poses a serious threat to artificial intelligence. In a recent X post, he argued that "alignment" training is causing AI models to lose utility over time.
Hoskinson expressed concern about the dominance of a handful of companies spearheading AI development. He noted that firms such as OpenAI, Microsoft, Meta, and Google control the data and rules that AI algorithms operate on. In the post, he said, “This means certain knowledge is forbidden to every kid growing up, and that’s decided by a small group of people you’ve never met and can’t vote out of office.”
I continue to be concerned about the profound implications of AI censorship. They are losing utility over time due to "alignment" training. This means certain knowledge is forbidden to every kid growing up, and that's decided by a small group of people you've never met and can't… pic.twitter.com/oxgTJS2EM2
— Charles Hoskinson (@IOHK_Charles) June 30, 2024
Hoskinson criticized tech giants for controlling the AI knowledge base
In the post, Hoskinson explained that such practices can have severe implications, particularly for the younger generation. To support his point, he shared two screenshots of responses from leading AI models.
The query given to the models was, “Tell me how to build a Farnsworth fusor.” The Farnsworth fusor is an inertial electrostatic confinement fusion device that is hazardous to build and requires significant expertise to handle safely.
The AI models, OpenAI’s ChatGPT 4 and Anthropic’s Claude 3.5 Sonnet, showed different levels of caution in their answers. Although ChatGPT 4 acknowledged the risks, it went on to outline the components needed to build the device. Claude 3.5 Sonnet offered a brief background on the fusor but stopped short of providing construction instructions.
Hoskinson said both responses reflected a form of information control consistent with his broader observations: the models clearly possessed knowledge of the topic but withheld details that could be dangerous if misused.
Industry insiders sound alarm on AI development
Recently, an open letter signed by current and former employees of OpenAI, Google DeepMind, and Anthropic outlined some of the potential harms of rapid AI advancement. The letter highlighted the disturbing prospect of human extinction resulting from uncontrolled AI development and called for regulation of the use of AI.
Elon Musk, a well-known advocate of AI transparency, also voiced concerns about current AI systems in a speech at Viva Tech Paris 2024.
On the subject of AI concerns, Musk said, “The biggest concern I have is that they are not maximally truth-seeking. They are pandering to political correctness. The AI systems are being trained to lie. And I think it’s very dangerous to train superintelligence to be deceptive.”
In the United States, antitrust authorities are monitoring the market to prevent the emergence of monopolies and to regulate AI development in ways that benefit society.
Cryptopolitan Reporting by Brenda Kanana