Has Safety Taken a Back Seat at OpenAI?

After co-founder Ilya Sutskever left the firm earlier this week, Jan Leike, a prominent researcher, announced on Friday morning that “safety culture and processes have taken a backseat to shiny products” at the company.

Jan Leike said in a series of posts on the social media platform X that he joined the San Francisco-based startup because he believed it would be the best place to conduct AI research. 

Leike co-led OpenAI’s “Superalignment” team with Sutskever, the co-founder who also quit this week.

OpenAI’s Superalignment Team Is No Longer Intact

Leike’s Superalignment team was formed at OpenAI last July to tackle the core technical challenges of building safety measures as the company develops AI that can reason like a human.

Leike’s statements came after a WIRED report claimed that OpenAI had completely dissolved the Superalignment team, which was tasked with addressing the long-term risks of AI.

Also read: OpenAI’s chief scientist, Ilya Sutskever, bids farewell

Sutskever and Leike were not the only employees to leave. At least five more of OpenAI’s most safety-conscious workers have quit or been dismissed since last November, when the board attempted to remove CEO Sam Altman, only to watch him maneuver his way back into the role.

OpenAI Should Become a Safety-First AGI Company

Across several posts, Leike pointed to the technology’s most contentious prospect: machines that are as generally intelligent as humans, or at least able to perform many tasks just as well. He wrote that OpenAI needs to become a safety-first AGI company.

In response to Leike’s posts, OpenAI CEO Sam Altman expressed gratitude for Leike’s contributions to the company and sadness at his departure.

Altman said in an X post that Leike is right and that he would write a longer post on the topic in the coming days. He added,

“We have a lot more to do; we are committed to doing it.” 

Leike has left OpenAI’s Superalignment team, and John Schulman, a co-founder of the company, has taken over.

But the team has been hollowed out, and Schulman already has his hands full ensuring the safety of OpenAI’s existing products. How much meaningful, future-focused safety work can OpenAI still produce? There seems to be no satisfactory answer.

Jan Leike Has Ideological Differences With Management

As its name suggests, OpenAI originally intended to share its models freely with the public. The company now says that making such powerful models available to anyone could be harmful, so the models have become proprietary.

Leike said in a post that he had been disagreeing with OpenAI’s leadership about the company’s core priorities for quite some time, until the disagreement finally reached a breaking point.

Leike resigned, and his last day at the company was Thursday. He didn’t sugarcoat the resignation with warm send-offs or any hint of confidence in OpenAI’s leadership. On X, he posted simply, “I resigned.”

A follower of Leike commented that he was delighted Leike was no longer part of the team, claiming that “woke” ideologies are not in line with humanity, and that the less aligned they get, the more they are put into AI.

Also read: OpenAI secures Reddit content for ChatGPT improvement

The follower said he would also want a definition of alignment from all the aligners. He was referring to a post in which Leike told his remaining OpenAI colleagues that he believes they can “ship” the cultural change the company needs.

The world’s leading AI company appears to be changing course on the safety measures experts have long stressed, and the departure of its top safety researchers seems to confirm it.


Cryptopolitan reporting by Aamir Sheikh
