OpenAI’s CEO, Sam Altman, has resigned from the company’s internal Safety and Security Committee, a group established in May to oversee critical safety decisions. Altman’s departure comes at a time when U.S. lawmakers and former OpenAI employees have raised concerns over the company’s safety practices and legal compliance.
The Safety and Security Committee will now operate as an independent board-level committee that monitors safety and security issues. As part of the change, Zico Kolter, a professor at Carnegie Mellon University, has been appointed chair, joined by Quora CEO Adam D’Angelo, retired U.S. Army General Paul Nakasone, and former Sony executive Nicole Seligman, all of whom are current members of the OpenAI board. The group retains the authority to hold back the release of AI models until safety concerns are addressed.
Critics question OpenAI’s priorities as key staff depart
In a blog post following Altman’s departure, OpenAI restated its commitment to safety and noted that the committee had previously approved the safety of its newest model, o1. The group will also receive regular briefings from the company’s safety and security personnel to keep members informed about upcoming model releases. OpenAI added that it intends to strengthen its safety measures through technical reviews and clearly defined criteria for model launches.
The announcement comes in the wake of criticism, notably from five U.S. senators who raised concerns over the company’s safety practices this summer. Critics have also pointed out that more than half of the staff tasked with addressing the long-term risks of AI have left the company, and some argue that Altman is more focused on the business side of AI than on regulating the technology.
OpenAI’s expenditures have also been rising. In the first half of 2024, the company spent $800,000 on federal lobbying, a sharp increase from the $260,000 it spent in all of 2023. Earlier this year, Altman joined the U.S. Department of Homeland Security’s Artificial Intelligence Safety and Security Board, which advises on the use of AI in the country’s critical infrastructure.
Questions remain about the AI giant’s capacity for self-regulation. In an opinion piece for The Economist, former OpenAI board members Helen Toner and Tasha McCauley raised doubts about the company’s ability to police itself, arguing that profit incentives could undermine OpenAI’s self-governance and run counter to its long-term safety goals.
OpenAI seeks massive funding and plans shift to for-profit model
OpenAI is reportedly in discussions to secure a massive new funding round that could push its valuation to $150 billion. According to a recent report by Cryptopolitan, the company plans to raise $6.5 billion from investors at that valuation. OpenAI is also reportedly planning to shift from its current structure to a traditional for-profit model by 2025.
Altman announced the changes to staff during a recent meeting but offered few details on what to expect, saying the company had ‘grown too large’ for its current structure and was prepared to move away from the non-profit model.