The US AI Safety Institute, under the Department of Commerce, has announced five new members of its leadership team. Among the five is one who has predicted a 50% chance that artificial intelligence development will end humanity. But that prediction is not all he is known for; he also has some very important innovations to his name.
New members of the AI safety team
There has been an ongoing argument that artificial intelligence will become an existential threat to humans, but many experts say the threat is exaggerated and overhyped, and that such predictions could hinder advancement in the field. Against this perception of harm, the newly appointed team, working under the National Institute of Standards and Technology (NIST), is tasked with encouraging development so that AI becomes beneficial to people.
NIST Director and Under Secretary of Commerce for Standards and Technology, Laurie E. Locascio, said,
“I am very pleased to welcome these talented experts to the U.S. AI Safety Institute leadership team to help establish the measurement science that will support the development of AI that is safe and trustworthy. They each bring unique experiences that will help the institute build a solid foundation for AI safety going into the future.”
Among the new members is Adam Russell, chief vision officer, who is also director of the AI division of the Information Sciences Institute at the University of Southern California. Prof. Rob Reich of Stanford will be a senior adviser; he serves as associate director of the Stanford Institute for Human-Centered AI and is an expert in political science.
Mara Quintero Campbell is appointed chief operating officer and chief of staff; she currently serves as deputy chief operating officer at Commerce's Economic Development Administration. Mark Latonero will be head of international engagement; he comes from the White House Office of Science and Technology Policy, where he serves as deputy director of the National AI Initiative Office.
AI doomer on board—but there is more to it
The new member called an AI doomer is none other than Paul Christiano, appointed as the team's head of AI safety. A former OpenAI researcher and founder of the Alignment Research Center, he is also a pioneer of a foundational AI safety technique known as reinforcement learning from human feedback (RLHF). He earned his label as an AI doomer after being asked on a podcast about the chance of humanity's AI ambitions ending in doom; expressing serious concern, he said,
“I think maybe there’s something like a 10–20% chance of AI takeover, many [or] most humans dead. I take it quite seriously.”
He also said,
“Overall, maybe you’re getting more up to a 50/50 chance of doom shortly after you have AI systems that are human level.”
Source: Bankless podcast.
Christiano expressed these thoughts on the Bankless podcast with hosts David Hoffman and Ryan Sean Adams, but he insists that his views are not as extreme as those of the famous AI doomer Eliezer Yudkowsky, who suggests an all-out doom scenario.
US Secretary of Commerce Gina Raimondo also said that to safeguard global leadership on responsible AI, ensure AI safety, and mitigate the risks associated with it, the nation's top talent is needed, which is why these individuals, the best in their fields, were selected to join the executive leadership team at the U.S. AI Safety Institute.
The US Department of Commerce press release can be found here.