OpenAI Safety Experts Are Fleeing The Company

• https://www.technocracy.news, By: Lucas Nolan

Fortune reports that Daniel Kokotajlo, a former OpenAI governance researcher, recently revealed that nearly half of the company's staff focused on the long-term risks of superpowerful AI have left in the past several months. The departures include prominent researchers such as Jan Hendrik Kirchner, Collin Burns, Jeffrey Wu, Jonathan Uesato, Steven Bills, Yuri Burda, Todor Markov, and cofounder John Schulman. These resignations followed the high-profile May exits of chief scientist Ilya Sutskever and researcher Jan Leike, who co-headed the company's "superalignment" team.

OpenAI, founded with the mission to develop AGI in a way that "benefits all of humanity," has long employed a significant number of researchers dedicated to "AGI safety" – techniques for ensuring that future AGI systems do not pose catastrophic or existential dangers. However, Kokotajlo suggests that the company's focus has been shifting towards product development and commercialization, with less emphasis on research to ensure the safe development of AGI.

Kokotajlo, who joined OpenAI in 2022 and quit in April 2024, stated that the exodus has been gradual, with the number of AGI safety staff dropping from around 30 to just 16. He attributed the departures to individuals "giving up" as OpenAI continues to prioritize product development over safety research.

The changing culture at OpenAI had become apparent to Kokotajlo even before the boardroom drama of November 2023, when CEO Sam Altman was briefly fired and then rehired, and three board members focused on AGI safety were removed. For Kokotajlo, that incident sealed the deal, leaving no turning back, and he believes Altman and president Greg Brockman have been consolidating power ever since.

