OpenAI’s Superalignment team, responsible for developing ways to steer and control “superintelligent” AI systems, was promised 20% of the company’s compute resources, according to a person on the team. But requests for even a fraction of that compute were often denied, hampering the team’s work.
The issue, among others, prompted several team members to resign this week, including co-lead Jan Leike, a former DeepMind researcher who, while at OpenAI, was involved with the development of ChatGPT, GPT-4, and ChatGPT’s predecessor, InstructGPT.
Leike explained his reasons for resigning on Friday morning. “I have been disagreeing with OpenAI leadership about the company’s core priorities for quite some time, until we finally reached a breaking point,” Leike wrote in a series of posts on X. “I believe much more of our bandwidth should be spent getting ready for the next generations of models, on security, monitoring, preparedness, safety, adversarial robustness, (super)alignment, confidentiality, societal impact, and related topics. These problems are quite hard to get right, and I am concerned we aren’t on a trajectory to get there.”
OpenAI did not immediately respond to a request for comment regarding the resources promised and allocated to its team.
OpenAI formed the Superalignment team last July, led by Leike and OpenAI co-founder Ilya Sutskever, who also resigned from the company this week. The team had the ambitious goal of solving the core technical challenges of controlling superintelligent AI within four years. Comprising scientists and engineers from OpenAI’s previous alignment division along with researchers from other organizations across the company, the team was to contribute research informing the safety of both OpenAI’s models and those from other companies, and, through initiatives including a research grant program, to solicit and share work with the broader AI industry.
The Superalignment team did manage to publish a body of safety research and funnel millions of dollars in grants to outside researchers. But as product launches began to take up an increasing amount of OpenAI leadership’s bandwidth, the team found itself having to fight for more upfront investment, investment it believed was critical to the company’s stated mission of developing superintelligent AI for the benefit of all humanity.
“Building smarter-than-human machines is an inherently dangerous endeavor,” Leike continued. “But over the past years, safety culture and processes have taken a backseat to shiny products.”
Sutskever’s falling out with OpenAI CEO Sam Altman served as an added distraction.
Sutskever, along with OpenAI’s old board of directors, abruptly moved to fire Altman late last year over concerns that Altman had not been “consistently candid” with board members. Under pressure from OpenAI’s investors, including Microsoft, and many of the company’s own employees, Altman was eventually reinstated, much of the board resigned, and Sutskever reportedly never returned to work.
Sutskever was instrumental to the Superalignment team, not only contributing research but serving as a bridge to other divisions within OpenAI, according to that person. He also acted as an ambassador of sorts, impressing upon OpenAI’s key decision makers the importance of the team’s work.
Following the departures of Leike and Sutskever, another OpenAI co-founder, John Schulman, has moved to head up the type of work the Superalignment team was doing. But there will no longer be a dedicated team; instead, it will be a loosely associated group of researchers embedded in divisions throughout the company. An OpenAI spokesperson described it as “integrating [the team] more deeply.”
The fear is that, as a result, OpenAI’s AI development won’t be as safety-focused as it could have been.