In two recent podcast appearances, a former OpenAI employee expressed concerns about the evolution of artificial intelligence (AI) and the people who control it, including his former boss, Sam Altman.
William Saunders appeared on the Center for Humane Technology’s podcast “Your Undivided Attention” in June and was interviewed by tech journalist Alex Kantrowitz earlier this month. In both conversations, he spoke about the inner workings of machine learning and said he was “concerned that things aren’t being done right” because “no human being can understand” the true process behind AI.
“They’re on a trajectory to change the world, but their priorities when it comes to releasing something are more like a product company,” Saunders told Kantrowitz about OpenAI. “It’s that combination that concerns me.”
According to his LinkedIn page, Saunders was a member of technical staff at OpenAI from February 2021 to February 2024. He worked on the Superalignment team, a unit tasked with controlling AI systems that may one day be smarter than humans. He then moved to work on understanding what goes on inside the large language models (LLMs) that power systems like ChatGPT.
Saunders is one of 13 former and current employees of OpenAI and Google DeepMind who signed the open letter “A Right to Warn About Advanced Artificial Intelligence,” which said it’s important for people in the AI community to be able to voice their concerns about the rapidly developing technology.
He resigned, citing concerns that the company prioritized product launches over safety.
“I really didn’t want to work for the Titanic of AI, so I quit,” Saunders said on Kantrowitz’s podcast. “During my three years at OpenAI, I would ask myself at times: Was OpenAI on a path closer to the Apollo missions or closer to the Titanic?”
Saunders called OpenAI’s approach to product releases “most disturbing.”
Altman, co-founder and CEO of OpenAI and the public face of AI, recently hinted at plans for future iterations of ChatGPT. Speaking at the 20th Aspen Ideas Festival in Colorado, Altman said the development of increasingly advanced AI, including AGI (artificial general intelligence), is “inevitable.”
While Saunders said he doesn’t personally believe he was “working on the Titanic” at OpenAI, he thinks future versions of chatbots like GPT-5 and GPT-6 could end up resembling the doomed liner, which was notorious for putting safety concerns on the back burner.
Saunders said there are “a lot of trade-offs” involved in launching new products, and while he noted that OpenAI will delay launch dates to conduct additional safety studies, he said there were “multiple instances” where “I and others at the company felt there was a pattern of pressuring people to ship and compromising our processes around safety, and preventable issues have arisen in the world.”
“These decisions are being made to meet our shipping dates, and they may be in areas where we could do additional work to improve the product, or areas where we’re rushing and things aren’t as solid,” Saunders said.
People don’t really understand how these LLMs work, including the engineers who build them, he said, and the technology “remains in a state where no one, including the folks at OpenAI, knows what the next frontier model will be able to do when we start training it, or even when it’s finished.”
Saunders offered an airplane analogy: If a company designs and builds a plane for short flights over land but then decides to fly it over the ocean, it needs to test the plane to make sure it can withstand those conditions.
“It’s the difference between trying to prevent a problem and trying to fix a problem after it occurs,” he said. “The problem is so big, I don’t want to see the first AI equivalent of a plane crash. I don’t know to what extent we can prevent this, but I really hope that we do everything we can.”
Saunders’ comments come as OpenAI recently restructured its safety team, with Altman himself and two other directors now heading the department.
At the same time, Altman co-founded a new AI Ethics Council as a kind of non-binding AI governance model aimed at ensuring that AI systems operate under an ethical framework and that traditionally under-represented communities have a say in the technological revolution.
Saunders, meanwhile, accused Altman of having a “history of unethical but technically legal behavior.” When OpenAI’s board of directors temporarily removed Altman from his position in November last year (he returned five days later), board members said they had lost confidence in him.
“What I would have liked OpenAI to do, and what I believed they would be more willing to do, is to take the time to get this right,” Saunders said. “If we find there are problems with the system, then we’ll figure out how to fix them.”
Do you have a story Newsweek should be covering? Have questions about this article? Contact us at [email protected].
Uncommon Knowledge
Newsweek is committed to challenging conventional wisdom, seeking common ground and finding connections.