Exclusive: OpenAI Researchers Alerted Board to AI Breakthrough Ahead of CEO Ouster

Nov. 22 (Reuters) – Ahead of OpenAI CEO Sam Altman's four-day exile, several staff researchers wrote a letter to the board of directors warning of a powerful artificial intelligence discovery that they said could threaten humanity, two people familiar with the matter told Reuters.

The previously unreported letter and the AI algorithm were key developments before the board fired Altman, the poster child of generative AI, the two sources said. Before his triumphant return late Tuesday, more than 700 employees had threatened to resign in solidarity with their fired leader and join backer Microsoft (MSFT.O).

The sources cited the letter as one factor in a longer list of grievances by the board that led to Altman's firing, among them concerns about commercializing advances before understanding the consequences. Reuters was unable to review a copy of the letter. The staff who wrote the letter did not respond to requests for comment.

After being contacted by Reuters, OpenAI, which declined to comment, acknowledged in an internal message to staff a project called Q* and a letter to the board before the weekend's events, one of the people said. An OpenAI spokesperson said the message, sent by longtime executive Mira Murati, alerted staff to certain media stories without commenting on their accuracy.

Some within OpenAI believe Q* (pronounced cue-star) could be a breakthrough in the company's search for what is known as artificial general intelligence (AGI), one of the people told Reuters. OpenAI defines AGI as autonomous systems that surpass humans at the most economically valuable tasks.

Given vast computing resources, the new model was able to solve certain mathematical problems, said the person, who requested anonymity because they were not authorized to speak on behalf of the company. Though it could only perform math at a grade-school level, acing such tests made researchers very optimistic about Q*'s future success, the sources said.

Reuters was unable to independently verify the capabilities of Q* claimed by the researchers.

“Veil of Ignorance”

Researchers consider mathematics a frontier for the development of generative AI. Today, generative AI is good at writing and language translation by statistically predicting the next word, and answers to the same question can vary widely. But conquering math, where there is only one correct answer, would imply that AI has greater reasoning capabilities resembling human intelligence. AI researchers believe this could be applied, for example, to novel scientific research.

Unlike a calculator, which can solve a limited number of operations, AGI can generalize, learn and comprehend.

In their letter to the board, the researchers flagged AI's prowess and potential danger, the sources said, without specifying the exact safety concerns noted in the letter. Computer scientists have long debated the danger posed by highly intelligent machines, for instance whether they might decide that it was in their own interest to destroy humanity.

Researchers have also flagged work by a team of "AI scientists," whose existence multiple sources confirmed. The group, formed by combining the earlier "Code Gen" and "Math Gen" teams, was exploring how to optimize existing AI models to improve their reasoning and ultimately perform scientific research, one of the people said.

Altman led the effort to make ChatGPT one of the fastest-growing software applications in history and drew from Microsoft the investment and computing resources needed to move closer to AGI.

In addition to announcing a slew of new tools at a demonstration this month, Altman hinted last week at a summit of world leaders in San Francisco that he believed major progress was on the horizon.

“Four times in OpenAI’s history, most recently in the last few weeks, I’ve had the privilege of being in the room as we’ve pushed the veil of ignorance back and the frontier of discovery forward. Getting to do that is the professional honor of a lifetime,” he said at the Asia-Pacific Economic Cooperation summit.

The next day, the board fired Altman.

Reporting by Anna Tong and Jeffrey Dustin in San Francisco and Crystal Hu in New York; editing by Kenneth Li and Lisa Shumaker



Anna Tong is a San Francisco-based Reuters correspondent reporting on the technology industry. She joined Reuters in 2023 after working as a data editor at the San Francisco Standard. Tong previously worked as a product manager at a technology startup, and at Google, where she helped research user insights and ran a call center. Tong graduated from Harvard University. Contact: 4152373211

Jeffrey Dustin is a Reuters correspondent based in San Francisco, reporting on the technology industry and artificial intelligence. He joined Reuters in 2014, initially writing about airlines and travel in the New York bureau. Dustin graduated from Yale University with a degree in history. He was part of a team investigating lobbying activities by Amazon.com around the world, for which he received a SOPA award in 2022.

Crystal reports on venture capital and startups for Reuters. She covers Silicon Valley and beyond through the lens of money and people, with a focus on growth-stage startups, technology investing, and AI. She previously covered M&A for Reuters, breaking stories on President Trump’s SPACs and Elon Musk’s Twitter fundraising. Before that, she reported on Amazon for Yahoo Finance, and her research into the company’s retail operations was cited by members of Congress. Crystal began her journalism career writing about Chinese technology and politics. She has a master’s degree from New York University and enjoys matcha ice cream as much as a good scoop at work.