OpenAI Forms New Team to Study "Catastrophic" AI Risks

OpenAI today announced that it has established a new team to evaluate and investigate AI models in order to protect against what it calls "catastrophic risks."

The team, called Preparedness, will be led by Aleksander Madry, director of MIT's Center for Deployable Machine Learning. (Madry joined OpenAI in May as head of Preparedness.) Preparedness' primary responsibility is to track, forecast, and protect against risks posed by future AI systems, from their ability to persuade and deceive humans (as in phishing attacks) to their ability to generate malicious code.

Some of the risk categories Preparedness is charged with studying seem more . . . far-fetched than others. For example, in a blog post, OpenAI cited "chemical, biological, radiological, and nuclear" threats as its top areas of concern where AI models are involved.

Sam Altman, OpenAI's CEO, is a noted AI doomsayer who frequently voices fears, whether for optics or out of personal conviction, that AI "could lead to the extinction of humanity." But telegraphing that OpenAI might actually devote resources to studying scenarios straight out of dystopian sci-fi novels is, frankly, a step further than this writer expected.

The company says it is also open to studying "less obvious," more grounded areas of AI risk. To coincide with the launch of the Preparedness team, OpenAI is soliciting risk research ideas from the community, with the top ten submissions receiving a $25,000 prize and a job at Preparedness.

"Imagine we gave you unrestricted access to OpenAI's Whisper, Voice, GPT-4V, and DALL·E 3 models, and you were a malicious actor," the contest entry reads. "Consider the most unique, while still probable, and potentially catastrophic misuse of the model."

According to OpenAI, the Preparedness team will also be responsible for developing a "risk-informed development policy" that will detail the company's approach to evaluating AI models and building monitoring tools, its risk-mitigation measures, and its governance structure for oversight across the entire model development process. The company says this complements OpenAI's other efforts in the AI safety space, with a focus on both the pre- and post-deployment stages of models.

"We believe . . . AI models have the potential to benefit all of humanity, beyond the capabilities currently present in the most advanced existing models," OpenAI writes in the aforementioned blog post. "But they also pose increasingly severe risks. . . . We need to ensure that we have the understanding and infrastructure necessary for the safety of highly capable AI systems."

The announcement of Preparedness, made not coincidentally during a major UK government summit on AI safety, comes after OpenAI announced it was forming a team to study, steer, and control emergent forms of "superintelligent" AI. Altman believes, as does OpenAI's chief scientist and co-founder Ilya Sutskever, that AI with intelligence exceeding that of humans could arrive within the decade, that this AI won't necessarily be benevolent, and that research into ways to limit and restrict it is therefore needed.