Governments and industry agree that while AI offers great potential to benefit the world, appropriate guardrails are needed to reduce risks. Significant contributions to these efforts have already been made by the US and UK governments, the European Union, the OECD, the G7 (through the Hiroshima AI process), and others.
To build on these efforts, further work is needed on safety standards and evaluation to ensure that frontier AI models are developed and deployed responsibly. This forum provides a vehicle for cross-organizational discussion and action on AI safety and responsibility.
The Forum will focus on three key areas over the next year to support the safe and responsible development of frontier AI models:
- Identifying best practices: Promote knowledge sharing and best practices among industry, governments, civil society, and academia, with a focus on safety standards and practices that mitigate a wide range of potential risks.
- Promoting AI safety research: Support the AI safety ecosystem by identifying the most important open research questions on AI safety. The Forum will coordinate research to advance these efforts in areas such as adversarial robustness, mechanistic interpretability, scalable oversight, independent research access, emergent behaviors, and anomaly detection. An initial focus will be developing and sharing a public library of technical evaluations and benchmarks for frontier AI models.
- Facilitating information sharing among companies and governments: Establish trusted, secure mechanisms for sharing information about AI safety and risks among companies, governments, and relevant stakeholders. The Forum will follow best practices in responsible disclosure drawn from areas such as cybersecurity.
Kent Walker, President of Global Affairs, Google & Alphabet, said: “We’re excited to work together with other leading companies, sharing technical expertise to promote responsible AI innovation. We’re all going to need to work together to make sure AI benefits everyone.”
Brad Smith, Vice Chair and President of Microsoft, said: “Companies creating AI technology have a responsibility to ensure that it is safe, secure, and remains under human control. This initiative is an important step in bringing the technology industry together to advance AI responsibly and tackle the challenges ahead.”
Anna Makanju, Vice President of Global Affairs at OpenAI, said: “Advanced AI technologies have the potential to bring tremendous benefits to society, and realizing this potential requires oversight and governance. It is vital that AI companies align on common ground and advance thoughtful, adaptable safety practices. This forum is well-positioned to act quickly to advance the state of AI safety.”
Dario Amodei, CEO of Anthropic, said: “At Anthropic, we believe that AI has the potential to fundamentally change how the world works. We are excited to work with industry, civil society, government, and academia to advance the safe and responsible development of the technology. The Frontier Model Forum will play a vital role in coordinating best practices and sharing research on frontier AI safety.”