Over the past year, industry has driven significant advances in AI capabilities. As these advances accelerate, new academic research on AI safety is needed. To address this gap, the Forum and its philanthropic partners have created a new AI Safety Fund to support independent researchers around the world affiliated with academic institutions, research institutions, and start-ups. Initial funding for the AI Safety Fund is made possible through the generosity of Anthropic, Google, Microsoft, and OpenAI, along with philanthropic partners the Patrick J. McGovern Foundation, the David and Lucile Packard Foundation,[^footnote-1] Eric Schmidt, and Jaan Tallinn. Combined, this initial funding will exceed $10 million. We look forward to further contributions from other partners.
Earlier this year, Forum members signed voluntary AI commitments at the White House, which include a pledge to facilitate third-party discovery and reporting of vulnerabilities in AI systems. The Forum believes the AI Safety Fund is an important part of delivering on this commitment, providing the external community with funding to better assess and understand frontier systems. The global debate on AI safety and the general AI knowledge base would benefit from a broader range of voices and perspectives.
The primary focus of the fund is to support the development of new model evaluations and techniques for red teaming AI models, helping to develop and test evaluation techniques for potentially dangerous capabilities of frontier systems. We believe that increased funding in this area will help raise safety and security standards and provide insight into the mitigations and controls that industry, government, and civil society need to meet the challenges posed by AI systems.
The fund plans to issue its first call for proposals in the coming months. The Meridian Institute will administer the fund, and its activities will be supported by an advisory committee composed of independent external experts, experts from AI companies, and individuals with experience in grantmaking.