Anthropic is launching a program to fund the development of new types of benchmarks capable of evaluating the performance and impact of AI models, including generative models like its own Claude.
Anthropic’s program, announced Monday, will pay third-party organizations that can “effectively measure the advanced capabilities of AI models,” as the company put it in a blog post. Interested organizations can submit applications at any time.
“Our investments in these assessments are intended to advance the overall field of AI safety and provide valuable tools to benefit the entire ecosystem,” the company wrote. “Developing high-quality, safety-related assessments remains challenging, and demand continues to outstrip supply.”
As we’ve noted before, AI has a benchmarking problem: Today’s most commonly cited AI benchmarks don’t adequately capture how the average person actually uses the systems they test, and some benchmarks, especially those released in the early days of modern generative AI, raise questions about whether they’re measuring what they purport to measure, given their age.
Anthropic’s proposed solution, which is very high-level and perhaps harder than it sounds, is to create challenging benchmarks focused on AI security and societal impact, supported by new tools, infrastructure, and methods.
The company is specifically seeking tests that evaluate a model’s ability to accomplish tasks such as carrying out cyberattacks, “powering” weapons of mass destruction (nuclear weapons, for example), and manipulating or deceiving people (e.g., through deepfakes and disinformation). As for AI risks related to national security and defense, Anthropic said it is working on an “early warning system” to identify and assess such risks, though the blog post did not specify what the system would entail.
Anthropic also said the new program is intended to support research into benchmarks and “end-to-end” tasks that probe AI’s potential for aiding scientific research, conversing in multiple languages, and mitigating ingrained bias and toxicity.
To make all this happen, Anthropic envisions a new platform where subject-matter experts can develop their own evaluations and run large-scale trials of models involving “thousands” of users. The company says it has hired a full-time coordinator for the program and may purchase or expand projects it believes have the potential to scale.
“We offer a range of funding options tailored to the needs and stage of each project,” Anthropic said in the post, though an Anthropic spokesperson declined to provide further details about those options. “Teams will have the opportunity to engage directly with Anthropic’s domain experts from the Frontier Red Team, Fine-Tuning, Trust & Safety, and other relevant teams.”
Anthropic’s efforts to support new AI benchmarks are commendable — assuming, of course, that it has enough funding and talent committed to them — but it may be hard to fully trust the company given its commercial ambitions in the AI race.
In its blog post, Anthropic also made clear that it wants certain evaluations it funds to align with the AI safety classifications it has developed (with some input from third parties such as METR, a nonprofit AI research group). That is within the company’s prerogative, but it also means that applicants to the program could be forced to accept definitions of “safe” or “unsafe” AI that they may not agree with.
Some in the AI community are also likely to take issue with Anthropic’s references to “catastrophic” and “deceptive” AI risks, such as the risks posed by nuclear weapons. Many experts say there is little evidence to suggest that AI as we know it will gain the ability to end the world or outsmart humans anytime soon, and they add that claims of impending “superintelligence” only distract from today’s pressing AI regulatory issues, such as AI’s tendency to hallucinate.
In its post, Anthropic wrote that it hopes the program will be a “catalyst for progress toward a future where comprehensive AI evaluation is the industry standard.” That is a goal many open, company-unaffiliated efforts to create better AI benchmarks can sympathize with, but it remains to be seen whether those efforts will be willing to work with an AI vendor whose ultimate loyalty is to its shareholders.