Google announces expansion of vulnerability bounty program (VRP) Provides compensation to researchers who discover tailored attack scenarios for generative artificial intelligence (AI) systems to enhance the safety and security of AI.
Google’s Rory Richardson and Royal Hansen said, “Generative AI raises new concerns that are different from traditional digital security, including the potential for unwarranted bias, model manipulation, and misinterpretation (illusion) of data.” cause it.” Said.
Some of the categories covered include These include instant injection, leaking sensitive data from training datasets, model manipulation, adversarial perturbation attacks that cause misclassification, and model theft.
Notably, in early July of this year, Google announced that AI red team Helps address threats to AI systems as part of the Secure AI Framework (SAIF).
As part of our efforts to secure AI, we also announced efforts to strengthen the AI supply chain through existing open source security initiatives such as Supply Chain Levels for Software Artifacts (SLSA) and Sigstore.
“Digital signatures such as Sigstore allow users to verify that the software has not been tampered with or replaced,” says Google Said.
“Metadata such as SLSA provenance, which describes what the software is about and how it was built, enables consumers to ensure license compatibility, identify known vulnerabilities, and detect more advanced threats. ”
Development is done as OpenAI announced New in-house readiness team to “track, assess, predict, and protect against” catastrophic risks to generative AI across cybersecurity, chemical, biological, radiological, and nuclear (CBRN) threats.
The companies, along with Anthropic and Microsoft, $10 million AI Safety Fundfocuses on promoting research in the field of AI safety.