Mon. Dec 23rd, 2024
Microsoft Announces Ai Bug Bounty Program

Microsoft is offering up to $15,000 to bug hunters who use the AI-powered Bing Experience to accurately identify critical or important severity vulnerabilities.

“new Microsoft AI Bounty Program This is an issue in AI security research and Microsoft’s Vulnerability Severity Classification for AI Systems,” To tell Lynn Miyashita, Technical Program Manager, Microsoft Security Response Center.

Microsoft AI Bug Bounty Program

Microsoft is asking bug hunters to investigate the AI-powered Bing experience. bing.com The same goes for Bing integration in the browser (including Bing Chat for Enterprise), and in the iOS and Android versions of the Microsoft Start (news aggregator) and Skype (video conferencing) mobile apps.

You must report vulnerabilities that can be exploited to:

  • Manipulate the model’s response to individual inference requests, but do not change the model itself (“inference operations”)
  • Manipulating the model during the training phase (“model manipulation”)
  • Infer information about the model’s training data, architecture and weights, or input data during inference (“disclosure of inference information”)
  • Affect/change Bing chat behavior in a way that affects all other users
  • Change Bing’s chat behavior by adjusting the configuration displayed on the client or server
  • Remove Bing cross-conversation memory protection and history deletion
  • Revealing Bing’s inner workings and prompts, decision-making processes, and sensitive information
  • Bypass Bing’s chat mode session limits and restrictions/rules

Here is a list of out-of-scope submissions and vulnerabilities: pretty You should think carefully before starting. For example, AI command/prompt injection attacks that generate content that is only visible to the attacker are not eligible for bounties.

As always, the quality of the submitted report will also affect the amount of your reward. A critical issue that allows the model to be manipulated can result in bug hunters being awarded a bounty of $6,000 for a poor-quality report and $15,000 for a high-quality report (i.e., for reproducing the vulnerability). information, authoritative proof of concept, and detailed and accurate analysis of vulnerabilities).

Investigate security holes in AI systems

With the advent of publicly available AI systems based on large-scale language models (LLMs), there is an urgent need to discover vulnerabilities before they are discovered and exploited by malicious individuals. .

Earlier this year, DEF CON’s AI Village hosted a public assessment of LLM aimed at finding bugs in AI models and uncovering potential for exploitation.