Four Key Takeaways From the New Global AI Security Guidelines

AI security guidelines developed by the US Cybersecurity and Infrastructure Security Agency (CISA) and the UK National Cyber Security Centre (NCSC), published on Monday, have the backing of 16 other countries. The 20-page document was created in collaboration with experts from companies including Google, Amazon, OpenAI, and Microsoft, and, according to the NCSC, is the first document of its kind to achieve global consensus.

NCSC CEO Lindy Cameron said in an official statement: “We know that AI is developing at a phenomenal pace and there is a need for concerted international action, across governments and industry, to keep up. These guidelines mark a significant step in shaping a truly global, common understanding of the cyber risks and mitigation strategies around AI to ensure that security is not a postscript to development but a core requirement throughout.”

Here are four key takeaways from this publication:

1. “Secure by design” and “secure by default” are prioritized

Emphasized throughout the document are the principles of “secure by design” and “secure by default”: a proactive approach to protecting AI products from attack. The authors urge AI developers to prioritize security alongside functionality and performance throughout their decision-making, such as when choosing a model architecture or training dataset. They also recommend that products default to the most secure options and that the risks of alternative configurations be communicated clearly to users. Ultimately, the guidelines say, developers should take responsibility for downstream outcomes rather than relying on customers to take the reins on security.

Key excerpt: “Users (whether ‘end users’ or providers incorporating an external AI component) do not typically have sufficient visibility and/or expertise to fully understand, evaluate or address risks associated with systems they are using. As such, in line with the ‘secure by design’ principle, providers of AI components should take responsibility for the security outcomes of users further down the supply chain.”
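To make “secure by default” concrete, here is a minimal Python sketch; the configuration fields, names, and values are hypothetical, not taken from the guidelines. The idea is that the defaults are the most restrictive options, and weakening any of them is an explicit act that surfaces a warning.

```python
from dataclasses import dataclass
import warnings


@dataclass(frozen=True)
class ModelServingConfig:
    """Hypothetical serving config whose defaults are the most secure options."""
    require_auth: bool = True            # authenticated access by default
    log_prompts: bool = False            # don't retain user prompts by default
    allow_raw_file_access: bool = False  # no filesystem access for the model
    max_output_tokens: int = 1024        # bounded output by default


def make_config(**overrides) -> ModelServingConfig:
    """Build a config, surfacing the risk of any less-secure override."""
    risky = {"require_auth": False, "allow_raw_file_access": True, "log_prompts": True}
    for key, value in overrides.items():
        if key in risky and value == risky[key]:
            # "Secure by default": weakening a setting is allowed, but the
            # risk is communicated clearly rather than silently accepted.
            warnings.warn(f"Overriding secure default '{key}'; review the risks first.")
    return ModelServingConfig(**overrides)
```

The point is the shape of the interface: users get the secure behavior without asking for it, and the cost of opting out is made visible.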

2. Complex supply chains require more attention

AI tool developers often rely on third-party components such as base models, training datasets, and APIs when designing their products. An extensive network of suppliers widens the attack surface, where one “weak link” can compromise the security of the whole product. The guidelines recommend that developers evaluate these risks when deciding whether to source a component from a third party or build it in-house. When working with third parties, the guidelines state, developers should vet and monitor their suppliers’ security posture, hold suppliers to the same security standards as their own organization, and scan and isolate imported third-party code.

Key excerpt: “If security criteria are not met, be prepared to fail over to alternate solutions for mission-critical systems. You can use resources like the NCSC’s supply chain guidance, and frameworks such as Supply Chain Levels for Software Artifacts (SLSA), for tracking attestations of the supply chain and software development life cycles.”
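As one illustration of vetting and failover for third-party components, here is a hedged sketch; the manifest, file names, and digest are invented for the example. An artifact’s hash is checked against a value pinned when the supplier was vetted, and a vetted fallback is used if verification fails.

```python
import hashlib
from pathlib import Path

# Hypothetical pinned manifest: artifact name -> expected SHA-256 digest,
# recorded at the time the third-party component was vetted.
PINNED_DIGESTS = {
    "base-model.bin": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}


def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file, reading in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


def load_component(path: Path, fallback: Path) -> Path:
    """Return the artifact to load, failing over if verification fails."""
    expected = PINNED_DIGESTS.get(path.name)
    if expected is not None and sha256_of(path) == expected:
        return path
    # Verification failed: fall back to a vetted alternative rather than
    # loading an untrusted artifact into a mission-critical system.
    return fallback
```

In a real pipeline the pinned digests would come from signed attestations (for example, via an SLSA-style framework) rather than a hard-coded dictionary.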

3. AI faces unique risks

AI-specific threats such as prompt injection attacks and data poisoning call for unique security considerations, several of which are highlighted in the CISA and NCSC guidelines. Components of a “secure by design” approach include integrating guardrails around model outputs to prevent leaks of sensitive data, and restricting the actions of AI components used for tasks such as file editing. Developers should incorporate AI-specific threat scenarios into their testing and monitor for user inputs that attempt to exploit the system.

Key excerpt: “The term ‘adversarial machine learning’ (AML) is used to describe the exploitation of fundamental vulnerabilities in ML components, including hardware, software, workflows and supply chains. AML enables attackers to cause unintended behaviours in ML systems, which can include:

  • affecting the model’s classification or regression performance
  • allowing users to perform unauthorised actions
  • extracting sensitive model information.”
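A rough sketch of the kind of guardrails described above, with invented regex patterns standing in for a real policy: one filter flags inputs that resemble prompt-injection attempts, and another redacts output matching sensitive-data patterns before it reaches the user.

```python
import re

# Hypothetical patterns for data that should never leave the system and for
# common prompt-injection phrasing; a real policy would be far broader.
SENSITIVE_OUTPUT = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),    # API-key-like strings
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # SSN-like strings
]
INJECTION_HINTS = [re.compile(r"ignore (all|previous) instructions", re.I)]


def screen_input(prompt: str) -> str:
    """Log-and-continue monitor for inputs that look like injection attempts."""
    for pattern in INJECTION_HINTS:
        if pattern.search(prompt):
            # Stand-in for real telemetry/alerting.
            print(f"ALERT: possible prompt injection: {prompt[:60]!r}")
    return prompt


def guard_output(text: str) -> str:
    """Redact model output that matches known sensitive-data patterns."""
    for pattern in SENSITIVE_OUTPUT:
        text = pattern.sub("[REDACTED]", text)
    return text
```

Pattern matching alone cannot stop a determined attacker, which is why the guidelines pair such guardrails with restricting what actions the AI component is permitted to take in the first place.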

4. AI security needs to be continuous and collaborative

The guidelines outline best practices across four lifecycle stages: design, development, deployment, and operation and maintenance. The fourth stage focuses on continuously monitoring deployed AI systems for changes in model behavior and for suspicious user input. “Secure by design” remains a core principle of software updates, and the guidelines recommend automatic updates by default. Finally, CISA and NCSC recommend that developers leverage feedback and information-sharing with the wider AI community to continually improve their systems.

Key excerpt: “When needed, escalate issues to the wider community, for example publishing bulletins responding to vulnerability disclosures, including detailed and complete common vulnerability enumeration. Take action to mitigate and remediate issues quickly and appropriately.”
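As one illustration of the post-deployment monitoring the guidelines call for (the labels, baseline, and threshold here are invented for the example), a simple check might compare the distribution of a model’s recent predictions against a vetted baseline and alert when the two diverge.

```python
from collections import Counter


def class_distribution(labels: list[str]) -> dict[str, float]:
    """Empirical distribution of predicted labels."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {label: n / total for label, n in counts.items()}


def drift_score(baseline: dict[str, float], current: dict[str, float]) -> float:
    """Total variation distance between two prediction distributions."""
    labels = set(baseline) | set(current)
    return 0.5 * sum(abs(baseline.get(l, 0.0) - current.get(l, 0.0)) for l in labels)


# Hypothetical scheduled check: if today's prediction mix diverges from the
# vetted baseline, flag it for human review.
baseline = {"benign": 0.97, "flagged": 0.03}
today = class_distribution(["benign"] * 90 + ["flagged"] * 10)
if drift_score(baseline, today) > 0.05:
    print("Model behaviour drifted; investigate before trusting outputs.")
```

Distribution drift is only one signal; the guidelines also emphasize monitoring inputs for abuse attempts and feeding confirmed issues back into the disclosure and update process described above.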