The UK’s National Cyber Security Centre (NCSC), the US Cybersecurity and Infrastructure Security Agency (CISA) and international agencies from 16 other countries have released new guidelines for the security of artificial intelligence systems.
The Guidelines for Secure AI System Development are aimed primarily at guiding developers through the design, development, deployment and operation of AI systems, ensuring that security remains a core component throughout the lifecycle. However, other stakeholders in AI projects should also find this information useful.
These guidelines were published shortly after world leaders committed to the safe and responsible development of artificial intelligence at the AI Safety Summit in early November.
Overview: Guidelines for developing secure AI systems
The Guidelines for Secure AI System Development set out recommendations to ensure that AI models, whether built from scratch or built on top of existing models or third-party APIs, “function as intended, are available when needed, and work without revealing sensitive data to unauthorized parties.”
Key to this is the “secure by design” approach advocated in existing frameworks by the NCSC, CISA, the US National Institute of Standards and Technology, and various other international cybersecurity agencies. Principles from these frameworks include:
- Take ownership of security outcomes for customers.
- Embrace radical transparency and accountability.
- Build organizational structure and leadership so that secure by design is a top business priority.
According to the NCSC, 21 government agencies and ministries from a total of 18 countries have confirmed they will endorse and co-seal the new guidelines. These include the US National Security Agency and the Federal Bureau of Investigation, as well as the Canadian Centre for Cyber Security, the French Cybersecurity Agency, Germany’s Federal Office for Information Security, the Cyber Security Agency of Singapore, and Japan’s National Center of Incident Readiness and Strategy for Cybersecurity.
NCSC Chief Executive Lindy Cameron said in a press release: “We know that AI is developing at an incredible pace and that keeping up will require concerted international action across governments and industry. This is an important step in creating a truly global shared understanding of the cyber risks and mitigation strategies for AI, ensuring that security is a core requirement throughout development, rather than an afterthought.”
Securing the four key stages of the AI development lifecycle
The guidelines for secure AI system development are organized into four sections, each addressing a different stage of the AI system development lifecycle: secure design, secure development, secure deployment, and secure operation and maintenance.
- Secure design covers the design stage of the AI system development lifecycle. It considers various topics and trade-offs in system and model design, and emphasizes the importance of being aware of risks and performing threat modeling.
- Secure development covers the development stage of the AI system lifecycle. Recommendations include securing the supply chain, maintaining thorough documentation, and effectively managing assets and technical debt.
- Secure deployment addresses the deployment stage of AI systems. Guidelines here include protecting infrastructure and models from compromise, threat and loss, establishing processes for incident management, and adopting responsible release principles.
- Secure operation and maintenance contains guidance on the post-deployment operation and maintenance of AI models. It covers aspects such as effective logging and monitoring, managing updates, and sharing information responsibly.
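To illustrate the kind of logging and monitoring the secure operation and maintenance stage calls for, here is a minimal Python sketch of an audited inference wrapper. Everything in it (the `model_predict` stand-in, the policy limit, the log fields) is a hypothetical example, not something specified in the guidelines themselves:

```python
import json
import logging
import time

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("ai-audit")

# Hypothetical policy limit for illustration; not taken from the guidelines.
MAX_INPUT_CHARS = 2000


def model_predict(prompt: str) -> str:
    """Stand-in for a real model call."""
    return prompt.upper()


def audited_predict(prompt: str, user_id: str) -> str:
    """Wrap a model call with audit logging: record who asked what, when,
    and how long inference took, and reject inputs that break policy."""
    if len(prompt) > MAX_INPUT_CHARS:
        logger.warning(json.dumps(
            {"event": "input_rejected", "user": user_id,
             "reason": "input exceeds policy limit"}))
        raise ValueError("input exceeds policy limit")

    start = time.monotonic()
    output = model_predict(prompt)
    logger.info(json.dumps({
        "event": "inference",
        "user": user_id,
        "input_chars": len(prompt),
        "latency_ms": round((time.monotonic() - start) * 1000, 2),
    }))
    return output
```

Structured (JSON) log lines like these make post-deployment monitoring and incident investigation far easier than free-form messages, which is the practical point behind the logging guidance.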
Guidance for all AI systems and related stakeholders
The guidelines apply to all types of AI systems, not just the “frontier” models that were heavily discussed at the AI Safety Summit hosted in the UK on 1-2 November 2023. They are also relevant to professionals working in and around artificial intelligence, including developers, data scientists, managers, decision-makers and other AI “risk owners.”
“Although we primarily address this guideline to providers of AI systems who are using models hosted by an organization (or are using external APIs), we urge all stakeholders to read these guidelines to help them make informed decisions about the design, development, deployment and operation of their AI systems,” the NCSC said.
The Guidelines for Secure AI System Development align with the G7 Hiroshima AI Process, the US’s voluntary AI commitments and the Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence issued at the end of October 2023.
Together, these signify a growing recognition among world leaders of the importance of identifying and mitigating the risks posed by artificial intelligence, especially in the wake of the explosive growth of generative AI.
Building on the results of the AI Safety Summit
During the AI Safety Summit, held at the historic Bletchley Park site in Buckinghamshire, UK, representatives from 28 countries signed the Bletchley Declaration on AI safety, which underlines the importance of designing and deploying AI systems safely and responsibly, with an emphasis on collaboration and transparency.
The declaration recognizes the need to address the risks associated with cutting-edge AI models, particularly in sectors such as cybersecurity and biotechnology, and advocates increased international cooperation to ensure the safe, ethical and beneficial use of AI.
Michelle Donelan, the UK secretary of state for science, innovation and technology, said the newly published guidelines will “put cybersecurity at the heart of AI development” from inception to deployment.
“Just weeks after we brought world leaders together at Bletchley Park to reach the first international agreement on safe and responsible AI, we are once again uniting nations and companies in this truly global effort,” Donelan said in the NCSC press release.
“In doing so, we are advancing our mission to harness this decade-defining technology and seize its potential to transform our NHS, revolutionize our public services and create the new, high-skilled, high-paid jobs of the future.”
Cybersecurity industry response to these AI guidelines
The publication of the AI guidelines has been welcomed by cybersecurity experts and analysts.
Toby Lewis, Darktrace’s global head of threat analysis, said the guidance was a “welcome blueprint” for secure and reliable artificial intelligence systems.
“I’m glad to see the guidelines emphasize the need for AI providers to secure their data and models from attackers, and for AI users to apply the right AI for the right task,” Lewis said in an emailed statement. “Those building AI should go further and create trust by taking users on the journey of how their AI reaches its answers. With security and trust, we’ll realize the benefits of AI faster and for more people.”
Meanwhile, Georges Anidjar, Informatica’s vice president for Southern Europe, said the publication of the guidelines marks “a significant step towards addressing the cybersecurity challenges intrinsic to this rapidly evolving field.”
Anidjar said in an emailed statement: “This international commitment recognizes the critical intersection between AI and data security, reinforcing the need for a comprehensive and responsible approach to both innovation and the protection of sensitive information. It is encouraging to see global recognition of the importance of embedding security measures at the core of AI development, fostering a safer digital environment for businesses and individuals alike.”
He continued: “Building security into AI systems from their inception resonates deeply with the principles of secure data management. As organizations increasingly harness the power of AI, it is imperative that the data underpinning these systems is handled with the utmost security and integrity.”