Six Steps Toward AI Security

As every company looks to develop its AI strategy in the wake of ChatGPT, that work quickly raises the question, “What about security?”

Some people may be overwhelmed by the prospect of securing new technology. The good news is that the policies and practices currently in place are a good starting point.

In fact, the way forward lies in extending the existing foundations of enterprise and cloud security. It is a journey that can be summarized in these six steps:

  • Expand your threat analysis
  • Expand the reaction mechanism
  • Secure your data supply chain
  • Scale your efforts with AI
  • Be transparent
  • Create continuous improvement
AI security builds on the protections businesses already have.

Survey the expanded landscape

The first step is to get used to your new environment.

Security now needs to cover the AI development lifecycle. This includes new attack surfaces such as training data, models, and the people and processes that use them.

Extrapolate from known threat types to identify and predict new ones. For example, an attacker with access to the data used to train a model on a cloud service could try to poison that data and change the AI model's behavior.
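
To make that example concrete, here is a toy sketch of label-flipping data poisoning; the synthetic dataset, the logistic-regression model, and the 30% flip rate are all assumptions chosen purely for illustration, not anything the article prescribes.

```python
# Toy sketch of training-data poisoning (illustrative assumptions throughout):
# an attacker with write access to the training set flips a share of labels.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clean = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

rng = np.random.default_rng(0)
flip = rng.choice(len(y_tr), size=int(0.3 * len(y_tr)), replace=False)
y_bad = y_tr.copy()
y_bad[flip] = 1 - y_bad[flip]          # silently corrupt 30% of the labels

poisoned = LogisticRegression(max_iter=1000).fit(X_tr, y_bad)

print("clean model accuracy:   ", round(clean.score(X_te, y_te), 3))
print("poisoned model accuracy:", round(poisoned.score(X_te, y_te), 3))
```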

Security researchers and red teams who have investigated vulnerabilities in the past will once again be a great resource. Identifying and addressing new threats requires access to AI systems and data, and strong collaboration with data science staff.

Expand your defenses

Once you have a clear picture of the threats, it’s time to define how to defend against them.

Carefully monitor the performance of your AI models. Just as you expect traditional security defenses to be probed and sometimes breached, expect AI models to drift and new attack surfaces to open up.
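
One deliberately simple way to sketch such monitoring is to compare live prediction scores against a reference window captured at deployment time. The two-sample Kolmogorov-Smirnov test below, the window sizes, and the alpha threshold are assumptions for illustration, not a prescribed design.

```python
# Minimal drift-monitoring sketch; thresholds and window sizes are assumptions.
import numpy as np
from scipy.stats import ks_2samp

def scores_have_drifted(reference_scores, live_scores, alpha=0.01):
    """Flag drift when live scores stop resembling the reference window."""
    _stat, p_value = ks_2samp(reference_scores, live_scores)
    return p_value < alpha

rng = np.random.default_rng(1)
reference = rng.beta(2, 5, size=5000)   # scores captured at deployment time
live = rng.beta(2, 3, size=5000)        # today's scores, subtly shifted

if scores_have_drifted(reference, live):
    print("ALERT: prediction drift detected; review the model and its inputs")
```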

This work also builds on the PSIRT (product security incident response team) practices already in place.

For example, NVIDIA released a product security policy that covers its AI portfolio. Several organizations, including the Open Worldwide Application Security Project (OWASP), have released AI-aligned implementations of key security elements, such as the common vulnerability enumeration methods used to identify traditional IT threats.

Adapt and apply traditional defenses to your AI models and workflows, including:

  • Separating the network control plane and data plane
  • Removing unsafe or personally identifiable data
  • Using zero-trust security and authentication
  • Defining appropriate event logs, alerts, and tests (see the sketch after this list)
  • Configuring flow controls where needed
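
As one hedged example of the logging item above, the sketch below wraps inference in structured audit records. The field names, the sklearn-style predict() call, and the version attribute are illustrative assumptions, not a standard.

```python
# Illustrative audit-logging wrapper; field names and model API are assumed.
import json, logging, time, uuid

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("model-audit")

def logged_predict(model, features, user_id):
    """Run one inference and emit a structured, alert-ready audit record."""
    start = time.time()
    prediction = model.predict([features])[0]   # assumes an sklearn-style API
    log.info(json.dumps({
        "event": "inference",
        "request_id": str(uuid.uuid4()),        # correlates logs per request
        "user": user_id,                        # zero trust: tie calls to identities
        "model_version": getattr(model, "version", "unknown"),
        "latency_ms": round((time.time() - start) * 1000, 1),
        "prediction": str(prediction),
    }))
    return prediction
```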

Extend existing safeguards

Protect datasets used to train AI models. They are valuable and vulnerable at the same time.

Again, companies can lean on existing practices. Create secure data supply chains, similar to those built to secure software delivery channels. And establish access controls for training data, just as you protect other sensitive internal data.

You may need to fill in some gaps. Security teams know how to use hashes to verify that no one has modified an application's code, but that process can be difficult to scale to the petabyte-sized datasets used for AI training.
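
One way to start closing that gap, sketched below under my own assumptions rather than as a tool the article names, is a per-file digest manifest: each file is hashed in streaming chunks so nothing has to fit in memory, and the manifest is re-verified before every training run. At petabyte scale this would still need parallelization, for example per-shard Merkle trees.

```python
# Sketch of a dataset-integrity manifest (an illustration, not a named tool).
import hashlib, json
from pathlib import Path

def sha256_file(path, chunk_size=1 << 20):
    """Stream a file through SHA-256 so large files never load into memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def build_manifest(dataset_dir):
    """Map every file in the dataset to its digest; diff this before training."""
    return {str(p): sha256_file(p)
            for p in sorted(Path(dataset_dir).rglob("*")) if p.is_file()}

# Example usage (paths are hypothetical):
# Path("manifest.json").write_text(json.dumps(build_manifest("train_data/"), indent=2))
```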

The good news is that researchers recognize the need and are working to develop tools to address it.

Extend security with AI

AI is not only a new attack surface to defend against, but also a new and powerful security tool.

Machine learning models can detect subtle changes in mountains of network traffic that are invisible to humans. This makes AI the ideal technology to prevent many of the most widely used attacks, including identity theft, phishing, malware, and ransomware.

NVIDIA Morpheus is a cybersecurity framework for building AI applications that create, read, and update digital fingerprints that scan for many types of threats. And combined with generative AI, Morpheus enables a new way to detect spear phishing attempts.
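
The sketch below illustrates the general idea behind such digital fingerprinting, not the Morpheus implementation itself: learn what is normal for one account, then flag events that deviate. The features, the synthetic distributions, and the choice of IsolationForest are all assumptions for the example.

```python
# Illustrative "digital fingerprint" idea (not the Morpheus implementation):
# model one account's normal behavior, then flag events that deviate from it.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Historical events for one account: [hour_of_day, log_bytes_out, hosts_contacted]
normal = np.column_stack([
    rng.normal(10, 1.5, 1000),   # typically active mid-morning
    rng.normal(6, 0.5, 1000),    # typical outbound data volume (log scale)
    rng.poisson(3, 1000),        # hosts contacted per session
])
fingerprint = IsolationForest(random_state=0).fit(normal)

# Two new events: one routine, one exfiltration-like burst at 03:00.
events = np.array([[10.2, 6.1, 3.0], [3.0, 11.0, 40.0]])
print(fingerprint.predict(events))  # 1 = matches fingerprint, -1 = anomaly
```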

Machine learning is a powerful tool that spans many use cases in security.

Security values clarity

Transparency is a key element of any security strategy. Let customers know about the AI security policies and practices you have put in place.

For example, NVIDIA publishes details about its AI models on NGC, its hub for accelerated software. Called model cards, these documents act like truth-in-lending statements, describing the model, the data it was trained on, and any constraints on its use.

NVIDIA uses an expanded set of fields on model cards so that users have a clear understanding of the history and limitations of neural networks before deploying them in production. This helps strengthen security, establish trust, and ensure model robustness.
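
For a sense of what such a record contains, here is a minimal model-card-style structure. The field names and values are illustrative assumptions, not NVIDIA's actual NGC schema.

```python
# Minimal model-card-style record; fields and values are illustrative,
# not NVIDIA's actual NGC schema.
model_card = {
    "model_name": "fraud-detector",     # hypothetical model
    "version": "1.2.0",
    "intended_use": "Risk-scoring card-not-present transactions",
    "training_data": "Internal transactions 2021-2023, PII removed",
    "evaluation": {"metric": "AUC", "value": 0.94, "test_set": "held-out Q4 2023"},
    "limitations": [
        "Not validated outside North American markets",
        "Degrades on transactions under one dollar",
    ],
    "license": "internal-use-only",
}
```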

Define the journey, not the destination

These six steps are just the beginning of your journey. These processes and policies need to evolve.

For example, the emerging practice of confidential computing extends security across the cloud services where AI models are often trained and run in production.

The industry is already starting to see basic versions of code scanners for AI models. They are signs of things to come. Teams need to be on the lookout for best practices and tools as they emerge.
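
As a taste of what such scanners do, the simplified sketch below walks the opcodes of a pickle-serialized model file and flags imports of modules that can execute code. Real scanners cover far more cases (newer pickle protocols resolve imports on the stack, which this sketch does not follow), and the module blocklist here is an assumption.

```python
# Simplified model-file scanner sketch: flag pickle opcodes that import
# modules capable of executing code. The blocklist is an assumption.
import pickletools

DANGEROUS_MODULES = {"os", "posix", "nt", "subprocess", "builtins", "sys"}

def scan_pickle(path):
    """Return suspicious GLOBAL imports found in a pickle file's opcodes."""
    findings = []
    with open(path, "rb") as f:
        data = f.read()
    for opcode, arg, _pos in pickletools.genops(data):
        if opcode.name == "GLOBAL" and arg:          # arg is "module name"
            module = str(arg).split()[0].split(".")[0]
            if module in DANGEROUS_MODULES:
                findings.append(str(arg))
    return findings

# Example usage (path is hypothetical):
# if scan_pickle("model.pkl"):
#     raise RuntimeError("model file imports code-execution modules")
```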

Along the way, the community should share what it learns. A great example of this happened during the recent Generative Red Team Challenge.

The ultimate goal is a collective defense. We are all on the path to AI security together, one step at a time.