NIST Researchers Warn About the Biggest Threats to AI Security

As dozens of states race to establish standards for how their agencies use AI to increase efficiency and streamline public services, researchers at the U.S. National Institute of Standards and Technology warned in a report released last week that artificial intelligence systems, which rely on large amounts of data to perform tasks, can malfunction when exposed to unreliable data.

The report, part of a broader effort by the institute to help develop trustworthy AI, found that cybercriminals can intentionally confuse or “poison” AI systems by exposing them to fraudulent data, causing them to malfunction. The study also states that there is no one-size-fits-all defense that developers and cybersecurity experts can implement to protect their AI systems.

“Data is very important for machine learning,” NIST computer scientist Apostol Vassilev, one of the authors of the publication, told StateScoop. “‘Garbage in, garbage out’ is a well-known catchphrase in the industry.”

To perform tasks like driving a vehicle autonomously or interacting with customers as an online chatbot, AI is trained on vast amounts of data, which helps the technology predict how best to respond in different situations. For example, self-driving cars are trained on images of highways and roads with road signs, among other datasets, while chatbots can be exposed to recordings of online conversations.

The researchers warned that some AI training data, such as websites containing inaccurate information or unwanted interactions with the public, may be unreliable and may cause AI systems to behave in unintended ways. For example, a chatbot could learn to respond with abusive or racist language if its guardrails are bypassed by carefully crafted malicious prompts.

Joseph Thacker, a principal AI engineer and security researcher at AppOmni, whose security management software is used by state and local governments, said it’s important to consider the security protocols needed to protect against potential attacks such as those outlined in the NIST report.

“We need everyone’s help to secure it,” Thacker told StateScoop. “And I think people should think twice about that.”

‘Malice’

The NIST report outlined four types of attacks against AI (poisoning, evasion, privacy and abuse) and categorized them based on criteria such as the attacker’s goals and objectives, capabilities, and knowledge of the system.

Poisoning occurs when an AI system is trained on corrupted data. For example, this happens when many instances of inappropriate language slip into a conversation recording, and the chatbot interprets those instances as common enough occurrences to use in customer interactions.

“Using the example of generative AI, if you are malicious and try to change some of the input data that goes into the model during training, the model that classifies what is a cat, what is a dog and all the rest can in fact learn perturbations that cause it to misclassify,” Vassilev explained.
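
To make the poisoning idea concrete, here is a minimal sketch, offered purely as an illustration rather than code from the NIST report: it trains one classifier on clean labels and another on labels an attacker has partially flipped, then compares how often each misclassifies held-out data. The dataset and model choices are assumptions made for the demonstration.

```python
# Illustrative sketch only (not from the NIST report): flipping a share of the
# training labels ("poisoning") degrades a classifier's accuracy.
# Assumes numpy and scikit-learn are installed.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Model trained on clean labels.
clean_model = KNeighborsClassifier(n_neighbors=1).fit(X_train, y_train)

# Model trained on poisoned labels: an attacker flips 30% of the training labels.
rng = np.random.default_rng(0)
poisoned = y_train.copy()
flip = rng.choice(len(poisoned), size=int(0.3 * len(poisoned)), replace=False)
poisoned[flip] = 1 - poisoned[flip]
poisoned_model = KNeighborsClassifier(n_neighbors=1).fit(X_train, poisoned)

print("accuracy with clean training data:   ", clean_model.score(X_test, y_test))
print("accuracy with poisoned training data:", poisoned_model.score(X_test, y_test))
```

For a nearest-neighbor model like this one, test accuracy typically drops roughly in proportion to the share of labels the attacker managed to flip.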

But Thacker, who specializes in application security, hacking and AI, argued that while data poisoning is possible, its scope is limited to a tool’s training stage. Other types of attacks, such as evasion and privacy attacks carried out through prompt injection, can exploit a system after it is deployed and are therefore more likely, he said.

“If you can bypass the filter, that’s an attack on the system. You’re bypassing the protections that are in place,” Thacker said of prompt injection, in which a malicious party tricks the system into voluntarily providing someone else’s data.

Thacker said prompt injection attacks are aimed at forcing chatbots to provide sensitive training data that they are programmed to withhold.

“If you can extract data directly from the model, which is often trained on all the data on the internet, that often includes personal information from a lot of people,” Thacker said. “If you can take a large language model and make it output that sensitive information, that would be a violation of that person’s privacy.”
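
One common way practitioners try to reduce this kind of leak, sketched below as an illustration and not as a method described in the NIST report or by AppOmni, is to scan a model’s response for patterns that look like personal data before it reaches the user. The patterns and the redact_sensitive helper are hypothetical and far simpler than what a production system would need.

```python
# Illustrative output guardrail: redact strings that look like sensitive data
# (e.g., Social Security numbers or email addresses) before returning a response.
# The patterns here are hypothetical examples, not an exhaustive PII detector.
import re

SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),          # SSN-like pattern
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),   # email-like pattern
]

def redact_sensitive(response: str) -> str:
    """Replace anything matching a sensitive pattern with a placeholder."""
    for pattern in SENSITIVE_PATTERNS:
        response = pattern.sub("[REDACTED]", response)
    return response

print(redact_sensitive("The SSN on file is 123-45-6789; contact jane@example.com."))
# -> The SSN on file is [REDACTED]; contact [REDACTED].
```

Filters like this only catch what they are written to look for, which is part of why the report stresses that no single defense is sufficient.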

What can be done about it?

Vassilev said the biggest challenge for state and local governments is safely incorporating large language models into their workflows. He also cautioned government agencies against a false sense of security: while there are ways to mitigate attacks against AI, there is no foolproof way to protect it from being misled.

“You can’t just say, ‘Okay, I’ll take this model, apply this technique, and be done.’ You need to continually monitor, evaluate and respond to issues as they arise,” Vassilev said, acknowledging that researchers also need to develop better cybersecurity defenses. “In the meantime, you need to be aware of all of these things and continue to monitor them.”

Thacker, who helps technology companies find these kinds of vulnerabilities in their software, said there are some common-sense ways to protect against AI security threats, such as restricting access to sensitive data.

“Do not connect to systems that have access to sensitive data such as Social Security numbers or other personal information,” Thacker said. “If a government agency wants to help its employees work more efficiently using AI, such as ChatGPT or similar services, don’t [train it on] sensitive data, and don’t connect it to any system that allows access to that data.”
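
As a rough sketch of what that advice could look like in practice, and not an implementation from Thacker, AppOmni or NIST, an agency could put an allowlist between its AI assistant and internal data sources so the assistant can only reach sources explicitly approved as non-sensitive. The source names and the fetch_records helper below are hypothetical.

```python
# Hypothetical allowlist gate between an AI assistant and agency data sources:
# only sources explicitly approved as non-sensitive can be queried.
ALLOWED_SOURCES = {"public_faq", "office_hours", "press_releases"}

def fetch_records(source: str) -> list[str]:
    """Return records from an approved data source; refuse everything else."""
    if source not in ALLOWED_SOURCES:
        raise PermissionError(f"Source '{source}' is not approved for AI access")
    # ... look up and return records for the approved source here ...
    return []

fetch_records("public_faq")    # allowed
# fetch_records("hr_payroll")  # would raise PermissionError
```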

But Thacker also expressed optimism, predicting that AI security features will become more commonplace, similar to the widespread use of two-factor authentication.

“Many people use websites and [software-as-a-service] applications,” he said. “I think AI security will be integrated through the technology stack of traditional security, then cloud security, and then SaaS security.”


Written by Sophia Fox-Sowell

Sophia Fox-Sowell reports on artificial intelligence, cybersecurity and government regulation for StateScoop. She previously served as a multimedia producer for CNET, where she focused on private sector innovation in food production, climate change, and space through podcasts and video content. She earned a bachelor’s degree in anthropology from Wagner College and a master’s degree in media innovation from Northeastern University.