
Hardening algorithms against adversarial AI

The video shows hands rotating a 3D-printed turtle in front of an image classification system, and the results are what one might expect: terrapin, mud turtle, loggerhead. But then a new turtle with a different surface texture is presented. This time the results are more surprising: the algorithm consistently classifies the image as a rifle, not a turtle.

This demonstration was part of an experiment conducted last year by MIT's Computer Science and Artificial Intelligence Lab. Anish Athalye, a PhD candidate at MIT and an author of a paper based on this research, said at the time that these results have concerning implications for the technology underlying many of the advancements being made in AI.

“Our robust adversarial examples, of which the rifle/turtle is one demonstration, show that adversarial examples can be made to reliably fool ML models in the physical world, contributing to evidence indicating that real-world machine learning systems may be at risk,” Athalye told GCN.
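The MIT team's own technique is more sophisticated, but the core idea behind most adversarial examples is simple: nudge every input feature a tiny amount in the direction that most changes the model's output. A minimal sketch of that fast-gradient-sign idea on a toy linear classifier (the weights and features here are made up for illustration):

```python
import math

def predict(w, x):
    """Toy linear classifier: positive score -> 'turtle', negative -> 'rifle'."""
    return sum(wi * xi for wi, xi in zip(w, x))

def fgsm_perturb(w, x, eps):
    """Fast-gradient-sign-style attack: push each feature a small amount
    (eps) in the direction that lowers the 'turtle' score."""
    return [xi - eps * math.copysign(1.0, wi) for wi, xi in zip(w, x)]

# Made-up model weights and a "turtle" input the model gets right.
w = [0.4, -0.2, 0.1, 0.3]
x = [1.0, 0.5, 1.0, 1.0]
adv = fgsm_perturb(w, x, eps=1.0)

print(predict(w, x) > 0)    # → True: original classified as turtle
print(predict(w, adv) > 0)  # → False: small perturbation flips the label
```

In a real deep network the per-feature directions come from the gradient of the loss, and the perturbation can be small enough to be invisible to a human, which is what makes the turtle/rifle demonstration so striking.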

And this is not the only example of this kind of adversarial AI being demonstrated in a laboratory setting.

In 2016, researchers at Carnegie Mellon University showed how a pair of specially designed glasses could trick facial recognition systems. More recently, Google researchers created a sticker that could trick image recognition systems into classifying almost any picture in which the sticker appeared as a toaster.

These examples all raise a serious question: How can developers secure applications that rely on machine learning or other techniques that learn from -- and make decisions based on -- historical data? Multiple experts told GCN there are not many best practices at this point for securing AI, but it is now the focus of significant research.

Lynne Parker, the assistant director of artificial intelligence in the White House Office of Science and Technology Policy, said these challenges are an especially pressing concern for leaders at the Department of Defense.

“They want to have typically provable systems -- control systems -- but … the current approach to proving that systems are accurate does not apply to systems that learn,” Parker told GCN.

Anton Chuvakin, a security and risk management analyst at Gartner, echoed this assessment, saying AI systems lack the transparency that makes it possible to trust their output.

“How do I know the system has not been corrupted? How do I know it has not been affected by an attacker?” Chuvakin asked. “There is no real clear best practice" for confirming why a system is providing a particular response.

These concerns are top of mind for people beginning to put AI systems in place, according to a recent Deloitte survey of 1,100 IT executives currently working with the technology, which found security to be respondents' top concern.

Jeff Loucks, an author of the report and the executive director of Deloitte’s Center for Technology, Media and Telecommunications, said this concern can be broken down into specific areas:

  • The ability of hackers to steal data used to train the algorithms.
  • The manipulation of data to provide incorrect results.
  • The use of AI to impersonate authorized users to access a network.
  • The ability of AI to automate cyberattacks.

The survey found these concerns are causing some execs to push pause on their AI projects, some of which may have started as pilots that did not incorporate cybersecurity from the beginning of the process.

Government has been hesitant to adopt AI due to these concerns, Parker said. It is one research area the Pentagon's recently created Joint Artificial Intelligence Center plans to look at, she said, to “get at the issue of how do we leverage the AI advancements that have been made across all of DOD" and turn them into useful applications.

“This is a problem,” she said. “It is a research challenge because of the nature of how these AI systems work.”

The problem isn’t just making sure an algorithm can tell a turtle from a gun. Many AI systems are based on proprietary or sensitive training data. The ability to understand an algorithm’s underlying data is a big concern and challenge in the current AI space, according to Josh Elliot, the director of artificial intelligence at Booz Allen Hamilton.

“If you think about the ability to reverse engineer an algorithm … you’re effectively stealing that application and you’re displacing the competitive advantage that company [that developed it] may have in the marketplace," Elliot said.

Elliot’s colleague, Greg McCullough, who leads the development of cybersecurity tools that leverage AI for Booz Allen Hamilton, said the company has used these reverse engineering techniques to understand the dynamic domain generation algorithms that sophisticated hackers use. DGAs generate domains for malware to use when penetrating a network, dynamically changing the domain to allow it to slip past traditional roadblocks like firewalls.
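Real malware families each use their own secret recipe, but the mechanism is easy to sketch: hash a shared seed together with the current date so that the malware and its operator independently compute the same throwaway domains. A toy illustration (not any real family's algorithm):

```python
import hashlib
from datetime import date

def generate_domains(seed, day, count=5):
    """Toy DGA: derive a fresh batch of pseudo-random domains from a shared
    seed and the date, so malware and its operator agree on today's
    rendezvous domains without ever hardcoding them."""
    domains = []
    for i in range(count):
        digest = hashlib.sha256(f"{seed}-{day.isoformat()}-{i}".encode()).hexdigest()
        # Map hex digits to letters to get a plausible-looking label.
        label = "".join(chr(ord("a") + int(c, 16) % 26) for c in digest[:12])
        domains.append(label + ".com")
    return domains

print(generate_domains("examplebotnet", date(2018, 10, 1)))
```

Because the batch changes every day, blocklisting yesterday's domains does nothing, which is why defenders instead try to model what the generator's output looks like.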

McCullough said Booz Allen has been successful in using convolutional neural networks trained on domains generated by DGAs to predict what other domains might look like -- thus reverse engineering the DGA.
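The CNN approach learns character-level patterns from examples; a much cruder stand-in for the same intuition is to measure how random a domain's characters look, since DGA output tends to have far higher character entropy than human-chosen names. A minimal sketch (an illustrative heuristic, not Booz Allen's method):

```python
import math
from collections import Counter

def label_entropy(domain):
    """Shannon entropy of the characters in a domain's first label.
    Human-chosen names reuse letters; DGA output looks close to random."""
    label = domain.split(".")[0]
    counts = Counter(label)
    total = len(label)
    return -sum(c / total * math.log2(c / total) for c in counts.values())

print(label_entropy("google") < label_entropy("xkqhjzvbwpfy"))  # → True
```

Entropy alone produces plenty of false positives, which is exactly why learned models trained on real DGA output outperform hand-written rules.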

But these techniques for enhancing security can also be used by bad actors for an attack -- training an AI application to find the kind of attacks a target’s defenses don’t catch, he said.

“Nation-states have invested heavily in this area over the last few years, and now it's been commoditized and it's really causing quite a problem for every commercial organization that we’ve seen so far,” Elliot said.

What can agencies do about these concerns, whether it is adversarial inputs or attacks on the underlying data? Unfortunately, there isn’t much guidance.

“Some of that ability to provide adversarial inputs to neural nets is sort of endemic to the field,” Michael Barone, a lead cyber architect at Lockheed Martin, told GCN. “It is difficult to impossible at this state of the technology to say that it is [possible] to provide an impenetrable neural network.”

There is, however, some general advice. One step agencies can take, specifically with image recognition, is training an AI model on adversarial inputs that have been successful in tricking it, teaching it over time to avoid these mistakes -- for example, labeling the image of the differently textured turtle as a turtle to ensure a correct response.

“Of course you can see how that flywheel could get out of control trying to chase all those images and figure out how all of them could permute to create adversarial inputs," Barone said. "It's not necessarily something you want to follow all the way down the string, but that is an approach that has been used in some very high-level cases.”
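The retraining step described above -- fold the inputs that fooled the model back into the training set with their correct labels, then retrain -- can be sketched with a tiny perceptron (the data and labels here are invented for illustration):

```python
def train_perceptron(data, epochs=200, lr=0.1):
    """Tiny perceptron trainer; data is a list of (features, label) with
    label +1 ('turtle') or -1 ('rifle')."""
    w = [0.0] * len(data[0][0])
    for _ in range(epochs):
        for x, y in data:
            score = sum(wi * xi for wi, xi in zip(w, x))
            if y * score <= 0:  # misclassified: nudge weights toward y
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
    return w

def predict(w, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else -1

# Original (made-up) training set.
data = [([1.0, 0.1], 1), ([0.9, 0.2], 1), ([0.1, 1.0], -1), ([0.2, 0.9], -1)]
w = train_perceptron(data)

# An adversarial input that fooled the deployed model is added back
# with its correct label, and the model is retrained on the union.
adv = ([0.4, 0.8], 1)  # actually a turtle, but crafted to look rifle-like
data.append(adv)
w = train_perceptron(data)
print(predict(w, adv[0]))  # → 1: now classified as turtle
```

Barone's "flywheel" warning is visible even here: every new adversarial input grows the training set, and attackers can keep generating fresh ones faster than defenders can collect and relabel them.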

There is also the choice between supervised and unsupervised algorithms. Supervised algorithms use labeled, organized datasets to produce a desired result. With unsupervised algorithms, though, there is no desired result; the algorithm works on its own to find patterns in the underlying data. That makes unsupervised algorithms less likely to be tricked, because their results aren’t based on a curated set of data being provided to the algorithm.

“So, with one possible approach to adversarial attacks reduced -- not entirely removed -- the algorithm becomes less likely to be tricked,” he said.
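What "finding patterns on its own" means in practice can be shown with the simplest unsupervised algorithm, a one-dimensional k-means: given unlabeled numbers, it discovers the natural groups without anyone telling it what they are. A minimal sketch with invented data:

```python
def kmeans_1d(values, k=2, iters=10):
    """Minimal 1-D k-means: alternately assign each value to its nearest
    center, then move each center to the mean of its cluster. No labels
    are ever supplied -- structure emerges from the data alone."""
    centers = sorted(values)[:k]  # naive initialization
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            nearest = min(range(k), key=lambda i: abs(v - centers[i]))
            clusters[nearest].append(v)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

# Say these are request sizes: two natural groups emerge without labels.
print(kmeans_1d([1.0, 1.2, 0.9, 10.0, 10.5, 9.8]))
```

An attacker who cannot poison a curated training set has one less lever to pull, though, as Barone notes, the attack surface is reduced rather than removed.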

Another possibility is keeping a human in the loop to check on any decisions that AI suggests.

“Unfortunately,” said Gartner's Chuvakin, “this is quite unrealistic, especially if you have a large volume of data -- like say for speech recognition or image recognition. But unfortunately I’m not aware of any other way to put … controls in there.”

For the long term, Chuvakin suggested investing in research to make AI more transparent and understandable, which would result in the users of AI applications having greater trust in the output.

This is an area where the Defense Advanced Research Projects Agency is already investing with its Explainable Artificial Intelligence program. DARPA wants to create machine-learning techniques that are explainable and easier for humans to understand -- with a desire that this would then lead to more trust of AI applications.

Currently, machine learning uses neural networks to determine whether an image shows, say, a sea turtle or a rifle. These neural networks make decisions based on training data -- other turtles, or whatever the given application might be -- and can then look at a new image and determine, at a given probability, whether it matches that training data. The problem, however, is that users don’t know how the neural networks are reaching that decision.

DARPA wants a more explainable model that could look at a new picture, determine it is looking at a turtle and provide reasoning, such as the fact that it sees a shell, flippers and the color green.
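The kind of output DARPA is after pairs every label with the evidence behind it. A deliberately simple sketch of that interface (the attribute names and threshold are illustrative, not a real XAI system):

```python
def explain_classification(detected_attributes):
    """Return a label plus the evidence that drove it, in the spirit of
    DARPA's explainable-AI goal: not just 'turtle' but 'turtle, because
    I see a shell, flippers and green'."""
    turtle_cues = {"shell", "flippers", "green"}
    evidence = sorted(turtle_cues & detected_attributes)
    label = "turtle" if len(evidence) >= 2 else "unknown"
    return label, evidence

label, why = explain_classification({"shell", "flippers", "green", "water"})
print(label, why)  # → turtle ['flippers', 'green', 'shell']
```

The hard research problem, of course, is producing trustworthy attributes like these from a deep network's internal activations rather than from hand-written rules.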

“At this point, if you have some sort of AI that detects threats it may not be based on these explainable principles,” Chuvakin said.

While there is currently a dearth of standards for AI security, the National Institute of Standards and Technology is focusing on how to measure and enhance the security and trustworthiness of AI systems through its newly created AI program.

Kevin Stine, the chief of the applied cybersecurity division at NIST, spoke about the effort at a recent Information Security and Privacy Advisory Board meeting, saying it would look at creating a common taxonomy for talking about AI systems and study the best ways to secure AI architectures.

“We’re excited to have that work unfold over the next several months and beyond,” he told the board.

The effort is in its very early stages and will work in close partnership with the National Cybersecurity Center of Excellence at NIST. The program's final output is still to be determined, but it could result in a NIST publication, Stine said in a later interview with GCN.

“It’s a combination of kind of foundational research activities -- looking into the potential uses and opportunities to use AI for security -- but [also] how do you secure AI, so security around the algorithms and the decision engines and those types of things,” he said.

Having a standards body like NIST look at this space is a sorely needed step, experts said.

“I’m glad to hear NIST is starting to talk about this because I think there are some standards that need to be put in place for securing these AI systems,” Elliot said.

Chuvakin agreed NIST’s voice would be helpful, but he said it would be a while before it published any tangible guidance, pointing to the agency's internet-of-things program, which is just beginning to put out draft publications.

Getting to a point where agencies are comfortable with the security of their AI application, Chuvakin suggested, might require one simple ingredient: time.

“To have best practices you have to practice,” he said. “We haven’t really practiced this. So I think anyone who claims to have AI security best practices is lying.”
