Hardening algorithms against adversarial AI

 


How can developers secure artificial intelligence applications when the underlying data is vulnerable to hackers?

The video shows hands rotating a 3D-printed turtle in front of an image classification system, and the results are what one might expect: terrapin, mud turtle, loggerhead. But then a new turtle with a different texture is presented. This time the results are more surprising: the algorithm consistently classifies the image as a rifle, not a turtle.

This demonstration was part of an experiment conducted last year by MIT's Computer Science and Artificial Intelligence Lab. Anish Athalye, a PhD candidate at MIT and author of a paper based on this research, said at the time that these results have concerning implications for the technology underlying many of the advancements being made in AI.

“Our robust adversarial examples, of which the rifle/turtle is one demonstration, show that adversarial examples can be made to reliably fool ML models in the physical world, contributing to evidence indicating that real-world machine learning systems may be at risk,” Athalye told GCN.
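The MIT team used its own method for building physically robust adversarial objects, but the core idea -- nudging an input in the direction that most increases a model's error -- can be illustrated with the classic fast gradient sign method. The sketch below is a minimal, generic PyTorch example, not the technique from the paper; the model choice, epsilon value and helper name are assumptions.

```python
# Minimal sketch of the fast gradient sign method (FGSM), a classic way to
# craft an adversarial image. Illustrative only -- not the MIT turtle method.
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.resnet50(pretrained=True).eval()

def fgsm_attack(model, image, true_label, epsilon=0.01):
    """Return a copy of `image` perturbed to push the model away from `true_label`."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Step each pixel slightly in the direction that increases the loss.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()

# image: a (1, 3, 224, 224) tensor in [0, 1]; label: a (1,) tensor of the true class.
# adv = fgsm_attack(model, image, label)
# print(model(adv).argmax(dim=1))  # often no longer the true class
```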

And this is not the only example of this kind of adversarial AI being demonstrated in a laboratory setting.

In 2016, researchers at Carnegie Mellon University showed how a pair of specially designed glasses could trick facial recognition systems. More recently, Google researchers created a sticker that could trick image recognition systems into classifying any picture containing the sticker as a toaster.

These examples all raise a serious question: How can developers secure applications that rely on machine learning or other techniques that learn from -- and make decisions based on -- historical data? Multiple experts told GCN there are not many best practices at this point for securing AI, but it is now the focus of significant research.

Lynne Parker, the assistant director of artificial intelligence in the White House Office of Science and Technology Policy, spoke about these challenges and said they are a particular concern for leaders at the Department of Defense.

“They want to have typically provable systems -- control systems -- but … the current approach to proving that systems are accurate does not apply to systems that learn,” Parker told GCN.

Anton Chuvakin, a security and risk management analyst at Gartner, echoed this assessment, saying AI systems lack the transparency that makes it possible to trust their output.

“How do I know the system has not been corrupted? How do I know it has not been affected by an attacker?” Chuvakin asked. “There is no real clear best practice" for confirming why a system is providing a particular response.

These concerns are top of mind for people beginning to put AI systems in place, according to a recent Deloitte survey of 1,100 IT executives currently working with the technology, which found security to be respondents' top concern.

Jeff Loucks, an author of the report and executive director of Deloitte’s Center for Technology, Media and Telecommunications, said this concern can be broken down into specific areas:

  • The ability of hackers to steal data used to train the algorithms.
  • The manipulation of data to provide incorrect results.
  • The use of AI to impersonate authorized users to access a network.
  • The ability of AI to automate cyberattacks.

The survey found these concerns are causing some executives to push pause on their AI projects, some of which may have started as pilots that did not incorporate cybersecurity from the beginning.

Government has been hesitant to adopt AI due to these concerns, Parker said. It is one research area the Pentagon's recently created Joint Artificial Intelligence Center plans to look at, she said, to “get at the issue of how do we leverage the AI advancements that have been made across all of DOD” and turn them into useful applications.

“This is a problem,” she said. “It is a research challenge because of the nature of how these AI systems work.”

The problem isn’t just making sure an algorithm can tell a turtle from a gun. Many AI systems are based on proprietary or sensitive training data. The ability to understand an algorithm’s underlying data is a big concern and challenge in the current AI space, according to Josh Elliot, the director of artificial intelligence at Booz Allen Hamilton.

“If you think about the ability to reverse engineer an algorithm … you’re effectively stealing that application and you’re displacing the competitive advantage that company [that developed it] may have in the marketplace," Elliot said.

Elliot’s colleague, Greg McCullough, who leads the development of cybersecurity tools that leverage AI for Booz Allen Hamilton, said the company has used these reverse-engineering techniques to understand the domain generation algorithms (DGAs) that sophisticated hackers use. DGAs generate the domains malware uses when penetrating a network, dynamically changing them so the traffic can slip past traditional roadblocks like firewalls.

McCullough said Booz Allen has been successful in using convolutional neural networks trained on domains generated by DGAs to predict what other generated domains might look like -- thus reverse-engineering the DGA.
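The details of Booz Allen's models are not public, but the general pattern -- a character-level convolutional network that learns what machine-generated domains tend to look like -- can be sketched in a few lines. Everything below (the architecture, the sizes and the example domain) is an illustrative assumption, and such a model would need labeled benign and DGA-generated domains to train on before its scores meant anything.

```python
# Toy character-level CNN for flagging DGA-style domains -- a sketch of the
# general approach described above, not Booz Allen's actual model.
import torch
import torch.nn as nn

CHARS = "abcdefghijklmnopqrstuvwxyz0123456789-."
MAX_LEN = 64

def encode(domain):
    """Map a domain string to a fixed-length tensor of character indices (0 = padding)."""
    idx = [CHARS.find(c) + 1 for c in domain.lower()[:MAX_LEN]]
    return torch.tensor(idx + [0] * (MAX_LEN - len(idx)))

class DGADetector(nn.Module):
    def __init__(self, vocab=len(CHARS) + 1, emb=32, filters=64):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb, padding_idx=0)
        self.conv = nn.Conv1d(emb, filters, kernel_size=3, padding=1)
        self.head = nn.Linear(filters, 1)  # output near 1 = likely DGA-generated

    def forward(self, x):
        h = self.embed(x).transpose(1, 2)                # (batch, emb, len)
        h = torch.relu(self.conv(h)).max(dim=2).values   # global max pool over characters
        return torch.sigmoid(self.head(h))

# model = DGADetector()
# score = model(encode("xkqpwzrtvb.com").unsqueeze(0))  # meaningful only after training
```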

But these techniques for enhancing security can also be used by bad actors in an attack -- training an AI application to find the kinds of attacks a target’s defenses don’t catch, he said.

“Nation-states have invested heavily in this area over the last few years, and now it's been commoditized and it's really causing quite a problem for every commercial organization that we’ve seen so far,” Elliot said.

What can agencies do about these concerns, whether it’s adversarial inputs or attackers getting at the underlying data? Unfortunately, there isn’t much guidance.

“Some of that ability to provide adversarial inputs to neural nets is sort of endemic to the field,” Michael Barone, a lead cyber architect at Lockheed Martin, told GCN. “It is difficult to impossible at this state of the technology to say that it is [possible] to provide an impenetrable neural network.”

There is, however, some general advice. One step agencies can take, specifically with image recognition, is training an AI model on adversarial inputs that have been successful in tricking it, teaching it over time to avoid these mistakes -- for example, labeling the image of the differently textured turtle as a turtle to ensure a correct response.

“Of course you can see how that flywheel could get out of control trying to chase all those images and figure out how all of them could permute to create adversarial inputs," Barone said. "It's not necessarily something you want to follow all the way down the string, but that is an approach that has been used in some very high-level cases.”
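In code, this retraining approach (often called adversarial training) might look roughly like the sketch below, which reuses the hypothetical fgsm_attack helper from the earlier example; the epsilon value and the even weighting of clean and adversarial loss are assumptions, not recommendations.

```python
# Rough sketch of adversarial training: craft perturbed copies of each batch
# and train on them alongside the clean images. Assumes the fgsm_attack helper
# from the earlier sketch; epsilon and the 50/50 loss weighting are arbitrary.
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, images, labels, epsilon=0.01):
    model.train()
    # Generate adversarial versions of this batch against the current model.
    adv_images = fgsm_attack(model, images, labels, epsilon)
    optimizer.zero_grad()
    loss = 0.5 * F.cross_entropy(model(images), labels) \
         + 0.5 * F.cross_entropy(model(adv_images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```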

There is also the consideration of unsupervised versus supervised algorithms. Supervised algorithms learn from curated, labeled datasets to produce a desired result. With unsupervised algorithms, though, there is no desired result; the algorithm works on its own to find patterns in the underlying data. This makes unsupervised algorithms less likely to be tricked, because their results aren’t based on a curated dataset being fed to the algorithm.

“So, with one possible approach to adversarial attacks reduced -- not entirely removed -- the algorithm becomes less likely to be tricked,” he said.
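To make the distinction concrete, here is a toy contrast of the two regimes using scikit-learn; the data is random and purely illustrative.

```python
# Toy contrast of supervised vs. unsupervised learning (illustrative data only).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.random((200, 4))             # feature vectors
y = (X[:, 0] > 0.5).astype(int)      # labels exist only in the supervised case

# Supervised: the model fits itself to curated (features, label) pairs, so
# poisoned or mislabeled training data directly shapes its answers.
clf = LogisticRegression().fit(X, y)

# Unsupervised: no labels are provided; the algorithm looks for structure
# (here, two clusters) in the raw data on its own.
clusters = KMeans(n_clusters=2, n_init=10).fit_predict(X)
```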

Another possibility is keeping a human in the loop to check on any decisions that AI suggests.

“Unfortunately,” said Gartner's Chuvakin, “this is quite unrealistic, especially if you have a large volume of data -- like say for speech recognition or image recognition. But unfortunately I’m not aware of any other way to put … controls in there.”

For the long term, Chuvakin suggested investing in research to make AI more transparent and understandable, which would result in the users of AI applications having greater trust in the output.

This is an area where the Defense Advanced Research Projects Agency is already investing through its Explainable Artificial Intelligence program. DARPA wants to create machine-learning techniques that are explainable and easier for humans to understand -- with the goal that this would lead to more trust in AI applications.

Currently, machine learning uses neural networks to determine whether a turtle is a sea creature or a rifle. These neural networks make decisions based on training data of other turtles, or whatever the given application might be, and can then look at a new image and determine, with a given probability, whether it matches that training data. The problem, however, is that users don’t know how the neural networks reach that decision.

DARPA wants a more explainable model that could look at a new picture, determine it is looking at a turtle and provide reasoning, such as the fact that it sees a shell, flippers and the color green.
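One common, if limited, way to approximate that kind of reasoning today is a saliency map, which highlights the pixels a prediction was most sensitive to. The sketch below is a generic gradient-based example, not the DARPA program's approach, and it reuses the hypothetical ResNet model from the first example.

```python
# Minimal saliency-map sketch: highlight which pixels most influenced the
# model's top prediction. A generic technique, not DARPA's XAI method.
import torch
import torchvision.models as models

model = models.resnet50(pretrained=True).eval()

def saliency(model, image):
    """Return a per-pixel importance map for the model's top predicted class."""
    image = image.clone().detach().requires_grad_(True)
    scores = model(image)                      # (1, num_classes)
    scores[0, scores.argmax()].backward()      # gradient of the winning class score
    # Large gradient magnitude = the prediction is sensitive to that pixel.
    return image.grad.abs().max(dim=1).values  # (1, H, W)

# sal = saliency(model, image)   # image: (1, 3, 224, 224) tensor in [0, 1]
```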

“At this point, if you have some sort of AI that detects threats it may not be based on these explainable principles,” Chuvakin said.

While there is currently a dearth of standards for AI security, the National Institute of Standards and Technology is focusing on how to measure and enhance the security and trustworthiness of AI systems through its newly created AI program.

Kevin Stine, the chief of the applied cybersecurity division at NIST, spoke about the effort at a recent Information Security and Privacy Advisory Board meeting, saying it would look at creating a common taxonomy for talking about AI systems and study the best ways to secure AI architectures.

“We’re excited to have that work unfold over the next several months and beyond,” he told the board.

The effort is in its very early stages and will work in close partnership with the National Cybersecurity Center of Excellence at NIST. The program's final output is still to be determined, but it could result in a NIST publication, Stine said in a later interview with GCN.

“It’s a combination of kind of foundational research activities -- looking into the potential uses and opportunities to use AI for security -- but [also] how do you secure AI, so security around the algorithms and the decision engines and those types of things,” he said.

Having a standards body like NIST look at this space is a sorely needed step, experts said.

“I’m glad to hear NIST is starting to talk about this because I think there are some standards that need to be put in place for securing these AI systems,” Elliot said.

Chuvakin agreed NIST’s voice would be helpful, but he said it would be a while before it published any tangible guidance, pointing to the agency's internet-of-things program, which is just beginning to put out draft publications.

Getting to a point where agencies are comfortable with the security of their AI applications, Chuvakin suggested, might require one simple ingredient: time.

“To have best practices you have to practice,” he said. “We haven’t really practiced this. So I think anyone who claims to have AI security best practices is lying.”
