
IARPA seeks to plug privacy holes in AI

Hackers are using adversarial techniques to corrupt artificial intelligence and machine learning models by tampering with their training data.

According to the Intelligence Advanced Research Projects Agency (IARPA), recent research shows AI systems are vulnerable to exploits such as "reconstructing training data using only output predictions, revealing statistical distribution information of training datasets, and performing membership queries for a specific training data example."

To secure AI/ML systems, IARPA has issued a draft broad agency announcement for technologies that defend against privacy attacks that aim to reveal information about the individuals in the training dataset.

The goal of IARPA's Secure, Assured, Intelligent Learning Systems (SAILS) program is to develop models that will not reveal sensitive information even when under attack by two kinds of exploits: model inversion attacks, which can reconstruct training data from a model's predictions, and membership inference attacks, which can determine whether a specific example was part of the training dataset.
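To make the membership inference threat concrete, the sketch below shows the simplest version of such an attack, a confidence-threshold baseline. It is an illustration only, not drawn from the SAILS solicitation: the dataset, model and threshold are hypothetical stand-ins, and the attacker is assumed to have only black-box access to a scikit-learn-style classifier's predicted probabilities.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical data and model standing in for a sensitive training set.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)

# An overfit model tends to be more confident on examples it was trained on,
# and that gap is exactly what a membership inference attacker exploits.
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_train, y_train)

def true_label_confidence(model, X, y):
    # Probability the model assigns to each example's true label.
    probs = model.predict_proba(X)
    return probs[np.arange(len(y)), y]

# The attacker guesses "member" whenever that confidence clears a threshold.
THRESHOLD = 0.9
flagged_train = true_label_confidence(model, X_train, y_train) > THRESHOLD
flagged_test = true_label_confidence(model, X_test, y_test) > THRESHOLD

print("flagged as members, actual training data: %.2f" % flagged_train.mean())
print("flagged as members, unseen data:          %.2f" % flagged_test.mean())

A large gap between the two printed rates is the leakage SAILS-style defenses are meant to close.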

SAILS will focus on these privacy attacks against models used for text-based applications as well as facial and voice recognition systems, both in cases where hackers have white-box access, or full knowledge of the targeted AI/ML model, and in cases where they have only minimal knowledge of the system, or black-box access.

IARPA said new techniques could be related to training procedures, model architectures or new pre- and post-processing procedures.
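As one hypothetical illustration of what a post-processing defense might look like, the sketch below coarsens a model's output so a caller sees only a rounded top prediction rather than the full probability vector. The function name and parameters are invented for this example and are not taken from the draft BAA.

import numpy as np

def harden_output(probabilities, top_k=1, decimals=1):
    # Release only a coarsened top-k prediction instead of the full, exact
    # probability vector, which gives model inversion and membership
    # inference attacks far less signal to work with.
    probabilities = np.asarray(probabilities, dtype=float)
    top = np.argsort(probabilities)[::-1][:top_k]
    return [(int(i), round(float(probabilities[i]), decimals)) for i in top]

# A raw softmax output versus what the hardened endpoint would expose.
raw_scores = [0.02, 0.91, 0.05, 0.02]
print(harden_output(raw_scores))   # [(1, 0.9)]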

The 24-month program will have four six-month competition rounds, with participants delivering software containers that include algorithms and application programming interfaces providing defenses to protect sensitive training data and statistical information in AI/ML models. SAILS is expected to start in November 2019.

Comments on the draft BAA are due Jan. 31. More information is available here.

About the Author

Susan Miller is executive editor at GCN.

Over a career spent in tech media, Miller has worked in editorial, print production and online, starting on the copy desk at IDG’s ComputerWorld, moving to print production for Federal Computer Week and later helping launch websites and email newsletter delivery for FCW. After a turn at Virginia’s Center for Innovative Technology, where she worked to promote technology-based economic development, she rejoined what was to become 1105 Media in 2004, eventually managing content and production for all the company's government-focused websites. Miller shifted back to editorial in 2012, when she began working with GCN.

Miller has a BA and MA from West Chester University and did Ph.D. work in English at the University of Delaware.

Connect with Susan at smiller@gcn.com or @sjaymiller.
