Automated systems that teach themselves can be manipulated by adversaries.
As government agencies begin turning security over to automated systems that can teach themselves, the idea that hackers can covertly influence those systems is becoming the latest (and perhaps the greatest) concern for cybersecurity professionals.
Adversarial machine learning (AML) is a research field that “lies at the intersection of machine learning and computer security,” according to Wikipedia. “It aims to enable the safe adoption of machine-learning techniques in adversarial settings like spam filtering, malware detection and biometric recognition.” According to Nicolas Papernot, Google PhD Fellow in Security at Pennsylvania State University, AML seeks to better understand the behavior of machine-learning algorithms once they are deployed in adversarial settings -- that is, "any setting where the adversary has an incentive, may it be financial or of some other nature, to force the machine-learning algorithms to misbehave.”
“Unfortunately, current machine-learning models have a large attack surface as they were designed and trained to have good average performance, but not necessarily worst-case performance, which is typically what is sought after from a security perspective,” Papernot said. As such, they are vulnerable to generic attacks, which often can be conducted regardless of the machine-learning model type or the task being solved.
Yevgeniy Vorobeychik, professor of electrical engineering and computer science at Vanderbilt University, pointed out that while some government agencies -- like the Defense Department and its research arm, DARPA -- are “reaching a level of sophistication that we [academics] do not have,” AML is just beginning to emerge in this sector. Machine learning is being “seriously considered” by many governments and affiliated groups, such as metropolitan and national law enforcement agencies, to forecast criminal activity, for example.
In the public sector, machine learning can be used in many applications, ranging from “techniques for defending against cyber attacks; for analyzing scientific data, such as astronomy observations or data from large scale experiments conducted by the Department of Energy; for biological and medical research; or for building crime-prediction models, used in parole and sentencing decisions,” according to Tudor A. Dumitras, assistant professor at the University of Maryland at College Park. These systems are all susceptible to AML attacks, he added.
To illustrate the problem, Dumitras pointed to cyber defense systems, which must classify artifacts or activities -- such as executable programs, network traffic or emails -- as benign or malicious. To do this, he said, machine-learning algorithms start from a few known benign and known malicious examples and learn models of malicious activity from them, without requiring a predetermined description of those activities.
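To make the approach Dumitras describes concrete, here is a minimal, illustrative sketch (not any agency's actual system): a nearest-centroid classifier that learns a model of "malicious" from a handful of labeled feature vectors. The features and values are invented for illustration.

```python
# Toy sketch: learn "benign" vs. "malicious" from a few labeled examples.
# Feature vectors are hypothetical, e.g. [attachment count, link count, ALL-CAPS ratio].

def centroid(vectors):
    """Component-wise mean of a list of equal-length vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def train(benign, malicious):
    """The learned "model" is just one centroid per class."""
    return {"benign": centroid(benign), "malicious": centroid(malicious)}

def classify(model, sample):
    """Label a new sample by its nearest class centroid."""
    return min(model, key=lambda label: distance(model[label], sample))

benign_examples = [[0, 1, 0.0], [1, 0, 0.1], [0, 0, 0.0]]
malicious_examples = [[3, 8, 0.9], [2, 6, 0.7], [4, 9, 0.8]]

model = train(benign_examples, malicious_examples)
print(classify(model, [3, 7, 0.8]))   # a spam-like sample -> malicious
```

Nothing in the training step hard-codes what "malicious" looks like; the model is derived entirely from the labeled examples, which is exactly what gives an adversary something to manipulate.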
“An intelligent adversary can subvert these techniques and cause them to produce the wrong outputs,” he said. Broadly, Dumitras said that there are three ways adversaries can do this:
- Attack the trained model by crafting examples that cause the machine-learning algorithm to mislabel an instance or to learn a skewed model.
- Attack the implementation by finding exploitable bugs in the code.
- Exploit the fact that a machine-learning model is often a black box to the users.
“As a consequence, users may not realize that the model has a blind spot or that it is based on artifacts of the data rather than meaningful features,” Dumitras said, “as machine-learning models often produce malicious or benign determinations, but do not outline the reasoning behind these conclusions.”
AML is becoming important in the public sector and law enforcement, because computer scientists “have reached sufficient maturity in machine-learning research for machine-learning models to perform very well on many challenging tasks, sometimes superseding human performance,” according to Papernot. “Hence, machine learning is becoming pervasive in many applications, and is increasingly a candidate for innovative cybersecurity solutions.” However, Papernot said that as long as vulnerabilities -- such as the ones identified with adversarial examples -- are not fully understood, the predictions made by machine-learning models will remain difficult to trust.
A large number of specific attacks against machine learning have been discovered over the past decade, Dumitras said. “While the problem that the attacker must solve is theoretically hard, it is becoming clear that it is possible to find practical attacks against most practical systems,” he said. For example, hackers already know how to evade machine learning-based detectors; how to poison the training phase so that the model produces the outputs they want; how to steal a proprietary machine-learning model by querying it repeatedly; and how to invert a model to learn private information about the users it is based on.
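One of the attack classes Dumitras lists, poisoning the training phase, can be sketched on a toy nearest-centroid classifier (an illustrative stand-in, not a real detector): by slipping mislabeled copies of an attack sample into the "benign" training data, the attacker drags the benign centroid toward it until the model misclassifies the attack.

```python
# Toy sketch of training-set poisoning on a nearest-centroid classifier.

def centroid(vectors):
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def classify(centroids, sample):
    return min(centroids, key=lambda label: distance(centroids[label], sample))

benign = [[0.0, 1.0], [1.0, 0.0], [0.5, 0.5]]
malicious = [[8.0, 9.0], [9.0, 8.0], [8.5, 8.5]]
attack_sample = [7.0, 7.0]

clean = {"benign": centroid(benign), "malicious": centroid(malicious)}
print(classify(clean, attack_sample))     # -> malicious (correctly caught)

# Poisoning: the attacker injects copies of the attack sample into the
# training data with the wrong "benign" label.
poisoned_benign = benign + [[7.0, 7.0]] * 20
poisoned = {"benign": centroid(poisoned_benign), "malicious": centroid(malicious)}
print(classify(poisoned, attack_sample))  # -> benign (evades detection)
```

Real poisoning attacks are subtler and work against far more capable models, but the mechanism is the same: the model faithfully learns whatever the training data says, including the attacker's contributions.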
At the same time, defending against these attacks is largely an open question. “There are only a few known defenses,” Dumitras said, “which generally work only for specific attacks and lose their effectiveness when the adversary changes strategies.”
For example, he pointed to the spread of “fake news,” which can erode trust in the government. The proliferation of fake news -- especially on social media sites like Facebook, Twitter or Google -- is amplified by users clicking on, commenting on, or liking these fraudulent stories. This behavior constitutes “a form of poisoning, where the recommendation algorithms operate on unreliable inputs, and they are likely to promote more fake news,” he said.
These attacks have created “a very asymmetric warfare for the good guys… the bad guys have [so far] had the benefits on their side,” said Evan Wright, principal data scientist for the threat intelligence company Anomali. “The good guys are forced to block everything.”
However, the “good guys” are not totally out of luck. By proactively benchmarking the vulnerabilities of their machine-learning algorithms, government agencies and law enforcement groups can take a big first step to mapping their attack surface, according to Papernot. He recommended agencies start with software like cleverhans, a Python library to benchmark machine-learning systems' vulnerability to adversarial examples.
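The vulnerability cleverhans benchmarks, adversarial examples, can be illustrated without the library itself (which targets deep-learning frameworks) on a toy linear scorer with made-up weights: nudge each input feature a small step against the sign of its weight, and a sample the model flags as malicious slides below the decision threshold while changing only slightly. This is a sketch of the fast-gradient-sign idea, not the cleverhans API.

```python
# Toy sketch of an adversarial (evasion) example against a linear scorer.
# Weights are hypothetical: a positive weight pushes toward "malicious".
weights = [2.0, -1.0, 3.0]
bias = -4.0

def score(x):
    return bias + sum(w * xi for w, xi in zip(weights, x))

def label(x):
    return "malicious" if score(x) > 0 else "benign"

def evade(x, eps=0.9):
    """FGSM-style step: move each feature against the sign of its weight,
    lowering the malicious score while perturbing the input only slightly."""
    return [xi - eps * (1 if w > 0 else -1) for w, xi in zip(weights, x)]

sample = [1.0, 0.5, 1.2]        # score = 2.0 - 0.5 + 3.6 - 4.0 = 1.1 -> malicious
adversarial = evade(sample)
print(label(sample), label(adversarial))   # malicious benign
```

Against a deep network the attacker uses the gradient rather than reading the weights directly, but the principle is identical, and tools like cleverhans automate exactly this kind of probing so defenders can measure it first.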
“Once a machine-learning model is deployed and adversaries can interact with it -- even in a limited fashion such as an API -- it should be assumed that motivated adversaries are capable of reverse-engineering the model and potentially the data it was trained on,” Papernot said. He advised government agencies and law enforcement, therefore, to closely monitor the privacy costs associated with training the model.
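How reverse-engineering through an API can work is easiest to see in the linear case. In this illustrative sketch (the "secret" weights are invented), an endpoint that returns a confidence score leaks the entire model in a handful of queries: one query with the zero vector recovers the bias, and one query per basis vector recovers each weight.

```python
# Toy sketch of model extraction via queries against a score-returning API.

SECRET_W = [1.5, -2.0, 0.75]   # hypothetical proprietary model
SECRET_B = 0.5

def api_score(x):
    """Stand-in for a deployed model's query endpoint."""
    return SECRET_B + sum(w * xi for w, xi in zip(SECRET_W, x))

def extract(n_features, query):
    """Recover a linear model exactly from n_features + 1 queries."""
    b = query([0.0] * n_features)      # the zero vector yields the bias
    w = []
    for i in range(n_features):
        e = [0.0] * n_features
        e[i] = 1.0                     # basis vector for feature i
        w.append(query(e) - b)         # its score minus the bias is weight i
    return w, b

stolen_w, stolen_b = extract(3, api_score)
print(stolen_w, stolen_b)
```

Real extraction attacks against nonlinear models and label-only APIs need many more queries, but the takeaway Papernot draws is the same: any query access leaks information about the model, and rate limits or rounded outputs only slow the leak down.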
Vorobeychik recommended that public sector IT professionals get ahead of the problem by considering all of these potential vulnerabilities and conducting red-team exercises for any machine-learning algorithms they might put in place. “Red-team exercises would go a long way… to testing these more automated tools,” he said.
Systematic solutions often “require making unrealistic assumptions about the adversary,” Dumitras said. It is possible to prevent AML attacks in specific cases, but significant new research is needed to develop effective defenses. For example, he said that if the adversary “cannot query the machine-learning system, does not have access to the training set, does not know the design of the system or the features it uses, and does not have access to the implementation, it would be challenging to craft adversarial samples.”
However, Dumitras added, these assumptions are usually unrealistic. Since many government systems rely on open-source machine learning libraries, the adversary is free to examine the code for potential exploitable bugs. “It may be tempting, in this case, to turn to ‘security through obscurity,’ by hiding as much information as possible about how the system operates,” he said. “But recent black-box attacks suggest that it is possible to craft effective adversarial samples with minimal information about the system.”
Sanitizing input data can also be effective, as it may reveal suspicious data before it is provided to the machine-learning algorithm, but manual sanitization cannot be done at scale, he said. “Ultimately, more basic research is needed to develop effective defenses against adversarial machine learning attacks.”
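One simple automated form of the sanitization Dumitras mentions is a statistical outlier filter that flags suspicious inputs before they reach the learning algorithm. The sketch below (illustrative only, with invented data) uses a z-score test: values far from the mean, measured in standard deviations, are set aside for review.

```python
# Toy sketch of automated input sanitization via a z-score outlier filter.

def sanitize(values, threshold=2.5):
    """Return (kept, flagged): values more than `threshold` standard
    deviations from the mean are flagged as suspicious."""
    n = len(values)
    mean = sum(values) / n
    std = (sum((v - mean) ** 2 for v in values) / n) ** 0.5
    kept, flagged = [], []
    for v in values:
        (flagged if std and abs(v - mean) / std > threshold else kept).append(v)
    return kept, flagged

# Hypothetical feature values (e.g., email sizes in KB) with one poisoned outlier.
sizes = [12, 14, 11, 13, 12, 15, 14, 13, 900]
kept, flagged = sanitize(sizes)
print(flagged)   # [900]
```

Note the filter's own weakness, which mirrors the article's broader point: a large outlier inflates the very standard deviation used to detect it, so robust statistics (median-based measures, for instance) are generally preferred in practice, and a patient adversary can still poison gradually with values that individually pass the filter.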