Building ethics into AI
- By Matt Leonard
- Jun 13, 2018
Although Google will not renew its contract to supply artificial intelligence technology to the Defense Department to analyze drone video footage, the federal government will have no trouble procuring AI services, experts predicted at a June 12 Brookings Institution event.
When asked whether the federal government could develop AI in-house, Susan Hennessey, a Brookings fellow and executive editor of Lawfare, said agencies likely won’t have to resort to that.
“This is a cynical answer, but there are billions and billions of dollars here, and I’m quite confident that the American system will produce a willing participant wanting to design these things,” Hennessey said. But those contractors will likely differ in “character and values” from a company like Google. By deciding not to renew this contract, Google is giving up its ability to inject its ethics into the technology, she said.
Scott Tousley, the deputy director of the Homeland Security Advanced Research Projects Agency's Cyber Security Division, said he viewed Google’s decision as a positive development.
“I think what Google has done is actually a very good thing, which is to bring the discussion to the public" about where and when limits on the technology should be set, Tousley said. “Their position may stay the same, it may change over time -- companies shift and adjust.… But their point is a really good one: You can’t hide from the ethical questions.”
AI testing and pilot programs will play an important part in better understanding these ethical concerns, he said.
Regardless of who is developing the technology, national security agencies could reap significant benefits from AI that can find and recognize patterns significantly more quickly than a human.
James Baker, a visiting fellow at Brookings and former general counsel for the FBI, said AI will be key for leveraging the existing data within government agencies.
AI can help agencies better understand the data they have, finding patterns and spotting relationships, Baker said. “The FBI has a lot of investigative holdings from a variety of different sources, including from electronic surveillance, and I think utilizing AI to understand and analyze that [data] potentially could have huge benefits for us," he said.
However, he added that the use of AI "raises a number of privacy [questions] and issues with respect to the constitution and making sure that what is done is with strict adherence to the constitutional laws of the United States.”
AI will also play an increasing role in the future of cybersecurity, for both good and bad actors, Tousley said.
Machine learning systems have already begun to show their worth in this area, Hennessey added. There are machine learning systems that "go out, identify vulnerabilities in other systems, learn how to exploit them, come back to their own systems, identify that same vulnerability in themselves and then patch it,” she said.
It will be critical to understand how these systems can be tricked and the danger that comes with collecting massive amounts of data, panelists said. Researchers have shown it is easy to make minor changes to a stop sign that would be unnoticeable to a human, for example, but would render the sign unreadable to a computer vision system, Hennessey said.
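The stop-sign finding describes what researchers call an adversarial perturbation. A toy sketch can show the core idea, though this is not the researchers' actual attack: here a hypothetical linear scorer stands in for a computer vision model, and nudging each input value slightly against the model's gradient flips its decision even though the overall input barely changes.

```python
import numpy as np

# Toy linear "classifier": score = w . x + b; a positive score means
# the model reads the input as a stop sign. This stands in for a real
# vision model purely for illustration.
rng = np.random.default_rng(0)
w = rng.normal(size=100)  # hypothetical model weights
b = 0.0

# An input the model classifies confidently as a stop sign.
x = w / np.linalg.norm(w)
original_score = w @ x + b  # positive

# Fast-gradient-style perturbation: step each input value a small,
# fixed amount against the gradient of the score. For a linear model
# the gradient with respect to x is simply w.
epsilon = 0.25  # per-value change; small relative to each value's range
x_adv = x - epsilon * np.sign(w)
adversarial_score = w @ x_adv + b  # flips negative: no longer a "stop sign"

print(original_score > 0, adversarial_score < 0)
```

Every value moves by exactly `epsilon`, so the perturbed input stays close to the original, yet the accumulated effect across all dimensions is enough to flip the classification; real attacks on image models exploit the same imbalance between per-pixel change and total gradient alignment.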
The growth of mass data collection also must coincide with conversations on cybersecurity because large amounts of data can be attractive to hackers, Baker said.
“As we go forward with 5G and [the internet of things] and as more and more data is collected, that trove of information is going to be even richer and even more desirable for an adversary to obtain,” Baker said.
Since AI is in the early stages of mass deployment, however, any vision of the future is simply speculation at this point, Baker said: “We don’t really know [how AI will be used in a national security setting], and that’s one of the things that concerns me the most.”
Matt Leonard is a former reporter for GCN.