Why government must demystify AI
- By Matt Leonard
- Oct 23, 2018
As agencies integrate artificial intelligence into government applications, they must ensure citizens understand how the technology is used to deliver services, said Sunmin Kim, technology policy advisor for Sen. Brian Schatz (D-Hawaii).
Schatz recently introduced the AI in Government Act of 2018, which would authorize an emerging technology policy lab within the General Services Administration to ensure that the use of AI and emerging technologies by the federal government is in the public interest.
“As we move toward more government services being automated with AI, I think it's equally as important that we start building a body of evidence and scholarship, so that we can understand better how these algorithms are working and preempt and address some of the privacy and civil liberties issues that might come up,” Kim said at an Oct. 23 event hosted by NVIDIA.
Michael Garris, the founder and chair of the AI community of interest at the National Institute of Standards and Technology, said that since AI is based on probabilities, it’s important to consider its accuracy and potential biases before implementation.
“We’re talking today about the government directing and distributing citizen benefits and services [with AI], and this requires the highest bar of assurance that AI-driven systems will be and are reliable, safe, secure, privacy preserving and -- very important -- unbiased and not discriminatory,” he said.
NIST is currently working to identify the best ways to measure the trustworthiness of AI and to use those testing methods to help develop standards for the industry.
“We have to understand why AI systems do what they do; otherwise, humans won’t necessarily trust the robots, the robots won't trust the humans, and the robots won’t trust each other,” Lockheed Martin CTO Keoki Jackson said. “So that’s a really critical area of research that America should be leading in.”
Matt Leonard is a former reporter for GCN.