
Why government must demystify AI

As agencies integrate artificial intelligence into government applications, they must ensure citizens understand how the technology works to deliver services, said Sunmin Kim, the technology policy advisor for Sen. Brian Schatz (D-Hawaii).



Schatz recently introduced the AI in Government Act of 2018, which would authorize an emerging technology policy lab within the General Services Administration to ensure that the use of AI and emerging technologies by the federal government is in the public interest.

“As we move toward more government services being automated with AI, I think it's equally as important that we start building a body of evidence and scholarship, so that we can understand better how these algorithms are working and preempt and address some of the privacy and civil liberties issues that might come up,” Kim said at an Oct. 23 event hosted by NVIDIA.

Michael Garris, the founder and chair of the AI community of interest at the National Institute of Standards and Technology, said that since AI is based on probabilities, it’s important to consider its accuracy and potential biases before implementation.

“We’re talking today about the government directing and distributing citizen benefits and services [with AI], and this requires the highest bar of assurance that AI-driven systems will be and are reliable, safe, secure, privacy preserving and -- very important -- unbiased and not discriminatory,” he said.

NIST is currently working to determine the best ways to measure the trustworthiness of AI and will then use those testing methods to help develop standards for the industry.

“We have to understand why AI systems do what they do; otherwise, humans won’t necessarily trust the robots, the robots won't trust the humans, and the robots won’t trust each other,” Lockheed Martin CTO Keoki Jackson said. “So that’s a really critical area of research that America should be leading in.”

About the Author

Matt Leonard is a former reporter for GCN.

