Human-machine interaction

Why government must demystify AI

As agencies integrate artificial intelligence into government applications, they must ensure citizens understand how the technology works to deliver services, said Sunmin Kim, the technology policy advisor for Sen. Brian Schatz (D-Hawaii).


Schatz recently introduced the AI in Government Act of 2018, which would authorize an emerging technology policy lab within the General Services Administration to ensure that the use of AI and emerging technologies by the federal government is in the public interest.

“As we move toward more government services being automated with AI, I think it's equally as important that we start building a body of evidence and scholarship, so that we can understand better how these algorithms are working and preempt and address some of the privacy and civil liberties issues that might come up,” Kim said at an Oct. 23 event hosted by NVIDIA.

Michael Garris, the founder and chair of the AI community of interest at the National Institute of Standards and Technology, said that since AI is based on probabilities, it’s important to consider its accuracy and potential biases before implementation.

“We’re talking today about the government directing and distributing citizen benefits and services [with AI], and this requires the highest bar of assurance that AI-driven systems will be and are reliable, safe, secure, privacy preserving and -- very important -- unbiased and not discriminatory,” he said.
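One concrete way to apply the pre-deployment check Garris describes is to compare a model's accuracy across demographic groups before it is used to distribute benefits. The Python sketch below is a minimal, hypothetical illustration; the data, group labels, and the 5-point tolerance are assumptions for the example, not anything the speakers or NIST prescribed.

# Minimal sketch: check whether a model's accuracy differs across groups.
# The data, group labels, and 5-point threshold are illustrative assumptions.
from collections import defaultdict

def accuracy_by_group(y_true, y_pred, groups):
    """Return accuracy computed separately for each group label."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        total[group] += 1
        correct[group] += int(truth == pred)
    return {g: correct[g] / total[g] for g in total}

# Hypothetical predictions from a benefits-eligibility model.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 1, 1, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

scores = accuracy_by_group(y_true, y_pred, groups)
gap = max(scores.values()) - min(scores.values())
print(scores)            # e.g. {'a': 1.0, 'b': 0.25}
if gap > 0.05:           # flag gaps above an assumed 5-point tolerance
    print(f"Warning: accuracy gap of {gap:.2f} across groups")

A gap this large would signal that the system fails Garris's "unbiased and not discriminatory" bar for at least one group, even if its overall accuracy looks acceptable.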

NIST is currently working to determine the best ways to measure the trustworthiness of AI and then use those testing methods to help develop standards for the industry.
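The article does not specify which tests NIST is developing. As one hedged example of what "measuring trustworthiness" can mean for a probabilistic system, the sketch below computes expected calibration error, a common check of whether a model's stated confidence matches how often it is actually right. The data and the choice of metric are assumptions for illustration, not a published NIST test.

# Sketch: expected calibration error (ECE), one common way to quantify
# whether a model's predicted probabilities match observed outcomes.
# Illustrative assumption only, not a test NIST has published.
def expected_calibration_error(probs, labels, n_bins=10):
    """Average |confidence - accuracy| over equal-width probability bins."""
    n = len(probs)
    ece = 0.0
    for b in range(n_bins):
        lo, hi = b / n_bins, (b + 1) / n_bins
        in_bin = [(p, y) for p, y in zip(probs, labels)
                  if lo <= p < hi or (b == n_bins - 1 and p == 1.0)]
        if not in_bin:
            continue
        avg_conf = sum(p for p, _ in in_bin) / len(in_bin)
        frac_pos = sum(y for _, y in in_bin) / len(in_bin)
        ece += (len(in_bin) / n) * abs(avg_conf - frac_pos)
    return ece

# Hypothetical model outputs: predicted probabilities and true 0/1 outcomes.
probs  = [0.9, 0.8, 0.7, 0.3, 0.2, 0.95, 0.6, 0.1]
labels = [1,   1,   0,   0,   0,   1,    1,   0  ]
print(f"ECE: {expected_calibration_error(probs, labels):.3f}")

A low score means the model's probabilities can be taken at face value, which is one ingredient of the reliability Garris described.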

“We have to understand why AI systems do what they do; otherwise, humans won’t necessarily trust the robots, the robots won't trust the humans, and the robots won’t trust each other,” Lockheed Martin CTO Keoki Jackson said. “So that’s a really critical area of research that America should be leading in.”

About the Author

Matt Leonard is a reporter/producer at GCN.

Before joining GCN, Leonard worked as a local reporter for The Smithfield Times in southeastern Virginia. In his time there he wrote about town council meetings, local crime and what to do if a beaver dam floods your back yard. Over the last few years, he has spent time at The Commonwealth Times, The Denver Post and WTVR-CBS 6. He is a graduate of Virginia Commonwealth University, where he received the faculty award for print and online journalism.

Leonard can be contacted at mleonard@gcn.com or followed on Twitter @Matt_Lnrd.
