
Agency officials, industry call for transparency in AI

With both the public and private sectors racing to develop artificial intelligence- and machine learning-based tools and applications, some government execs are urging caution.

The Intelligence Advanced Research Projects Activity, which funds research to fill gaps in the field and is also interested in using the technology itself, wants to work with industry to improve it.

“There is a brittleness of most machine learning work that makes it too vulnerable to use,” IARPA Director Jason Matheny said at an Oct. 24 Bloomberg Government event. “There needs to be more collaboration between government and the private sector to make this work.”

One way IARPA is working to further industry innovation is through open challenges.

“Our public prize challenges crowdsource [solutions to] machine-learning problems,” Matheny said. “Right now, we have a Functional Map of the World challenge where we have dozens of groups working to figure out the function of a building with just [satellite] images.”

AI can advance science and technology much the way advances in the health sciences have extended life expectancy.

Tiffany George, senior attorney in the Federal Trade Commission’s Division of Privacy and Identity Protection, urged companies to consider consumer safety implications when developing their products. Data protection and regulatory compliance in the AI realm are two issues that concern George.

Industry also is advocating collaboration with government. The Information Technology Industry Council issued a set of AI policy principles on Oct. 24 to guide future innovation.

The council, which goes by the acronym ITI, supports government funding for research and development projects that use AI, including cyber defense, data analytics, robotics, human augmentation and natural language processing. The guidelines call for more public-private partnerships to expedite R&D and prepare the future workforce.

ITI encourages responsible design and deployment, development of safe and controllable systems, use of robust and unbiased data and creation of a framework for accountability. Cybersecurity and privacy concerns remain, but through cryptography and security standards, government can work with industry to establish trust in AI innovations.

ITI’s report is a call for “broader collaboration” and a “responsibility to make sure that AI is disseminated” in a safe manner, ITI President and CEO Dean Garfield said at the Bloomberg event.

The full report can be found here.

About the Author

Sara Friedman is a reporter/producer for GCN, covering cloud, cybersecurity and a wide range of other public-sector IT topics.

Before joining GCN, Friedman was a reporter for Gambling Compliance, where she covered state issues related to casinos, lotteries and fantasy sports. She has also written for Communications Daily and Washington Internet Daily on state telecom and cloud computing. Friedman is a graduate of Ithaca College, where she studied journalism, politics and international communications.

Follow Friedman on Twitter @SaraEFriedman.

