House dives into artificial intelligence
- By Matt Leonard
- Feb 15, 2018
Legislators are working to get a grip on the thorny issue of artificial intelligence by conducting a series of congressional hearings to guide government understanding and adoption of the technology.
The hearings by the House Oversight and Government Reform Committee's Subcommittee on Information Technology are “an opportunity to leverage technology to make us more efficient,” Rep. Will Hurd (R-Texas) said in a video produced by the committee. “I want to get to a point where we can be making decisions within the government on where to spend dollars or resources based on the analysis of large volumes of data.”
The first session, held Feb. 14, laid the foundation for the later meetings, with the four witnesses discussing the latest advances in the area, AI’s many benefits and its potential pitfalls.
Some AI-based sentencing and facial recognition applications have received criticism because they tend to be less accurate when used with minority populations.
Charles Isbell, a professor and executive associate dean for the College of Computing at Georgia Institute of Technology, said he became aware of this bias when he was a student at the MIT AI Lab in the 1990s where some of the early work in facial recognition was being conducted.
“A good friend of mine came up to me at one point and said I was breaking all of their facial recognition software,” Isbell testified. Because the software was trained using images of light-skinned individuals, it had trouble recognizing an African American. “And so they had to come up with ways around the problem of me and … created better algorithms that didn’t depend upon the assumptions they were making from the data they had,” he said.
All of the witnesses agreed that biased data will result in biased applications. What to do about it, however, led to less consensus.
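None of the witnesses presented code, but the mechanism they agreed on can be illustrated with a hypothetical toy example: a classifier that learns a single decision threshold from its training data. When one group dominates the training set, the learned threshold fits that group and misclassifies the other, much as the early facial recognition systems Isbell described fit the light-skinned faces they were trained on. The data, feature values, and boundary below are invented for illustration only.

```python
# Toy illustration (hypothetical): each sample is (feature value, true label).
# For group A the correct decision boundary is 0.5; for group B it is 0.7,
# a stand-in for the way differing image statistics shifted the features
# early recognition systems relied on.
group_a = [(0.3, 0), (0.4, 0), (0.6, 1), (0.7, 1)] * 10  # 40 samples
group_b = [(0.5, 0), (0.6, 0), (0.8, 1), (0.9, 1)]       # only 4 samples

def learn_threshold(samples):
    """Fit the midpoint between the mean feature value of each class."""
    neg = [x for x, y in samples if y == 0]
    pos = [x for x, y in samples if y == 1]
    return (sum(neg) / len(neg) + sum(pos) / len(pos)) / 2

def accuracy(threshold, samples):
    """Fraction of samples the learned threshold classifies correctly."""
    return sum((x > threshold) == bool(y) for x, y in samples) / len(samples)

# The training set is about 91 percent group A, so the threshold lands
# near group A's boundary (roughly 0.52) and errs on group B.
threshold = learn_threshold(group_a + group_b)
print(f"learned threshold: {threshold:.3f}")   # ~0.518
print(f"group A accuracy: {accuracy(threshold, group_a):.2f}")  # 1.00
print(f"group B accuracy: {accuracy(threshold, group_b):.2f}")  # 0.75
```

The fix Isbell's MIT colleagues arrived at, by his account, was the same one this sketch implies: either rebalance the data or stop depending on assumptions that only hold for the majority group.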
Creating transparency for these algorithms is one option, Isbell said. People should be able to know what led a program to make a particular decision.
Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence, disagreed with this take, suggesting that accuracy may be more important than openness. If one medical application is open and 80 percent accurate, but another is closed and 99 percent accurate, he asked, which would you rather use for medical care?
There was also disagreement over whether consumers should be aware that products they buy use AI. Etzioni said such labeling would make sense, but Amir Khosrowshahi, the vice president and chief technology officer at Intel, disagreed, warning against excessive regulation.
“I would just be wary of unnecessary regulation or imposing regulation on a very young and rapidly moving field because I can see it could have some adverse consequences,” Khosrowshahi said before throwing Etzioni’s hypothetical question back his way: “Would you want something that is labeled and worse performing or unlabeled and better performing?”
The upcoming March hearing in the series will address the government’s use of AI, but witnesses at this hearing discussed what government can do to advance AI and how it can take advantage of the technology.
Ian Buck, the vice president and general manager of accelerated computing at NVIDIA, said funding for AI research and agency adoption of the technology will be important, but the government can also play a major role in improving the technology.
“Data is the fuel that drives the AI engine,” Buck said. “The federal government has access to vast sources of information.”
Opening up datasets will give researchers more material to train more applications, he said. He pointed to the OPEN Government Data Act, which passed the House and Senate last year with bipartisan support, as a good starting point.
As for agency adoption, Buck suggested meaningful, simple pilots like the Defense Department's Project Maven, which is using AI to process reconnaissance video and images from drones so people don’t have to stare at monitors for eight hours a day.
To help organizations get started with AI, there are open source resources and education materials, especially for image recognition, which has been around for years now, he said.
It’s at the point where AI is simply becoming another tool in the technology toolbox, Isbell said.
The problems arise when AI contributes to decision-making processes. The biggest challenge, Isbell said, will be understanding how humans do their jobs and clarifying where computers can help.
Matt Leonard is a reporter/producer at GCN.
Before joining GCN, Leonard worked as a local reporter for The Smithfield Times in southeastern Virginia. In his time there he wrote about town council meetings, local crime and what to do if a beaver dam floods your back yard. Over the last few years, he has spent time at The Commonwealth Times, The Denver Post and WTVR-CBS 6. He is a graduate of Virginia Commonwealth University, where he received the faculty award for print and online journalism.
Leonard can be contacted at firstname.lastname@example.org or followed on Twitter at @Matt_Lnrd.