IRS explores AI applications
- By Sara Friedman
- Oct 12, 2017
During the 2016 tax season, the IRS received more than 75 million phone calls and 500 million website visits from taxpayers. To make it easier for the tax agency to answer questions and deliver services, it is exploring the use of chatbots and virtual assistants.
“We are in the exploratory phase of testing and hopefully deploying chatbots to facilitate a better web experience for our taxpayer base,” Jeff Butler, director of data management at the IRS, told GCN after a panel at the Oct. 11 ACT-IAC Artificial Intelligence Forum. “Chatbots are the first generation, and the next iteration may be intelligent voice agents like Alexa or Cortana.”
The IRS currently uses a phone tree to guide callers to specific resources, but that approach has limitations.
“Phone trees are essentially automated voice systems where you start with a node and go down an expanded tree -- but you can’t reverse,” Butler said. “Chatbots or voice agents give the taxpayer the ability to have a more flexible [experience] that provides better customer support.”
The IRS is also exploring using chatbots internally to help with speech translation.
As part of trade agreements, the IRS gets “a lot of information” from its sister agencies in Germany, Italy, Spain and Japan, and the agency wants a cost-effective way to translate the information from tax return attachments and other documents, Butler said. The agency is looking into “comprehensive uses” of speech recognition software that provides language translation.
Efforts are also underway to use machine learning to improve agency business practices related to identity, refunds and fraud. Butler sees the biggest potential value from AI for managing the effects of identity theft.
When an identity is stolen, the current processes are “manual and human intensive,” he said. AI and data science can be used to make the “process easier for households or victims to get through and move on with their lives.”
However, Jeff Alstott, program manager at the Intelligence Advanced Research Projects Activity, cautioned agency officials to consider how increasing the volume of data could affect the results of AI and machine learning applications. Alstott used the example of the board game Go and AlphaGo, the Go-playing computer program developed by Google DeepMind in 2015.
“When you increase the size of the board, human players don’t care because they implicitly understand it and don’t depend on the size of the board,” Alstott said. AlphaGo, on the other hand, “doesn’t do very well because it was trained on a Go board of a particular size.”
Sara Friedman is a reporter/producer for GCN, covering cloud, cybersecurity and a wide range of other public-sector IT topics.
Before joining GCN, Friedman was a reporter for Gambling Compliance, where she covered state issues related to casinos, lotteries and fantasy sports. She has also written for Communications Daily and Washington Internet Daily on state telecom and cloud computing. Friedman is a graduate of Ithaca College, where she studied journalism, politics and international communications.
Friedman can be contacted at firstname.lastname@example.org or follow her on Twitter @SaraEFriedman.