Intelligent virtual assistants that tap human intelligence bring additional insight to a citizen's query and improve the overall experience.
It’s no secret that government is hot on artificial intelligence as a tool to improve the citizen experience – especially given the president’s recent signing of the American AI Initiative, which focuses more federal resources on AI development to “improve the quality of life for the American people.”
Already, government has invested in AI-enabled technologies like chatbots and automated voice systems in an effort to transform digital citizen services for the better – enabling citizens to complete self-service transactions like secure authentication, registration and enrollment, checking on appeal status and benefits eligibility, or verifying and making payments by phone, text or web.
There are some pioneering examples, like Emma, the chatbot built by U.S. Citizenship and Immigration Services, which has processed about 10.5 million requests from 3.3 million unique visitors, typed in both English and Spanish. With an 89% success rate answering questions posed in Spanish and a 91% success rate for English, Emma has been touted as a highly successful federal use case leveraging AI to deliver better self-service capabilities to citizens.
Where intelligent systems fall short
Despite the headway that government is making in applying AI, many agencies are still encountering challenges, especially around citizen services. This is largely because not all AI technologies are the same.
Many of the solutions in use have been instrumental in keeping citizens engaged and answering simple FAQs, but they are often incapable of understanding a citizen's true intent. When asked more complex questions, these technologies can be stumped, making it impossible to move the caller along to the next appropriate channel without the help of a live agent. The key problem is that most of these solutions are built on speech recognition technology and natural language processing (NLP), which can often fail to understand what a citizen is actually asking.
Finding the sweet spot between AI and human intelligence
Agencies need a citizen experience solution at the optimal intersection of AI and human intelligence, where the technology recognizes not just commands but also the caller's or texter's intent. This is where an intelligent virtual assistant can help. By tapping human intelligence, an IVA brings another source of insight to better understand the intent of the call and improve the citizen experience.
When the IVA’s speech recognition stack returns a low confidence score and gets tripped up, it sends the brief bit of audio to a human analyst, who quickly directs the IVA on the correct action to take.
For example, citizens looking to obtain passports may encounter a chatbot that asks how many people are applying for a passport. If the traveler responds, "My wife and I need new passports,” an NLP-only or speech recognition-based system would not understand that the caller means two people. The bot would likely respond with something like, “I do not understand your response,” frustrating the citizen and requiring the call to be transferred to a live agent. An IVA, however, uses both NLP and human intelligence: the human interprets the context of the response and simply directs the IVA to account for two people.
This all happens instantaneously, behind the scenes, without the citizen being aware that the IVA even needed human assistance – making the interaction appear seamless. The IVA works in concert with a human-intelligence stack to fill in information anytime it doesn’t understand a response, enabling a true self-service experience.
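The confidence-based escalation described above can be sketched in a few lines of code. This is a minimal, hypothetical illustration, not any vendor's actual implementation: the function names (`transcribe`, `ask_human_analyst`, `parse_intent`), the 0.80 threshold and the intent labels are all assumptions made for the example.

```python
# Hypothetical sketch of an IVA's human-in-the-loop fallback.
# All names and values here are illustrative assumptions, not a real API.
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.80  # assumed cutoff; below this, escalate to a human


@dataclass
class Recognition:
    text: str          # best-guess transcription from the speech stack
    confidence: float  # recognition confidence score, 0.0 to 1.0


def transcribe(audio_snippet: bytes) -> Recognition:
    """Stand-in for the speech recognition / NLP stack."""
    # Simulate the passport example: the system hears the words but is
    # not confident it understood the caller's intent.
    return Recognition(text="my wife and i need new passports", confidence=0.42)


def parse_intent(text: str) -> str:
    """Stand-in for automated NLP intent parsing."""
    return "unknown"


def ask_human_analyst(audio_snippet: bytes) -> str:
    """Stand-in for routing the brief audio clip to a human analyst,
    who listens and returns the correct intent for the IVA to act on."""
    return "party_size:2"


def resolve_intent(audio_snippet: bytes) -> str:
    """Return an intent label, escalating to a human when confidence is low."""
    result = transcribe(audio_snippet)
    if result.confidence >= CONFIDENCE_THRESHOLD:
        return parse_intent(result.text)
    # Low confidence: hand the snippet to a human, invisibly to the caller,
    # and use the analyst's answer as if the IVA had understood on its own.
    return ask_human_analyst(audio_snippet)


print(resolve_intent(b"...audio..."))  # confidence 0.42 < 0.80, so escalates
```

The key design point the article describes is that the escalation path is synchronous and invisible to the caller: the IVA blocks briefly on the human's answer and then continues the dialogue as if recognition had never failed.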
The human interaction also enables machine learning so that the IVA can adjust its algorithms for future interactions, something that takes considerably more time, money and technical resources on systems that only use a single source of recognition, like speech recognition or NLP.
How agencies can get started with IVA
Agencies can easily add an IVA to their citizen engagement centers with a quick proof-of-concept. This process allows agency leaders to complete a robust test of the IVA without having to overhaul their infrastructure. At the very least, it can deliver valuable data and insights on what individuals are asking, gathering the true “voice of the citizen” and showing how agencies can improve their processes and refine the training of their live agents. By prioritizing the data and getting analytics through a test-and-learn approach, agencies can decide either to expand the IVA’s role in providing more self-service to improve the citizen experience in general or identify where to make future investments.
One of the advantages of trying out the IVA with a proof-of-concept is that it easily integrates with production telephony infrastructure, so scaling it up to full production to handle all of the agency’s calls is as simple as redirecting them to the IVA. Gone are the days of having to make an additional investment and perform an intensive implementation to take a proof-of-concept to a production-ready solution.
Ultimately, an IVA sits at the crossroads of AI and human intelligence, enabling more seamless conversations through voice or text. By implementing an IVA, agencies can offer citizens more choices and true self-service while gaining better data analytics, insights and consistency across all channels in delivering an improved citizen journey.