Can tech prevent cultural misunderstandings?
- By Susan Miller
- Apr 30, 2021
To avoid the kind of issues that can arise from misunderstanding local dialects and social practices, the Defense Advanced Research Projects Agency is looking for help building natural language processing technologies that will help the military better communicate across languages and societies.
DARPA’s Computational Cultural Understanding (CCU) program aims to create cross-cultural language understanding technologies to improve the situational awareness and interactional effectiveness of Defense Department personnel.
Today’s language translation applications are inadequate for cross-cultural communication assistance, DARPA said in its April 29 presolicitation. The agency is looking for solutions that perform as well as human interpreters, whose cultural insights are often critical to the translation process. CCU research will deliver the foundational technical innovations negotiators and analysts need for cross-cultural dialogue in the field.
CCU has three technical areas. The first focuses on building foundational knowledge from conversational standards:
- Automatic discovery of the sociocultural norms that influence conversations from unlabeled discourse data in six languages.
- Universal emotion recognition from language, facial expressions and tone of voice so insights can be applied across cultures without the use of training data.
- Identification of shifts in emotional expressions and tone that are indicative of miscommunication.
Building on that communication-norm analysis, researchers are also asked to develop dialogue assistance services that help prevent or remediate communication problems. These tools would follow ongoing conversations, detect misunderstandings and discord in real time from speech and facial expressions, and suggest socially appropriate fixes. The services must be able to infer sociocultural settings from language and images, revise operator utterances to improve effectiveness and incorporate culture-independent techniques that generalize to approximately six languages.
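As a rough illustration of the kind of real-time monitoring the dialogue assistance services would perform, the toy sketch below flags sharp negative tone shifts between conversational turns. The lexicon-based scoring, the threshold and all function names are hypothetical stand-ins for illustration only; DARPA's solicitation does not specify any particular method.

```python
# Hypothetical sketch: flagging abrupt tone shifts in an ongoing conversation.
# The valence lexicon and threshold are illustrative, not DARPA's approach.

def tone_score(utterance, lexicon):
    """Crude valence score: sum of per-word weights from a tiny lexicon."""
    return sum(lexicon.get(w, 0) for w in utterance.lower().split())

def detect_shifts(utterances, lexicon, threshold=2):
    """Return indices of turns where tone drops sharply versus the
    previous turn, a possible sign of miscommunication to flag."""
    flags = []
    prev = None
    for i, u in enumerate(utterances):
        score = tone_score(u, lexicon)
        if prev is not None and prev - score >= threshold:
            flags.append(i)
        prev = score
    return flags

# Toy lexicon and conversation
LEXICON = {"thanks": 1, "agree": 1, "good": 1, "insult": -1, "refuse": -1}
turns = [
    "good we agree on the terms",
    "thanks that sounds good",
    "i refuse this is an insult",  # sharp negative shift
]
print(detect_shifts(turns, LEXICON))  # → [2]
```

A fielded system would of course fuse speech, facial-expression and cultural-context signals rather than word counts, but the pattern of comparing successive turns against a baseline is the same.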
A third technical area involves collection and labeling of a portion of the text, audio and video conversations to be used for the analysis and algorithm development described in the first two tasks. The annotated conversations should indicate the sociocultural norms and emotional states of the participants, including the strength of the emotions expressed.
Generally, DARPA expects CCU technologies will require minimal-to-no training data in a local culture to understand language and behaviors in context. Rather, CCU will leverage psychology, sociology and other relevant disciplines, along with minimally supervised machine learning, to give operators the best chance of success during negotiations and other interactions in the field.
Researchers can use smart glasses to collect audio and visual observations of conversation participants. Their conversation assistance algorithms will be built into laptop and smartphone form factors for demonstration purposes.
Prototypes will be assessed against gold-standard human interpreters in operationally relevant negotiation scenarios common in the military.
Responses are due June 16.
Susan Miller is executive editor at GCN.
Over a career spent in tech media, Miller has worked in editorial, print production and online, starting on the copy desk at IDG’s ComputerWorld, moving to print production for Federal Computer Week and later helping launch websites and email newsletter delivery for FCW. After a turn at Virginia’s Center for Innovative Technology, where she worked to promote technology-based economic development, she rejoined what was to become 1105 Media in 2004, eventually managing content and production for all the company's government-focused websites. Miller shifted back to editorial in 2012, when she began working with GCN.
Miller has a BA and MA from West Chester University and did Ph.D. work in English at the University of Delaware.
Connect with Susan at [email protected] or @sjaymiller.