Making AI-based language learning easy as child's play
- By Susan Miller
- Mar 29, 2019
Although language recognition software in call centers, translation apps and personal assistants has radically improved over the last decade, the machine learning algorithms used to train these systems suffer from a lack of annotated training data, which limits the technology's accuracy and applicability. The technology is also "brittle," meaning it often has no way to handle new data sources, topics and vocabulary.
To overcome this limitation, the Defense Advanced Research Projects Agency wants to build an automated language acquisition system that learns language the way children do -- extracting meaning from hearing sounds while observing the environment.
The Grounded Artificial Intelligence Language Acquisition (GAILA) program aims to develop a prototype that can associate text or spoken input with images, video or virtual visual scenes of previously unseen entities and actions, and then produce English descriptions of events and relationships. A system that sees a black table, a white table and a black chair, for example, should be able to describe a previously unseen object as a white chair, DARPA said in its solicitation.
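The white-chair example above is a case of compositional generalization: recombining an attribute and an object category that were each grounded separately. A minimal toy sketch of the idea (all function names and data here are invented for illustration, not part of GAILA):

```python
# Toy sketch of compositional generalization: a learner that has grounded
# "white" and "black" as attributes, and "table" and "chair" as object
# categories, can describe a novel attribute-object pairing it has never
# observed directly. Hypothetical illustration only.

def learn_vocabulary(observations):
    """Split (attribute, noun) observations into separate vocabularies."""
    attributes = {attr for attr, _ in observations}
    nouns = {noun for _, noun in observations}
    return attributes, nouns

def describe(attr, noun, attributes, nouns):
    """Name an object if both parts were grounded, even when this exact
    combination was never seen during learning."""
    if attr in attributes and noun in nouns:
        return f"{attr} {noun}"
    return "unknown object"

seen = [("black", "table"), ("white", "table"), ("black", "chair")]
attributes, nouns = learn_vocabulary(seen)
print(describe("white", "chair", attributes, nouns))  # prints "white chair"
```

Note that ("white", "chair") never appears in the observations; the description is produced by composing the two independently grounded parts.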
Similar to how children learn about variations of word forms, the GAILA software would learn to describe events or actions (verbs), the entities that participate in those events (nouns) and the relationships among those entities and events (adjectives and phrases). It would also be able to distinguish between "pushing" and "throwing" and know that "rolling" can only apply to certain objects. It would understand that objects have functions (chairs are for sitting) and capabilities (containers hold things) and grasp indefinite and imprecise concepts such as near, tall or heavy.
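The knowledge sketched above, that "rolling" applies only to certain objects and that objects have functions and capabilities, is often modeled as affordances or selectional restrictions. A minimal sketch with invented data (not GAILA's actual representation):

```python
# Minimal sketch of affordances / selectional restrictions: each object
# lists the actions it supports. Objects and action names are invented
# for illustration.

AFFORDANCES = {
    "ball":      {"roll", "throw", "push"},
    "chair":     {"push", "sit_on"},
    "container": {"push", "hold_things"},
}

def is_plausible(verb, obj):
    """A verb-object pairing is plausible only if the object affords it."""
    return verb in AFFORDANCES.get(obj, set())

print(is_plausible("roll", "ball"))   # prints True
print(is_plausible("roll", "chair"))  # prints False
```

A learner that acquires such a table from observation can both rule out implausible descriptions ("the chair rolled") and infer object functions ("chairs are for sitting") from the actions it has seen objects participate in.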
DARPA sees several options for conducting this research. A prototype learning platform could leverage a 3D vision system, work in a custom-built virtual world, observe annotated movies and TV shows or tap into the Situations with Adversarial Generations dataset used to evaluate commonsense natural language inference tools.
Awards for the two-phase, 18-month project are limited to $500,000 for each phase.
GAILA is part of DARPA's Artificial Intelligence Exploration program, which researches and develops "third wave" AI theory and applications that make it possible for machines to contextually adapt to changing situations.
Responses are due April 26. Read the full solicitation here.
Susan Miller is executive editor at GCN.
Over a career spent in tech media, Miller has worked in editorial, print production and online, starting on the copy desk at IDG’s ComputerWorld, moving to print production for Federal Computer Week and later helping launch websites and email newsletter delivery for FCW. After a turn at Virginia’s Center for Innovative Technology, where she worked to promote technology-based economic development, she rejoined what was to become 1105 Media in 2004, eventually managing content and production for all the company's government-focused websites. Miller shifted back to editorial in 2012, when she began working with GCN.
Miller has a BA and MA from West Chester University and did Ph.D. work in English at the University of Delaware.
Connect with Susan at firstname.lastname@example.org or @sjaymiller.