Using AVs to tell friend from foe
In contested or occupied urban areas, deciding whether a person is waiting for a bus or scoping out a potential target often comes down to intuition.
Current artificial intelligence technologies, such as machine vision, may be able to pull clues from the complexity and ambiguity of city neighborhoods, but so far, they are unable to distinguish between threats and non-threats based solely on passive observation.
One way to elicit an indication of a threat is to stage an interaction and analyze the response. A guard might, for example, stop and question a person loitering near an installation to assess his intent. In some situations, however, encounters between military personnel and locals escalate existing tensions and work against the overall mission.
An autonomous vehicle (AV), on the other hand, could block the loiterer’s view and note whether he moved to a new location to continue his observations or stayed seated, waiting for the bus. Such strategies can give commanders the direct evidence they need to make decisions.
To further that idea, the Defense Advanced Research Projects Agency is looking for ways AVs can make it easier for commanders to detect and track threats among civilians in complex urban environments without escalating tensions.
DARPA’s Non-Escalatory Engagement to reduce Dimensionality (NEED) program aims to build a library of engagements, or scenarios, in which autonomous aerial or ground vehicles interact with urban residents to test whether an individual or group poses a threat.
DARPA initially wants NEED applicants to describe 10 engagements that may indicate AV-detectable threats such as setting an ambush, stoking violence, smuggling or planting explosives. Descriptions should explain how each engagement could generate specific evidence of a threat.
For each engagement submitted, DARPA wants performers to spell out the action’s purpose, type of autonomous vehicle involved, estimated level of escalation, expected responses and how those responses would be detected by AVs equipped with perception technologies. Those tools include cross-sensor re-identification (where a person can be re-identified by several different sensors), entity tracking, pose detection and processing of simple observed behaviors such as running or standing as well as compliance with simple requests, such as a request to turn around.
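The fields DARPA asks performers to spell out for each engagement can be pictured as a simple structured record. The following Python sketch is purely illustrative — the class, field names, and escalation scale are hypothetical and not part of the DARPA solicitation:

```python
from dataclasses import dataclass
from enum import Enum


class VehicleType(Enum):
    """Autonomous vehicle categories named in the program description."""
    AERIAL = "aerial"
    GROUND = "ground"


@dataclass
class Engagement:
    """One candidate engagement, mirroring the fields DARPA wants
    described. All names here are illustrative assumptions."""
    purpose: str                    # what threat the action is meant to probe
    vehicle: VehicleType            # type of autonomous vehicle involved
    escalation_level: int           # estimated escalation, e.g. 0 (none) to 3 (high)
    expected_responses: list        # behaviors a threat vs. non-threat might exhibit
    detection_methods: list         # perception tools that would observe the response


# Example: the view-blocking engagement described earlier in the article.
block_view = Engagement(
    purpose="test whether a loiterer is observing an installation",
    vehicle=VehicleType.GROUND,
    escalation_level=0,
    expected_responses=["relocates to regain line of sight", "stays seated"],
    detection_methods=["entity tracking", "cross-sensor re-identification"],
)
```

A library of such records would let evaluators compare engagements by estimated escalation and expected evidentiary value before testing them.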
The engagements will be tested at the Urban Reconnaissance through Supervised Autonomy testbed, where DARPA experiments with sensors, artificial intelligence, drones and human psychology to better protect troops with technologies that can distinguish between threats and noncombatants. There, commanders will evaluate whether the engagements between perception-enabled AVs and urban residents yield improved intelligence, surveillance and reconnaissance.
Rather than set an arbitrary target threshold for accuracy, DARPA said NEED solutions will be compared against baseline conditions in which humans are either entirely unaided, or aided only by passive machine detection.
More information on the opportunity can be found here, with more detailed discussion available on Polyplexus.
Connect with the GCN staff on Twitter @GCNtech.