Leveraging the wisdom (and ignorance) of crowds
- By Susan Miller
- Feb 14, 2017
To improve intelligence analysts' and decision makers' understanding of the evidence and assumptions that support -- or conflict with -- their conclusions, the Intelligence Advanced Research Projects Activity announced funding to develop and test large-scale, structured collaboration methods to improve reasoning.
The Crowdsourcing Evidence, Argumentation, Thinking and Evaluation (CREATE) program will improve analysts’ ability to provide accurate, timely and well-supported analyses of complex, ambiguous and often novel problems facing the intelligence community, IARPA said.
Besides marshalling facts and evidence, CREATE aims to give analysts a clearer understanding of conflicting evidence, knowledge gaps and degrees of uncertainty. CREATE systems aim to help analysts explain to decision makers why judgments were made, why seemingly plausible alternatives were rejected and where the major gaps in what is known remain.
“CREATE will combine crowdsourcing with structured techniques to improve reasoning on complex analytic issues,” said IARPA Program Manager Steven Rieber. “The resulting technology will be valuable not just to intelligence analysis but also to science, law and policy -- in fact, to any domain where people must think their way through complex questions.”
Through a competitive broad agency announcement, IARPA awarded CREATE contracts to develop and test structured crowdsourcing platforms. Projects that received funding include:
SWARM -- Smartly-assembled Wiki-style Argument Marshalling. The University of Melbourne aims to develop a cloud-based platform that uses algorithms to create a statistical summary of a participant’s reasoning strengths and biases. This information could then be used to improve the collective outputs.
TRACE -- Trackable Reasoning and Analysis for Collaboration and Evaluation. Syracuse University will develop a web-based application that uses crowdsourcing to overcome common shortcomings in intelligence work by improving the division of labor and reducing both the systematic and random errors individuals may generate while promoting communication and interaction among teams.
Co-Arg -- Cogent Argumentation System with Crowd Elicitation. George Mason University is developing a software-based cognitive assistant for intelligence analysts that tests hypotheses, evaluates evidence, sorts facts from deception and provides intelligent reasoning about quickly evolving situations. It uses a web-based system called “Argupedia” that lets a lead analyst ask other experts to weigh in on small aspects of a hypothesis. Their arguments are assembled and weighted for relevance by the software.
Awards were also given to Monash University, the University of Melbourne, Johns Hopkins University Applied Physics Lab and Good Judgment, Inc.
Susan Miller is executive editor at GCN.
Over a career spent in tech media, Miller has worked in editorial, print production and online, starting on the copy desk at IDG’s ComputerWorld, moving to print production for Federal Computer Week and later helping launch websites and email newsletter delivery for FCW. After a turn at Virginia’s Center for Innovative Technology, where she worked to promote technology-based economic development, she rejoined what was to become 1105 Media in 2004, eventually managing content and production for all the company's government-focused websites. Miller shifted back to editorial in 2012, when she began working with GCN.
Miller has a BA and MA from West Chester University and did Ph.D. work in English at the University of Delaware.
Connect with Susan at email@example.com or @sjaymiller.