NASA: Prize money a bargain for better software
In October 2010, NASA and the Harvard Business School launched the NASA Tournament Lab, an online platform for contests between independent programmers who compete to create software and algorithms and solve computational problems.
Both NASA scientists and Harvard academics have an interest in the subject of open innovation, the use of ad hoc groups such as open-source software communities or wikis to produce solutions outside of traditional organizations.
NASA’s interest is practical. “We’re always looking at ways to fill gaps in our technical capabilities,” said Jason Crusan, chief technologist for NASA’s human exploration operations. NASA has experimented with crowdsourcing and other development techniques, and the Tournament Lab is the latest in these efforts.
Harvard’s interest is academic. “My research is focused on how innovation happens outside of the formal organization,” said Karim Lakhani, assistant professor of business administration at Harvard. His work is plagued by a lack of real-world data on the comparative merits of different models of open innovation.
NASA researchers with complex computational problems now can use the Tournament Lab to order up a program or an algorithm for a modest amount of prize money. NASA gets operational software at bargain prices; Harvard gets real-world data for academic studies on how collaborative and competitive communities work. The programmers get real-world experience, street cred and some cash.
The idea of commercial, competitive software development is not new. NASA’s Tournament Lab is hosted by TopCoder, an online company that brings together customers with problems to solve with a virtual community of more than 320,000 programmers around the world who compete to solve these problems for cash prizes. The NASA lab draws on these resources but is designed so that its competitions can be structured to provide academic results as well as software. So far, the results have been encouraging.
“We didn’t think we would have as high a success rate as we’ve had,” Crusan said. “There are a lot of smart people in the world.”
On your marks, get set, code
The first challenge presented in the lab was developing an algorithm to optimize the contents of the medical kits that accompany astronauts on missions. It might sound like a trivial exercise, but there are a lot of variables involved and the stakes are high. The kit contains the only medical resources available in space, and its mass and volume are strictly limited. The contents must also account for both expected and unexpected problems and reflect the specific requirements of each mission and crew, long-term and short-term.
The challenge was to develop an algorithm that addressed all of these issues, trading off the mass and volume of each item while ensuring sufficient resources to minimize the likelihood that a medical problem would terminate a mission.
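The trade-off described above has the general shape of a constrained knapsack problem: choose the set of supplies that buys the most medical coverage without exceeding the kit's mass and volume budgets. The sketch below is only an illustration of that general shape; the items, weights, limits and "risk points" are invented, and this is not NASA's actual formulation, which also modeled mission-specific probabilities.

```python
from itertools import combinations

# Hypothetical medical-kit items: (name, mass_kg, volume_L, risk_points).
# "risk_points" stands in for how much an item reduces the chance that a
# medical event terminates the mission -- all values here are invented.
ITEMS = [
    ("bandages",      0.3, 0.5, 4),
    ("antibiotics",   0.2, 0.1, 7),
    ("splint",        0.8, 1.2, 3),
    ("defibrillator", 2.5, 3.0, 9),
    ("analgesics",    0.1, 0.1, 5),
    ("IV kit",        1.0, 1.5, 6),
]

def best_kit(items, max_mass, max_volume):
    """Exhaustively pick the subset maximizing risk coverage while
    staying under the mass and volume limits (fine for small lists)."""
    best, best_score = (), -1
    for r in range(len(items) + 1):
        for subset in combinations(items, r):
            mass = sum(i[1] for i in subset)
            vol = sum(i[2] for i in subset)
            score = sum(i[3] for i in subset)
            if mass <= max_mass and vol <= max_volume and score > best_score:
                best, best_score = subset, score
    return [i[0] for i in best], best_score

# With a 2.0 kg / 2.5 L budget, the heavy defibrillator is squeezed out
# in favor of several lighter, higher-coverage items.
kit, score = best_kit(ITEMS, max_mass=2.0, max_volume=2.5)
```

A real solver would replace the brute-force search with dynamic programming or integer programming, since the number of subsets grows exponentially with the number of candidate items.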
NASA already had an algorithm but wanted a more efficient one. Harvard put up the prize money, NASA wrote the specifications, and 516 coders competed in groups of about 20. Individuals within each group did not collaborate but competed against one another. A total of $1,000 in prize money was awarded to the top five performers in each group: $500 for first place, $200 for second, $125 for third, $100 for fourth and $75 for fifth.
The contest produced 549 submissions over two weeks. The best submission was more effective than NASA’s previous algorithm by a factor of three, and NASA is now using it. “We got very high quality ideas in a short period of time,” Crusan said.
The program has been successful enough that TopCoder recently announced the first NASA Tournament Lab competition sponsored by another agency. The Patent and Trademark Office is offering $50,000 in prizes for algorithms to help digitize an archive of 7 million patents by recognizing and classifying images from patent documents. The contest was scheduled to run from Dec. 16 through Jan. 16, and as an added incentive each competitor will receive a limited-edition NASA Tournament Lab T-shirt.
The competitive coding market
The competitive model of coding, although it falls under the same rubric of “open innovation,” is in many ways the antithesis of the open software model. Where open software communities are collaborative and donate their efforts, competitions primarily are profit-driven. The idea of academic and creative contests with cash prizes is not new, Lakhani said. Prizes offered in aeronautics and aerospace have spurred developments in transatlantic flight and private spacecraft over the last century. But in the last decade the idea of competitive coding has been commercialized and a market is developing for it.
TopCoder expects to post competitions with $7 million in prize money this year.
“We have contests for everything, from coming up with an idea or a conceptualization to the substantive design and development process,” said TopCoder president Robert Hughes. The company acts as a middleman, providing its community of independent coders with a forum for work and charging customers for access to the community. Prizes range from $25 for minor updates and documentation to as much as $10,000 for “marathon matches,” such as those held for NASA to develop new programs or algorithms that significantly advance current capabilities.
Although the prizes typically are modest, coders who consistently win can make good money. “One guy won $1 million in seven years,” Hughes said. “He has become something of a rock star in the community.”
The community skews toward grad students seeking real-world experience as well as money, Hughes said, although a wide range of experience is represented. It is an international group, with coders from the Philippines and Southeast Asia tending to dominate graphics work and Russians component and back-end design. The challenge for TopCoder is to make its model not only attractive to competent coders but also credible to the customers who will use the code.
“It’s required to be completely transparent,” Hughes said of the development process. Competitors document their work, and everything is available for evaluation, both automated and peer reviewed. Work typically is triaged with an automated assessment upon submission to find obvious weaknesses. Finalists usually are evaluated by peer reviewers. Contestants also are ranked according to their participation and performance.
“The reviewers are selected by us from a pool of trusted members” whose work has demonstrated their reliability, Hughes said. The customer also can evaluate results before awarding a prize. Intellectual property rights for winning submissions go to the customer.
Other innovation models
Competition is not the only innovation model NASA has experimented with. The agency turned to crowdsourcing for a program to identify, characterize and count lunar craters in NASA images. The approach worked well on a set of 200,000 images, Crusan said.
“The problem is, we have over 2 billion images and growing every day,” he said. Tackling that problem required an algorithm to detect and analyze craters. A challenge in the Tournament Lab produced 310 submissions in two weeks. “Now we have a solution that is returning about 75 percent accuracy on crater identification. We’re using it as a starting point to accelerate our algorithm development.”
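An accuracy figure like the 75 percent quoted above implies scoring an algorithm's detections against human-labeled craters. The snippet below sketches one simple, recall-style way such a score could be computed; the matching tolerance, coordinates and metric are invented for illustration and may differ from NASA's actual evaluation.

```python
import math

def match_rate(detected, labeled, tolerance=5.0):
    """Fraction of human-labeled crater centers that have a detected
    crater within `tolerance` pixels -- a simple recall-style score."""
    hits = 0
    for lx, ly in labeled:
        if any(math.hypot(lx - dx, ly - dy) <= tolerance
               for dx, dy in detected):
            hits += 1
    return hits / len(labeled)

# Toy data: four hand-labeled craters, three algorithmic detections.
labeled = [(10, 10), (40, 42), (80, 15), (60, 70)]   # ground truth
detected = [(12, 11), (39, 40), (90, 90)]            # algorithm output

accuracy = match_rate(detected, labeled)   # 2 of 4 labels matched
```

A production metric would typically also penalize false positives (detections matching no label) and check the estimated crater radius, not just the center position.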
Another image detection challenge produced nearly 550 submissions for analyzing satellite imagery of terrestrial pipelines. “We’ve had a high level of success in every challenge we’ve held,” Crusan said. NASA has plans to take the lab to the next level. “Now we’re getting into how to write whole applications,” such as a portable EKG app for a tablet computer to be used in orbit.
The Tournament Lab so far appears to be producing high-quality results in short periods of time at a low cost, but “we can’t empirically say that yet,” Crusan said, because the data for comparison still is being gathered. “We’re still in the learning phase.”
But some information is beginning to emerge. Lakhani has managed to resolve one paradox in competitive-coding theory: Economic theory predicts that small competitions will produce better results, because each contestant has a greater chance of winning in a small group and so makes a greater effort. Behavioral theory, by contrast, predicts that a larger group of contestants, spurred by stiffer competition, is more likely to produce a favorable outcome.
It turns out the economic theory is right, at least most of the time, Lakhani said. In larger groups of competitors, individual performance drops off rapidly. But for complex problems requiring expertise in multiple knowledge domains, large groups seem to work better.
Whatever the final results of the pilot, the NASA Tournament Lab is working as intended. “We deliver software to NASA that is operational, and we as academics can do our research,” Lakhani said.
William Jackson is a senior writer for GCN and the author of the CyberEye blog.