DARPA plots supercomputing revolution

DARPA program aims to kickstart revolutionary high-performance technologies

A new research program aims to develop a generation of power-efficient, space-saving supercomputers. The Defense Advanced Research Projects Agency has launched its Omnipresent High Performance Computing program, which seeks breakthrough technologies in hardware, software, scalable input/output systems, programming models and low-power circuits.

The goal of this and other related computing research efforts is to create new, compact supercomputers to support the Defense Department’s growing need for applications and processing capability. Such systems could rapidly manage and interpret the massive streams of sensor data generated by next-generation unmanned and manned platforms. These new computers could potentially be installed in individual vehicles or command centers to provide sensor fusion and analysis, vastly shortening the reaction and decision times of U.S. forces.


In its broad agency announcement issued June 21, DARPA stated that "current evolutionary approaches to progress in computer designs are inadequate." The agency said it wants to develop technologies that reduce the power requirements of high-performance computers, including their memory and storage hierarchies; create highly programmable systems that reduce operational complexity; and improve system dependability, manage component failure rates, and address security issues, including methods for sharing information and responsibility among the operating system, runtime system and applications.

The program also will research self-aware system software. This includes operating systems, runtime systems, I/O systems, system management and administration, resource management and external environments. DARPA also wants to study programming models that allow developers to more easily design in security, dependability, power efficiency and high performance.
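
The announcement does not spell out what such a programming model would look like. Purely as an illustration, the Python sketch below shows one way non-functional goals such as a power budget or a redundancy level could be declared alongside the code and left to a runtime to enforce; the decorator, the ToyRuntime class and the numbers are hypothetical and are not drawn from the DARPA announcement.

```python
# Hypothetical sketch: a programming model in which non-functional goals
# (power, dependability) are declared with the code and honored by the
# runtime. All names and policies here are illustrative assumptions.
import functools


def goals(power_budget_watts=None, replicas=1):
    """Attach declarative execution goals to a function."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            return fn(*args, **kwargs)
        inner.goals = {"power_budget_watts": power_budget_watts,
                       "replicas": replicas}
        return inner
    return wrap


class ToyRuntime:
    """Stand-in for a self-aware runtime that reads declared goals."""
    def run(self, task, *args):
        g = getattr(task, "goals", {})
        # A real runtime could also choose voltage/frequency states to stay
        # within the power budget; this toy only handles redundancy by
        # running extra copies and returning the most common answer.
        results = [task(*args) for _ in range(g.get("replicas", 1))]
        return max(set(results), key=results.count)


@goals(power_budget_watts=15, replicas=3)
def fuse_sensor_frame(frame):
    return sum(frame) / len(frame)  # placeholder "sensor fusion" step


if __name__ == "__main__":
    print(ToyRuntime().run(fuse_sensor_frame, [0.2, 0.4, 0.6]))
```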

Advances developed under the Omnipresent High Performance Computing program will complement DARPA’s Ubiquitous High Performance Computing (UHPC) program. According to the announcement, “The purpose of this effort is to accelerate the performance capabilities of UHPC program systems through selected, critical research and development activities that have high impact on ExtremeScale computing and specifically UHPC program systems, up to but not necessarily including whole-system prototype development.” DARPA describes an ExtremeScale system as a computer that is a thousand times more powerful than a comparable current system with the same power draw and physical footprint.
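
To put that definition in concrete numbers, the short calculation below assumes a roughly 1-petaflop system drawing about 2 megawatts; the figures are illustrative assumptions, not values from the announcement, but they show how steep the required jump in energy efficiency is.

```python
# Illustrative arithmetic only; the performance and power figures are
# round-number assumptions, not DARPA's.
current_perf_flops = 1e15   # ~1 petaflop for a present-day system
current_power_watts = 2e6   # assume a ~2 MW power envelope

extreme_perf_flops = 1000 * current_perf_flops  # 1,000x in the same footprint
extreme_power_watts = current_power_watts       # same power draw

print(current_perf_flops / current_power_watts / 1e9)  # ~0.5 GFLOPS per watt today
print(extreme_perf_flops / extreme_power_watts / 1e9)  # ~500 GFLOPS per watt required
```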

The goals of the UHPC program include developing a petaflop supercomputer that fits into a single cabinet and runs a self-aware operating system. The effort also seeks to develop a prototype compiler to ease programming for an ExtremeScale system, along with a dynamic system that adapts at run time to meet application execution goals without the direct involvement of the application developer. DARPA plans to have a prototype UHPC computer by 2018.
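
The announcement does not say how that run-time adaptation would be achieved. As a purely hypothetical sketch, the loop below shows the general shape of the idea: the system repeatedly measures an objective and nudges an execution knob (here, a worker count) toward better values, with no involvement from the application developer. The function names and the simulated throughput model are assumptions for illustration.

```python
# Hypothetical sketch of self-tuning execution: a feedback loop that adjusts
# a runtime knob against a measured objective. Purely illustrative.
import random


def measured_throughput(workers):
    # Stand-in for real instrumentation: throughput peaks near 8 workers,
    # with a little measurement noise added.
    return workers * max(0.0, 1.0 - 0.06 * abs(workers - 8)) + random.uniform(-0.2, 0.2)


def adapt(initial_workers=2, steps=20):
    workers = initial_workers
    best = measured_throughput(workers)
    for _ in range(steps):
        candidate = max(1, workers + random.choice([-1, 1]))  # probe a neighbor
        score = measured_throughput(candidate)
        if score > best:  # keep changes that improve the measured objective
            workers, best = candidate, score
    return workers


if __name__ == "__main__":
    print("self-tuned worker count:", adapt())
```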

Reader Comments

Tue, Jan 25, 2011 John Minijars USA

Sorry fellas, revolutionary is important but we cannot escape the importance of evolutionary as well. Some things just take time. Lots of it. Other things have to wait for basic science to catch up, like the Blue Brain project, simulating neurons on the IBM Blue Gene supercomputer, or two of the latest language-learning programs, Carnegie’s NELL and IBM’s Watson. Computer-program development at least appears to be a consciousness-required, human-language-intensive process. Which is one reason we have computer “languages” like C, Fortran, and Linux. I’m third-generation military, and a big fan of DARPA and saving lives on the battlefield. But I can’t see mere funding getting us over the consciousness-required hump anytime “soon.” Even with brain-equivalent neuron density, hardware emulation of “wet ware” is “no John Kennedy.” The two things are the ultimate in dissimilarity. I can only guess that to accomplish the same on hardware, what we would need is a series of gargantuan equations with symbolic calls to a database of programming operands and data structures. And I’m no mathematician. I don’t even know if that’s possible, even if you could feed mayo to the tuna. AI will have lived up to its hype when computers can program themselves. Wouldn’t that be a kick in the ol’ bootstraps.

Mon, Jan 24, 2011 Avaard Zarro Texas

DARPA's "broad agency announcement" concerning High Performance Computing is tremendously encouraging. A careful look at the wording, "revolutionary new research, development, and design" and specifically requesting "revolutionary" over "evolutionary" approaches to developments for exascale computing hints at the kind of kick to motivation needed to release the creative juices of this nation, similar in spirit to both the Manhattan project and the man-on-the-moon project, though dissimilar, of course, in scale. In terms of innovation, we have been stuck on high center. Fully-automatic parallelization has been reduced to tools. Self-morphing code is much too limited to begin with and hindered by modern, CPU-cache architecture; as its cousin, self-evolving code, flounders in its own complexity. We need a drastically new way of thinking about the problem of teaching computers to teach themselves. Human programmers use innate, human grammar to think with human language, using symbols to create in code, finally producing software to run on predesigned hardware. It is a chain of events that limits developers to a frozen, stone-cold dead mindset. I remember the inspiring prediction, way back in the 80’s that programmers, such as myself, would be out of a job as soon as computers could be taught to program themselves. “Soon” didn’t happen. It seems to me, that same chain of events inside the human skull, used to create code in the first place, could be employed as a design philosophy for new architectures, new languages, and a totally new mindset. In other words, hard wire grammar into the architecture…”feed the mayo to the tuna” and we’re good to go.

Fri, Jan 21, 2011 Alva Zarro

The real lag time of the near future is software development. The complexities of programming for parallel computing are beyond human abilities to handle quickly. It seems that DARPA would be seeking a new type of computer architecture/language that could foster machines capable of building software for other machines. Imagine a petascale laptop with nothing to do because it is waiting for someone to build the software package for it. The soldier in the field needs a machine that can think for itself: self-programming. That is a DARPA-hard problem worthy of the name.

Wed, Jul 7, 2010 Edmond Hennessy United States

Had to close my eyes for a moment when the article headline popped up. Probably dating myself, but I remember when DARPA's Bob Parker drove the High-Performance Computing Initiative some 20+ years ago. It had lofty goals and targeted many of the existing manned and unmanned (developing) platforms; it was supported by the High-Performance Continuum. Needless to say, it was embraced by certain pockets of industry and was partially implemented and fulfilled. There were a lot of false starts and blood-letting, though. This new DARPA initiative is encouraging, and things have changed; however, DARPA's appetite for developing technology solutions beyond the realm, and for attracting industry players that want to play, prevails. Not sure what the outcome will be, but DARPA is a sly fox and will find the way. Certainly, the target platforms and applications demand this stuff.
