Energy starts work on exascale supercomputer

Two Energy Department labs are building a supercomputer that will be capable of executing more than one quintillion floating-point operations/sec, or one exaflop, the department announced this week.

Sandia National Laboratories and Oak Ridge National Laboratory are collaborating on the system. Congress has allotted $7.4 million for the project in fiscal 2008.

The computer will work on tough scientific problems, such as modeling how large numbers of particles interact with one another.

'An exascale computer is essential to perform more accurate simulations that, in turn, support solutions for emerging science and engineering challenges in national defense, energy assurance, advanced materials, climate and medicine,' said James Peery, Sandia's director of computation, computers and math, in a statement.

The plan to build an exaflop computer is an ambitious one. An exaflop is 1,000 times faster than a petaflop, which in turn is 1,000 trillion flops.
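The scale relationships the article cites can be checked with simple arithmetic. A minimal sketch (illustrative only, using standard SI-style definitions of these performance units):

```python
# Units of floating-point performance, as used in the article.
TERAFLOP = 10**12   # one trillion floating-point operations/sec
PETAFLOP = 10**15   # 1,000 trillion flops
EXAFLOP  = 10**18   # one quintillion flops

# An exaflop is 1,000 times a petaflop, which is 1,000 times a teraflop.
assert EXAFLOP == 1_000 * PETAFLOP
assert PETAFLOP == 1_000 * TERAFLOP
print(EXAFLOP // PETAFLOP)  # 1000
```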

No existing supercomputer system has achieved petaflop performance yet, though the National Science Foundation has funded IBM to build such a machine.

Today's fastest supercomputer, Lawrence Livermore National Laboratory's BlueGene/L System, has a processing speed of 478.2 teraflops, or 478.2 trillion flops.

While designing this supermachine, Sandia engineers hope to tackle a problem inherent in many supercomputers: the growing disparity between theoretical peak performance and actual performance.

'We believe this can be done by developing novel and innovative computer architectures,' said Sudip Dosanjh, Sandia project lead, in a statement.

For instance, although the BlueGene/L System has demonstrated the ability to do 478.2 teraflops, it has the theoretical ability to do more than 596 teraflops.
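The gap between sustained and peak performance can be expressed as an efficiency ratio. A quick calculation using the figures quoted in the article:

```python
# Sustained vs. theoretical peak performance for BlueGene/L,
# per the numbers cited in the article (teraflops).
sustained_tflops = 478.2
peak_tflops = 596.0

efficiency = sustained_tflops / peak_tflops
print(f"{efficiency:.0%}")  # roughly 80% of theoretical peak
```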

One reason for such disparity, Dosanjh said, is that the supercomputer cannot shuttle data among all the processors quickly enough to keep them busy. Building a faster supercomputer usually means adding more processors to a system. But with that approach, engineers must find a way to split the work among all the processors so that each one stays busy continuously.

'In an exascale computer, data might be tens of thousands of processors away from the processor that wants it,' said Sandia computer architect Doug Doerfler in a statement. 'But until that processor gets its data, it has nothing useful to do. One key to scalability is to make sure all processors have something to work on at all times.'
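The idle-processor problem Doerfler describes is often attacked by overlapping communication with computation, so a processor fetches its next piece of data while still working on the current one. A toy model of the idea (an illustration, not Sandia's design; the times are hypothetical units):

```python
# Toy model: hiding data-transfer latency behind computation.
compute_time = 1.0   # hypothetical time to process one chunk of work
transfer_time = 0.8  # hypothetical time to fetch the next chunk's data

# Naive schedule: the processor idles while waiting for each transfer.
serial_per_chunk = compute_time + transfer_time

# Overlapped schedule: prefetch the next chunk during computation,
# so each step costs only the longer of the two activities.
overlapped_per_chunk = max(compute_time, transfer_time)

print(serial_per_chunk, overlapped_per_chunk)  # 1.8 1.0
```

In this model the overlapped schedule keeps the processor busy the whole time whenever transfers take no longer than computation.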

Another challenge the team will work on is reducing power consumption by such a large machine.

'The electrical power needed with today's technologies would be many tens of megawatts, a significant fraction of a power plant,' Dosanjh said. 'A megawatt can cost as much as a million dollars a year. We want to bring that down.'
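Dosanjh's figures imply a simple cost estimate. Using his stated rate of up to $1 million per megawatt per year (the wattage values below are illustrative stand-ins for "many tens of megawatts"):

```python
# Rough annual electricity cost at Dosanjh's quoted rate.
COST_PER_MW_YEAR = 1_000_000  # dollars per megawatt per year (article's figure)

for megawatts in (20, 50):  # hypothetical draws in the "tens of megawatts" range
    annual_cost = megawatts * COST_PER_MW_YEAR
    print(f"{megawatts} MW -> ${annual_cost:,}/year")
```

At those rates, a machine drawing tens of megawatts could cost tens of millions of dollars a year in electricity alone, which is why power reduction is a design goal.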

Sandia, which issued the announcement, did not specify a date when the computer would be finished.

About the Author

Joab Jackson is the senior technology editor for Government Computer News.
