Energy starts work on exascale supercomputer

Two Energy Department labs are building a supercomputer that will be capable of executing more than one quintillion floating-point operations/sec, or one exaflop, the department announced this week.

Sandia National Laboratories and Oak Ridge National Laboratory are collaborating on the system. Congress has allotted $7.4 million for the project in fiscal 2008.

The computer will work on tough scientific problems, such as modeling how large numbers of particles interact with one another.

'An exascale computer is essential to perform more accurate simulations that, in turn, support solutions for emerging science and engineering challenges in national defense, energy assurance, advanced materials, climate and medicine,' said James Peery, Sandia's director of computation, computers and math, in a statement.

The plan to build an exaflop computer is ambitious: an exaflop is 1,000 times faster than a petaflop, which in turn is 1,000 trillion flops.
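The unit arithmetic above can be checked with a quick back-of-the-envelope script; this is purely illustrative, using the standard SI-prefix values for tera, peta and exa.

```python
# Standard flops scales (floating-point operations per second).
TERAFLOP = 10**12   # 1 trillion flops
PETAFLOP = 10**15   # 1,000 trillion flops
EXAFLOP  = 10**18   # one quintillion flops

# Each step up the scale is a factor of 1,000, as the article states.
assert PETAFLOP == 1_000 * TERAFLOP
assert EXAFLOP == 1_000 * PETAFLOP
print(EXAFLOP // PETAFLOP)  # 1000
```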

No existing supercomputer has yet achieved petaflop performance, though the National Science Foundation has funded IBM to build such a machine.

Today's fastest supercomputer, Lawrence Livermore National Laboratory's BlueGene/L System, has a processing speed of 478.2 teraflops, or 478.2 trillion flops.

While designing this supermachine, Sandia engineers hope to tackle a problem inherent in many supercomputers: the growing disparity between theoretical peak performance and actual performance.

'We believe this can be done by developing novel and innovative computer architectures,' said Sudip Dosanjh, Sandia project lead, in a statement.

For instance, although the BlueGene/L System has demonstrated the ability to do 478.2 teraflops, it has the theoretical ability to do more than 596 teraflops.
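Using the two figures quoted above, the gap works out to roughly a fifth of the machine's theoretical capability; this small sketch just computes that ratio.

```python
# Measured (sustained) vs. theoretical peak performance for BlueGene/L,
# using the teraflops figures quoted in the article.
measured_tflops = 478.2
peak_tflops = 596.0

# Fraction of theoretical peak actually achieved.
efficiency = measured_tflops / peak_tflops
print(f"{efficiency:.1%}")  # 80.2%
```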

One reason for the disparity, Dosanjh said, is that the supercomputer cannot shuttle data among all the processors quickly enough to keep them busy. Building a faster supercomputer usually means adding more processors to a system. But with that approach, engineers must find a way to split the work among all the processors so that they all work continuously.

'In an exascale computer, data might be tens of thousands of processors away from the processor that wants it,' said Sandia computer architect Doug Doerfler in a statement. 'But until that processor gets its data, it has nothing useful to do. One key to scalability is to make sure all processors have something to work on at all times.'

Another challenge the team will work on is reducing power consumption by such a large machine.

'The electrical power needed with today's technologies would be many tens of megawatts, a significant fraction of a power plant,' Dosanjh said. 'A megawatt can cost as much as a million dollars a year. We want to bring that down.'
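Dosanjh's figures imply an annual electricity bill in the tens of millions of dollars. The sketch below works that out; the 20 MW draw is a hypothetical value chosen from within his "many tens of megawatts" range, not a figure from the article.

```python
# Rough annual electricity cost at the rates Dosanjh cites.
ASSUMED_POWER_MW = 20          # hypothetical draw within "tens of megawatts"
COST_PER_MW_YEAR = 1_000_000   # "as much as a million dollars a year" per MW

annual_cost = ASSUMED_POWER_MW * COST_PER_MW_YEAR
print(f"${annual_cost:,} per year")  # $20,000,000 per year
```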

Sandia, which issued the announcement, did not specify a date when the computer would be finished.

About the Author

Joab Jackson is the senior technology editor for Government Computer News.
