Argonne, Oak Ridge labs sweep HPC Challenge
- By Joab Jackson
- Nov 20, 2008
AUSTIN, Texas -- The Defense Advanced Research Projects Agency's High Performance Computing Challenge has recognized supercomputers at the Energy Department's Argonne National Laboratory and Oak Ridge National Laboratory for their superior performance.
The winners were announced at the SC08 supercomputing conference held here this week.
Unlike the biannual Top500 list of supercomputers, which ranks machines by a single metric, the HPC Challenge evaluates machines in four categories, with an award going to the best performer on each benchmark. Program organizers say using a variety of tests to gauge specific aspects of supercomputer memory performance better reflects the range of ways supercomputers are used today.
"A programmer who needs to get good performance out of a supercomputer needs to be aware of this memory hierarchy," said Jeremy Kepner, an HPC Challenge contributor and a researcher at the Massachusetts Institute of Technology's Lincoln Laboratory.
Argonne's BlueGene/P, an IBM system with 163,840 cores, took two of the awards. One was for the category of global random access to memory, a test that involves writing data to random parts of memory and is measured in giga-updates per second (GUPS). BlueGene/P scored 103 GUPS.
Runners-up in that category were Lawrence Livermore National Laboratory's BlueGene/L machine, which executed 35 GUPS, and Sandia National Laboratories' Red Storm, a Cray XT3 machine that pumped out 34 GUPS.
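The real RandomAccess benchmark hammers a huge distributed table with updates at pseudo-random locations across thousands of nodes. As a rough single-process illustration of what a GUPS measurement looks like, here is a minimal Python sketch; the function name and the tiny table and update counts are illustrative choices, not part of the benchmark specification:

```python
import random
import time

def random_access_gups(table_size=1 << 14, n_updates=1 << 16):
    """Toy GUPS measurement: XOR values into random table locations
    and report giga-updates per second. Sizes here are minuscule
    compared with the real HPC Challenge RandomAccess test."""
    table = list(range(table_size))
    rng = random.Random(1)
    start = time.perf_counter()
    for _ in range(n_updates):
        idx = rng.getrandbits(32) % table_size  # pick a random location
        table[idx] ^= idx                       # update it in place
    elapsed = time.perf_counter() - start
    return n_updates / elapsed / 1e9            # updates/sec -> GUPS

print(f"{random_access_gups():.6f} GUPS")
```

Because every update touches an unpredictable address, caches help very little; the score is dominated by memory and network latency rather than raw processor speed.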
The second category the Argonne BlueGene/P won was the global fast Fourier transform (FFT) category. The test measures how quickly a system can execute a discrete Fourier transform, an operation that decomposes a sequence of values into its constituent frequencies. BlueGene/P executed at a rate of 5,080 billion floating-point operations per second (gigaflops).
Runners-up in the FFT category were Sandia's Red Storm, which stormed up 2,870 gigaflops for the job, and Oak Ridge National Laboratory's Jaguar, a Cray XT5 implementation running at 2,773 gigaflops.
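The benchmark itself runs a large parallel FFT across the whole machine, but the underlying operation can be sketched in a few lines. Below is a textbook radix-2 Cooley-Tukey FFT in plain Python, plus a timing helper that reports gigaflops using the conventional 5·n·log2(n) flop count for a complex FFT; the function names and problem size are illustrative assumptions:

```python
import cmath
import math
import time

def fft(x):
    """Recursive radix-2 Cooley-Tukey FFT; len(x) must be a power of 2."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])   # transform of even-indexed samples
    odd = fft(x[1::2])    # transform of odd-indexed samples
    out = [0j] * n
    for k in range(n // 2):
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]  # twiddle factor
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
    return out

def fft_gflops(n=1 << 12):
    """Time one transform and report gigaflops via the 5*n*log2(n) count."""
    data = [complex(i % 7, 0) for i in range(n)]
    start = time.perf_counter()
    fft(data)
    elapsed = time.perf_counter() - start
    return 5 * n * math.log2(n) / elapsed / 1e9
```

In the global version of the test, the data set is spread across every node, so the all-to-all data exchanges between FFT stages stress the machine's interconnect as much as its processors.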
Oak Ridge's Jaguar, with 150,152 cores, also topped two categories. One was for best performance on the High Performance Linpack (HPL) test, in which it rocked 902 trillion floating-point operations per second (teraflops).
Runners-up in that category were Lawrence Livermore's BlueGene/L at 259 teraflops and Argonne's BlueGene/P at 191 teraflops.
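HPL, the test behind the Top500 rankings, times the solution of a large dense linear system Ax = b and converts the elapsed time into flops. A minimal serial sketch of that measurement, using Gaussian elimination with partial pivoting and the standard 2n³/3 + 2n² operation count (the function name and problem size are illustrative, and the real benchmark distributes the matrix across the whole machine):

```python
import random
import time

def linpack_flops(n=120):
    """Toy HPL-style run: solve a random dense system Ax = b and
    report (solution, floating-point rate)."""
    rng = random.Random(0)
    a = [[rng.random() for _ in range(n)] for _ in range(n)]
    b = [rng.random() for _ in range(n)]
    start = time.perf_counter()
    # Forward elimination with partial pivoting for numerical stability.
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(a[i][k]))
        a[k], a[p] = a[p], a[k]
        b[k], b[p] = b[p], b[k]
        for i in range(k + 1, n):
            m = a[i][k] / a[k][k]
            for j in range(k, n):
                a[i][j] -= m * a[k][j]
            b[i] -= m * b[k]
    # Back substitution.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = b[i] - sum(a[i][j] * x[j] for j in range(i + 1, n))
        x[i] = s / a[i][i]
    elapsed = time.perf_counter() - start
    rate = (2.0 * n**3 / 3.0 + 2.0 * n**2) / elapsed  # flops/sec
    return x, rate
```

Because the arithmetic dominates and the data access is regular and cache-friendly, HPL rewards peak processor speed more than memory or network performance, which is exactly the criticism the HPC Challenge's other tests are meant to address.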
The other category Jaguar aced was the stream category, which measures the sustainable rate at which data can be moved between memory and the processor. The Cray provided a stream of 330 terabytes/sec. Runners-up were Lawrence Livermore's BlueGene/L, which streamed 160 terabytes/sec, and Argonne's BlueGene/P, which delivered 130 terabytes/sec.
Funded by DARPA, the National Science Foundation and DOE, the HPC Challenge was developed to provide a well-rounded set of benchmarks to gauge supercomputer performance. Each of the seven benchmarks developed for the HPC Challenge measures a different aspect of supercomputer performance.
The four benchmarks used for the latest challenge are the ones that most typify different aspects of performance in today's systems, the organizers say. The HPL benchmark tests the power of a system's processors; the stream benchmark tests the bandwidth of local memory, such as caches; the FFT benchmark tests the computer's overall bisection bandwidth; and the random memory access benchmark tests the latency of the network and the memory subsystems.
Robert Lucas, an HPC Challenge organizer who is director of computational sciences at the University of Southern California's Information Sciences Institute, said the Linpack test the Top500 uses might no longer be the one that most closely approximates the average workload on today's supercomputers. Instead, most high-performance computer applications make heavier use of random memory access, making that test a better approximation of HPC performance, he said.
Each prize came with a $750 cash award, provided by information technology analysis firm IDC.