Top500 to revamp qualifications
Briefer test of supercomputer power is needed, judges admit
- By Joab Jackson
- Nov 19, 2008
AUSTIN, Texas - The benchmark used to gauge the performance of supercomputers for the Top500 is growing too unwieldy, the organizers admit. They want to modify the test so that it is less arduous and are soliciting ideas about how that could be done.
"It's clear this is a problem. It is getting out of control at this point, so we have to do something," said Jack Dongarra, one of the Top500 officials, at a session of the SC08 conference, held this week in Austin. "We'll make some changes to the benchmarks. We don't know what those changes are, yet. [So] we're looking for feedback from the community to determine what those changes should be."
The Top500 is a biannual list ranking the 500 most powerful supercomputers. Participation in the Top500 list is voluntary. Organizations submit their benchmarks for inclusion. Researchers at the University of Mannheim, Germany; Lawrence Berkeley National Laboratory; and the University of Tennessee, Knoxville, compile the list.
The 32nd edition of the list was posted earlier this week.
The computers are ranked by how quickly they run a complete iteration of something called the Linpack test. Linpack is a collection of subroutines that solve linear equations.
Using all the memory and processors on a given machine, Linpack "is a stress test for the machine. It stresses how well it can function over a long period of time," Dongarra said. "You're encouraged to run the biggest problem you can."
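In rough terms, a Linpack-style measurement solves a large dense system of linear equations and reports a flop rate based on the nominal operation count for that solve. The sketch below is only an illustration of that idea, not the actual HPL benchmark code; the problem size, the use of NumPy's LU-based solver, and the residual check are all assumptions for the example.

```python
# Illustrative sketch of a Linpack-style measurement (NOT the official
# HPL code): solve a dense system Ax = b and report a flop rate using
# the standard nominal operation count of (2/3)*n^3 + 2*n^2.
import time
import numpy as np

def linpack_style_run(n, seed=0):
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((n, n))
    b = rng.standard_normal(n)

    start = time.perf_counter()
    x = np.linalg.solve(A, b)          # LU factorization + triangular solves
    elapsed = time.perf_counter() - start

    flops = (2.0 / 3.0) * n**3 + 2.0 * n**2   # nominal operation count
    # Scaled residual: a sanity check that the solve actually succeeded.
    residual = np.linalg.norm(A @ x - b) / (np.linalg.norm(A) * np.linalg.norm(x))
    return flops / elapsed, residual

rate, residual = linpack_style_run(500)
```

The real benchmark is run at the largest problem size the machine's memory allows, which is exactly why the run times discussed below grow with machine size.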
In this most recent test, Los Alamos National Laboratory's Roadrunner, an IBM machine, topped the list, achieving 1.1 petaflops, followed closely by Oak Ridge National Laboratory's Cray-supplied Jaguar, which clocked in at slightly faster than 1.05 petaflops.
Dongarra noted that as supercomputers continue to grow in size, the time it takes to complete the test is growing as well, and may soon become unmanageable.
For instance, the Roadrunner took about 2 hours to complete the Linpack test. And Oak Ridge's Jaguar, which has more memory, took 18 hours to run through its Linpack test. "Eighteen hours is an incredible amount of time for a machine to stay up, especially when it was just installed," Dongarra said.
Moreover, many organizations like to run multiple iterations of the test, changing the machine configuration before each test to try to improve times.
As machines continue to grow in size, so too will the length of the tests they run. Extrapolating from the current rate of growth, in five years we may have machines capable of five petaflops. A full Linpack test on such a machine could take two and a half days to run.
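The back-of-envelope arithmetic behind such projections can be sketched as follows. Solving an n-by-n system takes roughly (2/3)n^3 operations, and the matrix occupies n^2 words of memory, so if memory grows in proportion to peak performance, the largest problem size grows as the square root of peak, and so does the run time. The efficiency figure, the example problem size, and the memory-scaling assumption below are all illustrative, not Top500 data; under more aggressive memory-growth assumptions the projection stretches toward the multi-day figure Dongarra cited.

```python
# Back-of-envelope sketch (assumed parameters, not Top500 figures).

def hpl_runtime_hours(n, peak_flops, efficiency=0.75):
    """Nominal Linpack run time for an n x n problem, assuming the
    machine sustains the given fraction of its peak flop rate."""
    flops_needed = (2.0 / 3.0) * n**3
    return flops_needed / (efficiency * peak_flops) / 3600.0

def scaled_runtime_hours(base_hours, base_peak, new_peak):
    """Scale a measured run time to a faster machine, assuming memory
    (and hence n^2) grows in proportion to peak performance, so run
    time grows as sqrt(peak)."""
    return base_hours * (new_peak / base_peak) ** 0.5

# Illustrative problem size (not Jaguar's actual one) on a ~1.05 PF machine:
nominal = hpl_runtime_hours(4_000_000, 1.05e15)

# Scaling Jaguar's measured 18-hour run to a hypothetical 5-petaflop machine:
projected = scaled_runtime_hours(18.0, 1.05e15, 5e15)
```

Even under this conservative scaling assumption, the projected run stretches well past a day's worth of dedicated machine time.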
"That is a ridiculous amount of time to expect somebody to isolate a machine and run a benchmark," Dongarra said.
While the testing committee wants to keep using Linpack so that historical comparisons can still be made with older tests, it would like to modify either how the Linpack test is run or how the results are reported.
Dongarra floated the idea of entrants running only a portion of the full Linpack test. For instance, only slices of the test could be captured, as in a Monte Carlo simulation, and the full capability would be extracted from these samples.
But this approach has a number of challenges. How long should the test run? And what portions of the test should be recorded? Once a test is set into motion, the machine hits its maximum capability at some indeterminate point thereafter, and, because of the nature of the computations, performance slowly declines after a period of time. Limiting the test to certain slices of time may unfairly discount some scores.
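The sampling difficulty can be illustrated with a toy model. The performance curve below (a start-up ramp, a peak, then a slow decline) is assumed purely for illustration, not real Linpack data; the point is that a peak estimated from a handful of randomly timed slices depends on which slices happen to be recorded.

```python
# Toy illustration of the sampling idea: estimate peak performance from
# randomly timed slices of an assumed performance-vs-time curve.
import random

def perf_at(t, peak=1.1, ramp=0.2, decline=0.002):
    """Assumed instantaneous performance (petaflops) at time t (hours):
    a quick ramp-up, then a slow quadratic decline after the peak."""
    if t < ramp:
        return peak * t / ramp                        # start-up ramp
    return peak * (1.0 - decline * (t - ramp) ** 2)   # slow decline

def sampled_estimate(n_samples, duration=18.0, seed=1):
    """Estimate the peak from randomly timed performance samples."""
    rng = random.Random(seed)
    samples = [perf_at(rng.uniform(0.0, duration)) for _ in range(n_samples)]
    return max(samples)
```

A sample-based estimate can only ever undershoot the true peak, and with too few slices it may miss the peak region entirely, which is one way a sliced test could unfairly discount a machine's score.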
Dongarra, a distinguished research staff member at the Energy Department's Oak Ridge National Laboratory as well as a distinguished professor at the University of Tennessee, is welcoming other ideas on how to revamp the test.
Joab Jackson is the senior technology editor for Government Computer News.