A new way to measure the best supercomputers
The TOP500 list of the fastest supercomputers in the world has been a staple of computing for over 20 years. Universities, governments and even private companies compete to build the fastest supercomputers and earn a place on the list. How long a supercomputer can stay on the list is faithfully reported by technology media outlets around the world.
To rate one supercomputer against another, the TOP500 list uses the Linpack benchmark, which was created by Jack Dongarra, a professor at the University of Tennessee. The benchmark has worked well, but there are those who say that it no longer accurately represents true supercomputing power. The scientists working on supercomputer development at the Energy Department's Oak Ridge National Laboratory said much the same thing when I interviewed them earlier this year about the United States’ supercomputer development plans.
Now, Dongarra shares that sentiment, too. In a post on the University of Tennessee blog, he explains why Linpack no longer works as well as it once did.
"We have reached a point where designing a system for good Linpack performance can actually lead to design choices that are wrong for the real application mix, or add unnecessary components or complexity to the system," Dongarra said, according to the blog.
“The Linpack benchmark is an incredibly successful metric for the high-performance computing community,” Dongarra said. “Yet the relevance of the Linpack as a proxy for real application performance has become very low, creating a need for an alternative.”
The alternative is a new benchmark he and Sandia National Laboratories’ Michael Heroux are developing called the High Performance Conjugate Gradient. Dongarra explained that the new benchmark measures how well supercomputers drive applications across their increasingly diverse mix of CPU and GPU chips, rather than just taking a raw-performance snapshot.
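As the name suggests, HPCG is built around the conjugate gradient method, an iterative solver whose work is dominated by sparse matrix-vector products and vector updates, memory- and communication-bound operations that look much more like real scientific applications than Linpack's compute-bound dense matrix factorization. A minimal sketch of the kernel, here in plain NumPy on a tiny dense system for illustration (the actual benchmark runs large sparse systems across distributed memory), might look like this:

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-8, max_iter=1000):
    """Solve A x = b for a symmetric positive-definite matrix A.

    Each iteration performs one matrix-vector product plus a few
    vector operations -- the bandwidth-bound pattern HPCG stresses,
    in contrast to Linpack's compute-bound dense LU solve.
    """
    x = np.zeros_like(b)
    r = b - A @ x           # residual
    p = r.copy()            # search direction
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

# Tiny symmetric positive-definite example system
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = conjugate_gradient(A, b)
```

Because each step must stream the matrix from memory rather than reuse cached data, a machine's HPCG score depends heavily on memory bandwidth and interconnect speed, which is exactly the balance Dongarra says Linpack fails to capture.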
Data Center Knowledge reports the new HPCG won't replace Linpack, but will instead be used at the same time. The two together will determine how supercomputers place on the new TOP500 lists.
Given how competitive the agencies, governments and companies whose computers make the list seem to be, it’s a sure bet that using two benchmarks will be controversial, giving contenders ammunition to argue that their machines should be ranked higher.
But having benchmarked everyday nonsupercomputers for the past 15 years in the GCN Lab, I can say that benchmark technology has certainly changed. Where we used to rely on a standard benchmark that looked only at raw performance, we now use the Passmark Performance Benchmarks, which take a much more well-rounded look at how a system is doing, component by component, while also considering how those components are linked.
It makes sense that supercomputer benchmarks would also need updating as the computing landscape changes. On another front, a group of supercomputing experts has started the Graph 500 to measure how well the machines handle big data, as well as the Green 500 to measure their energy efficiency.
With HPCG added to the test for overall performance, it will be interesting to see whether, and by how much, the current rankings change.
Posted by John Breeden II on Jul 29, 2013 at 12:07 PM