A new way to measure the best supercomputers

The TOP500 list of the fastest supercomputers in the world has been a staple of computing for over 20 years. Universities, governments and even private companies compete to build the fastest supercomputers and earn a place on the list. How long a supercomputer can stay on the list is faithfully reported by technology media outlets around the world.

To rate one supercomputer against another, the TOP500 list uses the Linpack benchmark, which was created by Jack Dongarra, a professor at the University of Tennessee. The benchmark has worked well, but some argue that it no longer accurately represents true supercomputing power. The scientists working on supercomputer development at the Energy Department's Oak Ridge National Laboratory said much the same thing when I interviewed them earlier this year about the United States' supercomputer development plans.
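For context, Linpack's core workload is solving a large dense system of linear equations, which rewards raw floating-point throughput. The sketch below is not the actual benchmark code (the official implementation is HPL), just a toy NumPy illustration of the kind of kernel it times; the problem size `n` is arbitrary.

```python
import numpy as np

# Linpack-style workload: solve a dense linear system A x = b.
# The real benchmark times an LU factorization at enormous scale;
# this toy version uses NumPy's LAPACK-backed solver.
n = 500  # arbitrary size for illustration
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)

x = np.linalg.solve(A, b)  # LU factorization with partial pivoting
residual = np.linalg.norm(A @ x - b)
```

Because nearly all the work is in the dense factorization, a machine can score well here while struggling on memory-bound scientific codes, which is the heart of the criticism.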

Now, Dongarra shares that sentiment, too. In a post on the University of Tennessee blog, he explains why Linpack no longer works as well as it once did.

"We have reached a point where designing a system for good Linpack performance can actually lead to design choices that are wrong for the real application mix, or add unnecessary components or complexity to the system," Dongarra said, according to the blog.

“The Linpack benchmark is an incredibly successful metric for the high-performance computing community,” Dongarra said. “Yet the relevance of the Linpack as a proxy for real application performance has become very low, creating a need for an alternative.”

The alternative is a new benchmark he and Sandia National Laboratories' Michael Heroux are developing called the High Performance Conjugate Gradient. Dongarra explained that the new benchmark measures how well supercomputers can drive applications through their increasingly diverse mix of CPU and GPU chips, rather than simply taking a raw performance snapshot.
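The conjugate gradient method at the heart of HPCG iterates on sparse matrix-vector products, so it stresses memory bandwidth and data movement rather than pure floating-point muscle. A minimal dense-matrix sketch of the algorithm (a simplification; the real benchmark operates on large sparse systems) looks like this:

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-8, max_iter=1000):
    """Solve A x = b for a symmetric positive-definite A
    via the conjugate gradient method."""
    x = np.zeros_like(b)
    r = b - A @ x          # residual
    p = r.copy()           # search direction
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p                      # the matrix-vector product HPCG stresses
        alpha = rs_old / (p @ Ap)       # step length
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p   # update search direction
        rs_old = rs_new
    return x

# Small symmetric positive-definite example system.
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = conjugate_gradient(A, b)
```

Each iteration does relatively little arithmetic per byte fetched, which is why a machine tuned purely for Linpack-style dense math can look much slower under this kind of workload.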

Data Center Knowledge reports the new HPCG won't replace Linpack, but will instead be used alongside it. Together, the two benchmarks will determine how supercomputers place on future TOP500 lists.

Given how competitive the agencies, governments and companies whose computers make the list seem to be, it's a sure bet that using two benchmarks will be controversial and will give contenders ammunition to argue that their systems should be ranked higher.

But having benchmarked everyday, non-super computers for the past 15 years in the GCN Lab, I can say that benchmark technology has certainly changed. Where we used to use a standard benchmark that looked only at raw performance, we now use the Passmark Performance Benchmarks, which take a much more well-rounded look at how a system is doing, component by component, while also considering how those components are linked.

It makes sense that supercomputer benchmarks would also need updating as the computing landscape changes. On another front, supercomputing experts have started the Graph 500 to measure how well the machines handle big data, as well as the Green 500 to measure their energy efficiency.

With HPCG added to the test for overall performance, it will be interesting to see whether, and by how much, the current rankings change once it is factored in alongside Linpack.

Posted by John Breeden II on Jul 29, 2013 at 12:07 PM

