Making sense of exaflops
How does a typical PC compare to supercomputing's dizzying performance numbers?
FLOPAPALOOZA. The ongoing game of “Can You Top This?” by the practitioners of high-performance computing can be fun to watch, even if it’s getting hard to comprehend.
China recently shook up the game with its Dawning Nebulae supercomputer, which sustained 1.27 petaflops — 1.27 quadrillion floating point operations/sec — on the Linpack benchmark. That placed it at No. 2 on the Top 500 list of fastest machines, breathing down the neck of Oak Ridge National Laboratory’s Jaguar, which sustained 1.75 petaflops.
The Chinese seem to want the top ranking so bad they can taste it. But U.S. government organizations have other ideas. The Defense Advanced Research Projects Agency recently announced its Omnipresent High Performance Computing project, aimed at leapfrogging petascale computing and going for exascale — 1 quintillion flops. Meanwhile, the Energy Department and several other agencies are hoping for approval of their own Exascale Computing Initiative.
Fine. But how does the average user relate to a quadrillion (a one followed by 15 zeros) or a quintillion (a one followed by 18 zeros) flops? Perhaps by considering the PC on your desk.
Flops aren’t a good gauge of a PC’s overall performance, because they ignore input/output, RAM, motherboard speed and other factors. But they do measure the raw execution rate of a processor, and Intel even includes flops in the specifications for its chips. So, for the sake of argument, let’s say your PC has a 3.33 GHz Core 2 Duo E8600 processor, which is rated at 26.64 gigaflops, or 26.64 billion flops.
How does that stack up against Jaguar? In flop terms, Jaguar equals roughly 65,691 of those PC processors. And an exaflop? That's about 37.5 million of those chips.
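For readers who want to check the math, the comparison is a simple ratio of the article's figures (Jaguar's 1.75 sustained petaflops, the E8600's 26.64 gigaflops); the short sketch below just divides one rate by the other.

```python
# Back-of-the-envelope check of the flops ratios cited above.
PC_FLOPS = 26.64e9      # Core 2 Duo E8600 rating: 26.64 gigaflops
JAGUAR_FLOPS = 1.75e15  # Jaguar's sustained Linpack score: 1.75 petaflops
EXAFLOP = 1.0e18        # one quintillion flops

print(round(JAGUAR_FLOPS / PC_FLOPS))  # -> 65691 PC processors
print(round(EXAFLOP / PC_FLOPS))       # -> 37537538, i.e. ~37.5 million
```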
Glad we cleared that up.