- By Patricia Daukantas
- Mar 03, 2004
University taps atypical platform to create a hand-built supercomputer
Virginia Tech's Power Mac G5 supercomputer architects, from left, are Srinidhi Varadarajan, Kevin Shinpaugh, Glenda Scales, Jason Lockhart and Patricia Arvin. In three months, they assembled the world's third-fastest computer for $5.2 million.
Courtesy of Virginia Tech
With one significant, and somewhat surprising, accomplishment already to their credit, researchers at Virginia Tech are building the infrastructure for a new, highly networked Terascale Computing Facility they hope will win funding from an upcoming National Science Foundation program.
With a stringent $5 million budget, the researchers took the unprecedented step of building a supercomputer from Apple Computer Inc. G5 systems, said Jason Lockhart, associate director of the terascale center.
The hand-built supercomputer was dubbed System X to reflect the university's design goal: 10 trillion floating-point operations per second. The research team ended up spending $5.2 million on what is now the world's third-fastest computer.
'We had a minimal budget, in supercomputing circles,' Lockhart said. Systems that have cracked the upper levels of the semiannual Top 500 list of fast computers typically cost $25 million to $200 million.
Virginia Tech, officially the Virginia Polytechnic Institute and State University, wanted 64-bit processors and maximum memory per processor within budget. Last spring, the researchers shopped around among 64-bit supercomputer vendors and third-party integrators. Lockhart said all the proposals fell short in either price or floating-point performance.
'We were at wit's end by the middle of June,' he said.
That month, Apple chief executive officer Steve Jobs announced the G5 with dual IBM PowerPC 970 processors.
'We knew it was a superior processor, we just didn't know when the product was going to be released,' Lockhart said.
Within days of the announcement, the university struck a deal with Apple, 'and we were off and running,' Lockhart said. 'Things went really fast, and they're not exactly slowing down.'
Typically it takes 12 to 18 months to get a supercomputer benchmarked and running. The Virginia Tech team took less than three months.
They powered up the full system on Sept. 24 for benchmarking, Lockhart said. There was less than a week to get out some initial numbers for the November edition of the Top 500 list.
Srinidhi Varadarajan, the project architect and Terascale Computing Facility director, put in about three weeks of 20-hour days to get the benchmarking optimized and clear the 10-TFLOPS mark, Lockhart said. 'He was literally sleeping in the conference room.'
The G5 computers reside in large, black racks custom-built by Liebert Corp. of Columbus, Ohio. System X was the first installation of the company's rack-mounted cooling system, called X-Treme Density cooling. Liebert designed 96 racks to hold 12 G5 computers each.

Off the truck
System X has 2,200 2-GHz processors arranged as 1,100 dual-CPU nodes. Mellanox Technologies Inc. of Santa Clara, Calif., made the InfiniBand switches and the host channel adapter cards that volunteers installed on the Power Mac G5s. 'We set up a makeshift assembly line that processed the machines as they arrived off the trucks,' Lockhart said.
More than 150 volunteers provided the labor: mostly undergraduate and graduate students, faculty and staff members, plus some local users who wanted to help.
'It was great to see this incredible outpouring of folks that wanted to help with this,' Lockhart said. For their efforts, the volunteers received free pizza and T-shirts announcing, 'My other computer is an 1,100-node G5 cluster.'
Unlike most scalar processors, the PowerPC 970 pairs its 64-bit core with an on-chip, single-precision vector unit called the AltiVec Velocity Engine. Virginia Tech, however, did not use the vector unit for the benchmark that the Top 500 organizing committee requires.
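Single precision is the sticking point: the Linpack benchmark behind the Top 500 list runs in 64-bit double precision. A quick sketch, using Python's standard struct module to emulate a 32-bit float (the AltiVec hardware itself is not involved), shows the accuracy a single-precision unit gives up:

```python
import struct

# Illustrative only: round-tripping a 64-bit value through a 32-bit
# float shows the rounding error inherent in single precision, which
# is why a single-precision vector unit can't run double-precision
# Linpack.
value = 0.1
single = struct.unpack("f", struct.pack("f", value))[0]  # nearest 32-bit float

error = abs(single - value)
print(f"64-bit value: {value!r}")
print(f"32-bit value: {single!r}")
print(f"rounding error: {error:.2e}")
```

The nearest 32-bit float to 0.1 is off by roughly a part in a billion, far too coarse for the double-precision arithmetic Linpack demands.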
The first hand-built supercomputing clusters, known as Beowulf systems, started out as collections of old PCs running Linux. System X runs Mac OS X, a Posix-compliant variant of Unix.
To connect with the host channel adapters, the operating system uses MVAPICH, a message-passing interface for InfiniBand, developed at Ohio State University.
'What we've done is show that, even with a modest amount of money, you can build a machine capable of tremendous performance,' Lockhart said. Apple has 'a platform now that's not a little kiddie computer anymore.'
In late January Virginia Tech announced that it will migrate from the original Power Mac G5 platform to Apple's new Xserve G5, a rackmount server only 1.75 inches (1U) high.
The original Power Mac G5 units were designed for desktop use, but Apple optimized the Xserve G5 for cluster use, Virginia Tech spokeswoman Lynn Nystrom said.

Shrinking footprint
The same number of Xserves, which also have dual PowerPC processors, will draw less electrical power and require less cooling. The smaller units will shrink the System X footprint from 3,000 square feet to 1,000.
When Virginia Tech built its system last fall, Apple wasn't yet in the high-performance computing business, Nystrom said. After System X's success, Apple is fielding inquiries from government agencies and industry.
'We've made a believer out of them in terms of what you can do with a cluster,' Nystrom said.
The efficiency rating of System X, its measured speed as a fraction of theoretical peak performance, now stands at 58.4 percent. With more optimization, the team hopes to reach 60 percent by the testing deadline for the June 2004 Top 500 list.
'We think 12 TFLOPS is not out of reach,' Lockhart said.
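Those figures are consistent with the machine's specifications. Assuming the standard four-flops-per-cycle peak for the PowerPC 970 (two fused multiply-add units), the theoretical peak and measured speed work out roughly as follows:

```python
# Back-of-the-envelope check of System X's numbers. The four
# flops-per-cycle figure (two fused multiply-add units per PowerPC 970)
# is the usual assumption for Top 500 peak-performance calculations.
processors = 2200
clock_hz = 2.0e9
flops_per_cycle = 4

rpeak_tflops = processors * clock_hz * flops_per_cycle / 1e12
rmax_tflops = rpeak_tflops * 0.584  # at the current 58.4 percent efficiency
goal_tflops = rpeak_tflops * 0.60   # at the team's 60 percent target

print(f"Rpeak: {rpeak_tflops:.1f} TFLOPS")        # 17.6
print(f"Rmax at 58.4%: {rmax_tflops:.2f} TFLOPS") # 10.28
print(f"Rmax at 60%: {goal_tflops:.2f} TFLOPS")   # 10.56
```

A 17.6-TFLOPS peak at 58.4 percent efficiency yields the roughly 10.28 TFLOPS that cleared the project's 10-TFLOPS design goal.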
System X will support studies in fluid dynamics, computational chemistry, large-system dynamics and many other areas, Lockhart said.
Virginia Tech is aiming for a facility along the lines of the San Diego Supercomputer Center and the National Center for Supercomputing Applications in Champaign, Ill. System X will be the cornerstone, but Virginia Tech will need an infrastructure of smaller computation clusters, high-performance storage systems, computers optimized for data visualization and support personnel.
The university plans to connect the terascale facility to the upcoming National LambdaRail, a 40-Gbps fiber-optic research network that will eventually supersede the Internet2 project, Lockhart said. The supercomputer builders also are watching the NSF program that will eventually succeed the Partnerships for Advanced Computational Infrastructure, which funds the two major supercomputing centers.
A blue-ribbon advisory panel recently recommended that NSF create an Advanced Cyberinfrastructure Program with an additional $1 billion per year for computational science.