Civilian lab researchers gain computing edge

By Patricia Daukantas

GCN Staff

A 6-TFLOPS supercomputer at the Pittsburgh Supercomputing Center will plug high-level research into the National Science Foundation's very-high-speed Backbone Network Service and the Abilene research network, the precursor of Internet2.

Terascale system will give zing to modeling

[Photo captions: Los Alamos lab's Cray C90; Earth's magnetic field with inward (blue) and outward (orange) field lines. Pittsburgh Supercomputing Center's SGI Origin2000; HIV's reverse transcriptase ribbon with attached inhibitor drug in yellow. National Center for Atmospheric Research's Cray C90; atmospheric turbulence caused by positive (reddish) and negative (greenish) vortices.]

National Science Foundation officials say the Terascale Computing System will benefit researchers working on weather forecasting, fluid dynamics, theoretical physics and computational chemistry simulations.

NSF this month awarded a $45 million, three-year contract for the center and its vendor partner, Compaq Computer Corp., to build a massive system that executes more than 1 trillion floating-point operations per second on sustained scientific calculations.

The Terascale Computing System will host research projects on a scale previously possible only for defense and weapons scientists at national laboratories.

The theoretical peak speed is about 6 TFLOPS, said Robert R. Borchers, director of advanced computational infrastructure research in NSF's Directorate for Computer and Information Science and Engineering.

But real-world calculations usually achieve no more than about 20 percent of that theoretical rate because of communication delays between processors.

Nevertheless, theoretical peak speed is one of the few yardsticks for measuring supercomputers. The Pittsburgh machine will join an elite but growing group of massive systems at Energy Department weapons laboratories and Defense Department research labs [GCN, July 3, Page 1, and June 12, Page 61].

As proposed by the Pittsburgh center and Compaq, the Terascale Computing System will have 2,728 1.1-GHz Compaq Alpha EV68 processors grouped into 682 four-processor AlphaServer nodes.

Each node will have 4G of RAM for a systemwide total memory of 2.7T. High-bandwidth interconnect switches from Quadrics Supercomputing World Ltd. of Bristol, England, will link the processors. Compaq's Tru64 Unix is the operating system.
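The article's quoted speeds can be checked with simple arithmetic. A minimal sketch, assuming each Alpha EV68 processor can retire two floating-point operations per clock cycle (a figure not stated in the article), reproduces both the roughly 6-TFLOPS theoretical peak and the 1-TFLOPS-plus sustained target:

```python
# Back-of-the-envelope check of the Terascale Computing System's quoted speeds.
PROCESSORS = 2728          # 682 four-processor AlphaServer nodes
CLOCK_HZ = 1.1e9           # 1.1-GHz Alpha EV68
FLOPS_PER_CYCLE = 2        # assumed: one FP add plus one FP multiply per cycle

# Theoretical peak: every processor doing maximum FP work every cycle.
peak_flops = PROCESSORS * CLOCK_HZ * FLOPS_PER_CYCLE
print(f"Theoretical peak: {peak_flops / 1e12:.1f} TFLOPS")

# The article notes real codes sustain no more than about 20 percent of peak.
sustained_ceiling = 0.20 * peak_flops
print(f"Sustained ceiling (~20% of peak): {sustained_ceiling / 1e12:.1f} TFLOPS")
```

Under that assumption the numbers line up: about 6.0 TFLOPS peak, with a 20-percent sustained ceiling of roughly 1.2 TFLOPS, consistent with the contract's goal of more than 1 trillion sustained floating-point operations per second.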

The contract allocates $36 million for the system and $3 million per year for operating costs, Borchers said.

The first portion of the system is scheduled to start running in February, and the whole supercomputer should be fully operational by the end of next year.

The Pittsburgh-Compaq proposal beat out four competing bids during a multistage review process earlier this year, Borchers said. He called the choice clear in terms of performance units per dollar and probable sustained performance.

Jointly operated by Carnegie Mellon University and the University of Pittsburgh, the center houses its staff in Pittsburgh and its computers and storage systems at Westinghouse Government Services Co. in suburban Monroeville, Pa.

Part of NSF's supercomputing program for more than a decade, the Pittsburgh center has acquired many systems with Alpha processors, including the first production models of the Cray T3D and T3E.

In 1997, however, the Pittsburgh center was left out of NSF's revamped Partnerships for Advanced Computational Infrastructure (PACI) program.

Since then, it has been pursuing research with funding from Energy, the National Institutes of Health, the Commonwealth of Pennsylvania and other entities.

The terascale system fits in well with the PACI program, NSF Director Rita Colwell said. The latest award will have the effect of integrating the Pittsburgh center into PACI as a third leading-edge site, alongside the supercomputing facilities in Illinois and San Diego.

The new system will be six times faster than any other machine in the PACI program, Borchers said.

Research advances

As at other NSF-funded supercomputer centers, scientists who want to use the Terascale Computing System must submit proposals to a review board that meets twice a year.

NSF and center officials cited weather and storm forecasting, fluid dynamics, theoretical physics and computational chemistry as examples of research areas that will benefit from the increased computing power.

For example, University of Oklahoma meteorologist Kelvin Droegemeier's models for predicting tornadoes and thunderstorms scale to thousands of processors, Borchers said.

Recently, chemist Peter Kollman and his colleagues at the University of California at San Francisco have used the Pittsburgh center's Cray supercomputers to extend protein-folding simulations from a billionth of a second to a millionth of a second of real time. The simulations were so large that they had to be carried out over multiple computers, said Ralph Roskies of the University of Pittsburgh, one of the center's two scientific directors.

With a single machine as large as Pittsburgh's available, researchers within a few years may come up with novel uses, said center scientific director Michael Levine of Carnegie Mellon University.

In its budget request for fiscal 2001, NSF has asked for another $45 million for a second terascale computer, Borchers said. Without explicitly ruling out an encore bid from Pittsburgh and Compaq, he said, the foundation will seek 'both vendor and operator diversity' in a second contract.

Compaq is relatively new to supercomputing, although it gained expertise through its acquisitions of Alpha developer Digital Equipment Corp. and Tandem Computers Inc.

Compaq has placed a 512-processor AlphaServer at Energy's Lawrence Livermore National Laboratory in California and 256- and 64-processor systems at Oak Ridge National Laboratory in Tennessee. France's atomic energy agency has ordered an AlphaServer with a theoretical top speed of 5 TFLOPS, Borchers said.
