NSF OKs supercomputer teragrid
- By Patricia Daukantas
- Aug 17, 2001
The National Science Foundation has given a green light to what could be the largest supercomputing infrastructure ever.
The $53 million Distributed Terascale Facility, also called the teragrid, will link four giant computer clusters running Linux over a 40-Gbps optical network.
The three-year award, announced this month, will fund a consortium led by two NSF-supported academic centers: the National Center for Supercomputing Applications in Urbana-Champaign, Ill., and the San Diego Supercomputer Center in California.
DTF will tie together four high-performance clusters with a combined theoretical peak speed of 11.6 trillion floating-point operations per second. Besides NCSA and SDSC, the teragrid will link more than 100 other institutions that have partnerships with the supercomputer centers under other NSF programs.
IBM Corp. will build the Linux clusters using Intel Corp.'s second-generation 64-bit Itanium microprocessor, code-named McKinley and slated for production in the first half of next year.
Qwest Communications International Inc. of Denver is the prime contractor for the optical network, which NCSA director Dan Reed said would run 16 times faster than today's speediest research network.
NCSA and SDSC, which have long functioned as national high-performance computing centers, will host the two largest DTF clusters.
NCSA's cluster, devoted to computer simulations, will have a peak speed of 6.1 TFLOPS and will be the world's largest Linux cluster, Reed said. Existing NCSA machines will supplement the cluster to reach 8 TFLOPS if necessary.
SDSC's 4-TFLOPS cluster will focus on data management and knowledge management across the teragrid.
The Energy Department's Argonne National Laboratory in Illinois and the California Institute of Technology in Pasadena will host smaller clusters. Argonne's 1-TFLOPS cluster will support high-resolution rendering and visualization software. Caltech's 0.4-TFLOPS cluster, replacing a 32-node Linux cluster with 32-bit processors, will deliver scientific data to teragrid researchers.
Along with NASA, Energy and the Defense Advanced Research Projects Agency, NSF has been funding grid-computing research since the mid-1990s. An Argonne project called Globus has been developing middleware that enables distant researchers to tap into supercomputers.
Defense interest
In November, a group led by Globus researchers will set up a grid to link a supercomputing conference in Denver with scientists around the world. The Defense Department's High-Performance Computing Modernization Program also has studied secure grid-computing methods.
'It takes a community to build a grid,' said Rick Stevens, an Argonne and University of Chicago scientist who will serve as DTF project director. 'This is something that's harder than any single institution could do alone.'
DTF, scheduled to start operating in mid-2002 and to reach peak performance in April 2003, will serve researchers in climate prediction, earthquake prediction, astronomy, gene sequencing and pharmaceuticals. Scientists in those fields are scattered across multiple locations, Reed said.
The research generates enormous quantities of data and requires new tools to collect and synthesize it, SDSC director Fran Berman said in announcing the award. She said she hopes the teragrid will bring new understanding of brain functions and cancer drug interactions with cells.
'We view the DTF as the beginning of the 21st-century infrastructure for scientific computing,' Reed said.
Besides IBM, Intel and Qwest, DTF partners include Oracle Corp., Sun Microsystems Inc. and high-speed networking company Myricom Inc. of Arcadia, Calif.