Experts want supercomputing resources pooled
Following the Energy Department's announcement of a new supercomputer designed to be the world's fastest, the White House and Congress are demanding better coordination of federal supercomputing resources.
The White House's Office of Science and Technology Policy has recommended that agencies with supercomputers should share resources. And House legislation calls for greater interagency collaboration, as well as central oversight of federal supercomputing resources.
Last year, the White House formed a High-End Computing Revitalization Task Force of representatives from agencies and industry to draw a roadmap for federally funded supercomputer development.
The task force's report, Federal Plan for High-End Computing, recommended that agencies with supercomputers coordinate their resources. It also outlined broad areas needing development.
The High-Performance Computing Revitalization Act, HR 4218, introduced by Rep. Judy Biggert (R-Ill.), would establish an interagency advisory committee to oversee supercomputing development and deployment.
Agencies are failing to find the computational power they need for large research projects, John H. Marburger, director of the White House policy office, told a House Science Committee hearing recently. The industry's shift toward clusters of commercial servers has shrunk the high-performance market and driven up costs.
Dan Reed, a professor at the University of North Carolina at Chapel Hill, told the science committee that some problems cannot be solved at all by the current generation of computers.
Supercomputers should be treated as national resources available to all agencies and their constituents, Marburger said.
Most government supercomputers aren't freely available to outside researchers, other speakers testified. Rick Stevens, director of the National Science Foundation's TeraGrid project, said current policy bars NSF from giving outside researchers access to its own supercomputers.
The Energy Department's new supercomputer at Oak Ridge National Laboratory in Tennessee could operate as a service for other agencies, as well as for private industry, speakers said.
Energy gave Oak Ridge $25 million to develop a computer with a sustained capability of 50 trillion floating-point operations per second (50 TFLOPS) and a peak capability of more than 250 TFLOPS.
It will be available for unclassified research by academic institutions and private companies, said Ray Orbach, director of Energy's Office of Science. Use will be free so long as the results are published in a public forum, Orbach said.
Five-year plan
The total expected cost will run from about $150 million to $200 million, and the machine should be fully operational within five years, Orbach said.
Oak Ridge will roll it out in phases, with the first component online later this year, said Michael Strayer, director of the Scientific Discovery through Advanced Computing program in Energy's Office of Science. It will have both vector and scalar processors to handle problems suited to either architecture.
The lab will start with a Cray X1 platform from Cray Inc. of Seattle, already installed at Oak Ridge, and upgrade it to 20 TFLOPS. An additional 20 TFLOPS will be added in 2005, followed by a 100-TFLOPS Cray system in 2006.
IBM Corp. will supply its Blue Gene servers for a 5-TFLOPS portion at Argonne National Laboratory in Illinois, which is partnering with Oak Ridge on the effort.