High-performance computing program tests the leading edge

by Patricia Daukantas

Over the last six years, Defense Department laboratories have seen a 50-fold increase in computing capacity, thanks to rapid hardware and software development, said Cray J. Henry, director of the Defense Department's High-Performance Computing Modernization Program.

Henry joined several speakers from federal labs at a recent Silver Spring, Md., forum highlighting SGI's new high-end Origin 3000 server, which drew several government orders before its official release [GCN, Aug. 7, Page 8].

Henry said discussion of hardware statistics should take a back seat to the DOD program's main mission of supporting the warfighter.

'It's not the systems that are important, it's what scientists do with the systems,' he said.

Complex computations

The modernization program encompasses four full-service Major Shared Resource Centers, which house DOD's largest supercomputers, and 17 distributed centers that focus on smaller projects.

One resource center, the Naval Oceanographic Office in Mississippi [GCN, Aug. 14, Page 8], is currently producing more than 7TB of data each month, Henry said.

Henry showed several visualizations of simulations run on DOD computers. Many shared a theme of analyzing fluid flow over complex surfaces, which is important in weapons systems design.

Another piece of DOD's high-performance computing effort is ocean modeling, which has commercial applications such as developing low-cost shipping routes, Henry said. Computational scientists also model groundwater flow to help clean up environmentally distressed DOD sites.

Under the modernization program, scientists conduct basic research in materials for weapons and transportation systems, Henry said.

Some calculations examine the behavior of materials at the atomic and molecular levels.

Advances in clustering technology and application software have driven high-performance computing lately, said Stan Posey, SGI's director of manufacturing industry marketing.

Researchers are getting better at making large numbers of processors work together as one system and developing software that efficiently uses multiple processors, he said.

For example, a particular automotive crash simulation now runs 14 times faster than it did a few years ago, at one-306th the cost, Posey said.

Henry Dardy, chief scientist at the Naval Research Laboratory's Center for Computational Science, said his lab acts as a 'technology insertion site' that works with academic and commercial partners.

'We need to be always at the leading edge, and supercomputing is how we get to that leading edge,' he said.

The Washington naval lab evaluates large-scale computing and networking applications for DOD, Dardy said.

His lab has been using a 32-processor SGI Origin 3000 for a couple of months and will start experimenting with 64-bit Intel Itanium processors later this year.

The Army Research Laboratory in Aberdeen, Md., recently announced that it is acquiring an SGI Origin 3000 with 768 400-MHz Mips processors, plus a 512-processor IBM RS/6000 SP system.

Army lab officials will add the supercomputers to a high-performance pool with a total peak capacity of 2 trillion floating-point operations per second [GCN, June 12, Page 1].
