DOD advances supercomputing benchmarks
- By Joab Jackson
- Nov 22, 2004
The Defense Department's High Performance Computing Modernization Program is developing a new technical evaluation process to help estimate how well a potential system will perform, according to Cray Henry, a manager of the DOD program.
Henry spoke at a recent meeting of the Beowulf Users Group sponsored by the information technology sector of Northrop Grumman Corp., Los Angeles.
The center's approach could be used by other agencies to build 'more tailored systems,' said Michael Fitzmaurice, organizer of the Beowulf Users Group and a manager of open source solutions at Northrop Grumman.
The technique evaluates how well a system performs by having the vendor run a sample of Defense applications on a prototype of the machine, Henry said. The resulting performance metrics are weighed against more standard benchmarks, as well as price considerations.
At present, the modernization program spends between $35 million and $60 million a year on new systems, usually upgrading two systems a year that reside in one of the four Defense Department Major Shared Resource Centers. About 4,500 military personnel use the systems for approximately 560 projects. The deputy undersecretary of Defense for science and technology oversees the program.
The center began to reassess its evaluation system in 2000, Henry said. Clusters of commodity computers offered a lower-cost alternative to traditional supercomputers, so the agency needed a way to compare the two types of systems.
Henry said the team decided to develop a test comprising some of the applications that run on the program's existing computers. A proposed system should run the applications at least twice as fast as the reference system used as the benchmark, an IBM Regatta P4 system now run by the Naval Oceanographic Office.
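The twice-as-fast requirement amounts to a simple speedup ratio against the reference machine. A minimal sketch, with invented timing figures (the article gives none), might look like this:

```python
# Illustrative sketch only -- not the program's actual evaluation tooling.
# Checks whether a bid meets the "at least twice as fast as the reference
# system" requirement described in the article. Times are hypothetical
# wall-clock seconds for one benchmark application.

REQUIRED_SPEEDUP = 2.0  # proposed system must be at least 2x the reference


def meets_requirement(reference_seconds: float, proposed_seconds: float) -> bool:
    """Return True if the proposed system is at least twice as fast
    as the reference system on the same application run."""
    speedup = reference_seconds / proposed_seconds
    return speedup >= REQUIRED_SPEEDUP


# Hypothetical example: the reference system takes 1,200 seconds on an
# application; a bid's prototype takes 500 seconds (a 2.4x speedup).
print(meets_requirement(1200.0, 500.0))  # → True
print(meets_requirement(1200.0, 700.0))  # → False (only ~1.7x)
```

In practice the program runs a sample of several Defense applications, so a real check would apply this test per application; the single-application version above is just the core idea.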
'We run our codes on that system and say that is a reference type. We want people to bid things to us at that type,' Henry said. The program's users tend to run larger Fortran-based mathematical applications such as Aero, which measures elasticity of airplane parts, and Hycom, a modeling program for ocean circulation.
The application benchmarks performed by the bidding vendors are weighed against a series of standard industry benchmarks that measure I/O speeds, operating system performance, memory, and network and CPU speeds of individual systems. These performance metrics are then weighed against the price of each system. The center also looks at a number of qualitative factors, such as how comfortable the administrators are working on a particular system.
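The weighing process described above can be thought of as a price/performance score. The sketch below is purely hypothetical: the article does not publish the program's weights or formula, so the weights, figures, and vendor names here are invented for illustration.

```python
# Hypothetical sketch of weighing application-benchmark results and
# industry-benchmark results against price, as the article describes.
# All weights and numbers are invented; the DOD program's actual
# formula is not public in this article.

def score_bid(app_speedup: float, industry_score: float,
              price_millions: float,
              w_app: float = 0.6, w_industry: float = 0.4) -> float:
    """Weighted performance divided by price: higher is better.

    app_speedup    -- average speedup on sample Defense applications
    industry_score -- normalized score on standard industry benchmarks
    price_millions -- system price in millions of dollars
    """
    performance = w_app * app_speedup + w_industry * industry_score
    return performance / price_millions


# Two hypothetical bids: vendor A is faster on the sample applications,
# vendor B is cheaper and scores better on industry benchmarks.
bids = {
    "vendor_a": score_bid(app_speedup=2.4, industry_score=1.8, price_millions=40.0),
    "vendor_b": score_bid(app_speedup=2.1, industry_score=2.2, price_millions=35.0),
}
best = max(bids, key=bids.get)
print(best)  # → vendor_b (better price/performance under these weights)
```

Qualitative factors such as administrator comfort would sit outside a formula like this, which is one reason the article describes them as a separate consideration.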
It takes an average of nine months to procure a new system, from assessing computational requirements to making the final purchase through the General Services Administration, Henry said.
'This is the first attempt to measure and quantify performance relative to different systems,' Fitzmaurice said. 'Essentially, what happens [now] is people go out and buy computers by the gigaflop. They don't measure the specific codes.'
Joab Jackson is the senior technology editor for Government Computer News.