HP Apollo 8000 water-cooled system packs more power, less heat for data centers

Hewlett-Packard has introduced a new supercomputer server system cooled by warm water. HP describes its Apollo 8000 as “the world's first water-cooled supercomputer with dry-disconnect servers, delivering liquid cooling without the risk.”

The technology is based on work HP did with the National Renewable Energy Lab (NREL) on the Peregrine supercomputer.

Water is roughly 1,000 times more efficient than air at carrying away heat. According to HP, compared with air-cooled systems the Apollo 8000 can provide up to four times more teraflops per square foot and 40 percent more FLOPS per watt, and can save more than 3,800 tons of CO2 annually. Antonio Neri, HP's head of servers and networking, said the supercomputer requires 28 percent less energy than air-cooled systems, according to a Wall Street Journal blog.

The system uses sealed heat pipes to circulate water past the cores and is paired with an HVAC power distribution system and an iCDU (cooling) rack for high efficiency. Because of its cooling power, the system can pack more servers in a smaller space than traditional systems – up to 144 servers per rack.

The massive amounts of space and energy that supercomputers require have been a limiting factor in scaling up processing power. In fact, utility and energy provider ComEd reports that the cooling equipment used to remove heat accounts for nearly 45 percent of a data center's energy costs.

“The HPC world has hit a wall in regard to its goal of achieving Exascale systems by 2018,” said Peter ffoulkes, research director at 451 Research, in a Scientific Computing article. “To reach Exascale would require a machine 30 times faster. If such a machine could be built with today’s technology it would require an energy supply equivalent to a nuclear power station to feed it. This is clearly not practical.”

Jim Ganthier, HP Servers’ vice president of global marketing, concurs. “The present course is unsustainable. If you continue the present course/speed over the next five years, you would need a gigawatt of power, the output of Hoover Dam, and 30 football fields to accommodate the space requirements,” he told CRN.

Today supercomputers are primarily used by government research organizations, such as the Department of Energy laboratories, and corporations that require massive computing power for complex calculations.

However, there is an ever-increasing need across all sectors for processing power as big data continues to grow. Recently, Lawrence Livermore National Lab announced it is making its Catalyst supercomputing cluster available to industry, universities and other collaborators to test big data technologies, architectures and applications.

Other providers, including IBM, also sell water-cooled supercomputers, and the technology is spreading. According to The Register, some of the world's top 500 supercomputers may start using the technology by mid-2015.

One of the secondary benefits to liquid cooling of high-performance computers is that the process also allows data centers to use the heat transferred to the water to heat office space and laboratories – improving waste-heat recovery and reducing water consumption in the data center.

The Peregrine system at NREL’s Energy Systems Integration Facility in Golden, Colo., uses 75-degree Fahrenheit water for cooling. “That temperature allows us to cool the data center effectively, without compromising the IT equipment, without any chillers,” said Steve Hammond, director of Computational Sciences at NREL. By the time the water returns from Peregrine, it is about 95 degrees, creating a ready-made source of heat for the facility.
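The scale of that heat recovery follows from basic thermodynamics: water entering at 75 °F and leaving at 95 °F carries off energy proportional to the flow rate and the 20-degree temperature rise. The sketch below estimates the recoverable heat per unit of flow; the two temperatures come from the article, while the flow rate is a purely illustrative input, not a published Peregrine figure.

```python
# Illustrative estimate of heat recoverable from a warm-water cooling loop.
# Supply (75 F) and return (95 F) temperatures are from the article; the
# flow rate passed in is a hypothetical example value.

def recoverable_heat_kw(flow_gpm, t_in_f=75.0, t_out_f=95.0):
    """Heat carried off by the water loop, in kilowatts."""
    delta_t_k = (t_out_f - t_in_f) * 5.0 / 9.0   # Fahrenheit rise -> kelvin
    mass_flow_kg_s = flow_gpm * 3.785 / 60.0     # 1 US gal of water ~ 3.785 kg
    specific_heat = 4186.0                       # J/(kg*K) for liquid water
    return mass_flow_kg_s * specific_heat * delta_t_k / 1000.0

# Each gallon per minute of loop flow carries roughly 2.9 kW of heat that
# can be redirected to warm offices and labs instead of being rejected.
print(round(recoverable_heat_kw(1.0), 1))
```

At data-center flow rates of hundreds of gallons per minute, this works out to hundreds of kilowatts of building heat, which is consistent with the substantial heating savings NREL reports below.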

NREL expects to save $1 million per year in operations costs – $800,000 in server cooling costs and $200,000 in building heating costs – by using HP’s warm-water cooling technology, according to an HP case study on the project.

The National Security Agency is going one step further in data center energy conservation. It will be using wastewater to cool its servers at its data center in Fort Meade, Md. Up to five million gallons a day of treated wastewater, also known as graywater, will be used for cooling systems at the data center, due to open in 2016.

