Throwing off heat
New systems direct airflow to keep high-density data centers cool
The predictions of information technology analysts don't always come true. But Gartner Group was on target when it predicted two years ago that half the world's data centers would run out of power by the end of 2008 because of dense computing architectures and the need to remove all the heat they generate. The Energy Department's Stanford Linear Accelerator Center (SLAC) in Menlo Park, Calif., reached that point in mid-2007.
"The data center was built about 30 years ago for water-cooled mainframes, and we have filled it up with rack-mount, high-density Linux systems," said Chuck Boeheim, SLAC's assistant director for computing. "The computing capacity needs to at least double every 18 months to keep up with the volume of research data, but we didn't have enough power or cooling coming into the building to be able to do any more expansion."

The box outside
He was able to meet this year's computing requirements by installing a Sun Microsystems Project Blackbox, a data center in a shipping container, on a concrete pad outside the main building. He hooked the Blackbox into an electrical substation, installed a water-chiller unit, and connected the box's 252 Sun x220 M2 quad-core servers to an existing cluster and storage inside the building. The box was delivered in July and went live Sept. 17.
That proved to be a viable solution for SLAC's cooling needs, but most agencies need to improve the cooling within existing buildings or design the most efficient systems for new facilities. It is a huge task. Organizations in the United States spent about $2.7 billion on data center electricity in 2005, about half of it on cooling, and DOE said 10 percent of that energy was used by federal data centers. Executive Order 13423, "Strengthening Federal Environmental, Energy, and Transportation Management," issued by the White House in January 2007, directs agencies to reduce their power consumption by 30 percent by fiscal 2015.
Here are some strategies that can help you design or purchase a data center cooling system that is energy-efficient but also meets the needs of high-density server architectures.

Hot topic
The basic function of any data center cooling system is to transfer heat from the processor outside the building. This traditionally has been done by chilling the air in the entire room, but that approach is no longer adequate.
'With high-density and blade server deployment, we are experiencing increased cooling capacity challenges,' said Fred Porzio, project leader at the Defense Communications and Army Transmission Systems at Fort Monmouth, N.J.
During the past few years, data center power consumption per square foot has skyrocketed.
"It used to be that 50 watts per square foot was very dense," said Vali Sorrell, a mechanical engineer at Syska Hennessy Group, which specializes in cooling. "Now we are seeing 200 to 300 watts per square foot, and there is talk of going to even more dense configurations."
This leads to two problems. The first is that total heat production in the data center rises, requiring a larger cooling system. The other is the difficulty of getting cooling exactly where it is needed to handle hot spots when individual racks are running in the dozens of kilowatts. Both problems can be addressed by improving airflow or using liquid cooling.
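To put those densities in scale, nearly every watt delivered to the racks leaves as heat that the cooling plant must remove. A rough sizing sketch (the room size here is illustrative; the conversion factors are the standard 3.412 BTU/hr per watt and 12,000 BTU/hr per ton of refrigeration):

```python
# Rough cooling-load estimate: essentially all electrical power drawn by
# IT equipment ends up as heat the cooling system must reject.
# 1 watt = 3.412 BTU/hr; 1 ton of cooling = 12,000 BTU/hr.

def cooling_tons(watts_per_sq_ft: float, floor_area_sq_ft: float) -> float:
    """Return the refrigeration tonnage needed to remove the heat load."""
    total_watts = watts_per_sq_ft * floor_area_sq_ft
    btu_per_hr = total_watts * 3.412
    return btu_per_hr / 12_000

# A 1,000-square-foot room at the older 50 W/sq ft density:
print(round(cooling_tons(50, 1000), 1))    # 14.2 tons
# The same room at 300 W/sq ft:
print(round(cooling_tons(300, 1000), 1))   # 85.3 tons
```

A sixfold jump in density means a sixfold jump in tonnage for the same floor space, which is why older plants run out of headroom.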
Many data center cooling problems can be addressed simply by adopting procedures to help the cool air find its way to the processors. The low-hanging fruit includes sealing cable cutouts in the raised floor, removing under-floor obstructions and sealing the gaps between server racks. Sorrell said hot spots can often be solved simply by following industry best practices rather than spending $35,000 to do a computational fluid dynamics study of airflow. When necessary, greater control of airflow can be achieved by sealing cabinets and ducting air directly into them. Sorrell also advised switching from a raised-floor cooling system to one with overhead ducts.
'You can design it to deliver the same amount of cooling with less air than with under-floor cooling,' he said. Also, since cold air sinks, it does a better job of cooling all the servers in a rack. With raised-floor cooling, sometimes the cold air never makes it to the servers at the top of the rack.
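The less-air claim follows from the standard sensible-heat equation for air, Q = 1.08 × CFM × ΔT: the larger the temperature rise you can tolerate across the servers, the less air you need to move to carry the same heat. A sketch (the 10-kilowatt rack and temperature figures are illustrative):

```python
# Standard sensible-heat equation for sea-level air:
#   Q (BTU/hr) = 1.08 * CFM * dT (degrees F)
# Better-targeted air delivery lets the servers run a larger dT,
# so the same heat load is carried by less air.

def required_cfm(heat_watts: float, delta_t_f: float) -> float:
    """Airflow (cubic feet per minute) to carry away heat_watts
    at a given temperature rise across the equipment."""
    btu_per_hr = heat_watts * 3.412
    return btu_per_hr / (1.08 * delta_t_f)

# A 10 kW rack with a 20-degree rise across the servers:
print(round(required_cfm(10_000, 20)))   # about 1,580 CFM
# The same rack if mixing limits you to a 10-degree rise:
print(round(required_cfm(10_000, 10)))   # about 3,159 CFM
```

Halving the usable temperature rise doubles the air you must move, which is fan energy wasted.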
To make sure the air is getting where it is needed, Sorrell advised placing a large number of temperature sensors around the data center to ensure the server inlet air is at the optimum temperature.
With proper airflow, you don't need to waste money overcooling the air: the American Society of Heating, Refrigerating and Air-Conditioning Engineers recommends that server inlet air be between 68 and 77 degrees Fahrenheit. The room air doesn't need to be in the 50s. Even with improved airflow, certain parts of the data center could require supplemental cooling.
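As a sketch of the check those inlet sensors support (the sensor names and readings here are hypothetical):

```python
# Flag any server-inlet sensor reading outside the recommended
# 68-77 degree F band, assuming readings arrive as a dict mapping
# a sensor location to degrees Fahrenheit.

ASHRAE_MIN_F = 68.0
ASHRAE_MAX_F = 77.0

def out_of_range(readings: dict[str, float]) -> dict[str, float]:
    """Return the sensors whose inlet air falls outside the band."""
    return {loc: t for loc, t in readings.items()
            if not ASHRAE_MIN_F <= t <= ASHRAE_MAX_F}

readings = {"rack-12-top": 81.5, "rack-12-mid": 74.0, "rack-07-top": 69.2}
print(out_of_range(readings))  # {'rack-12-top': 81.5}
```

A hot reading at the top of a rack and a normal one at mid-height is the classic signature of cold air failing to reach the upper servers.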
"When there are tremendous heat loads, we have to get rid of the heat more efficiently, which means getting the cooling closer to where the heat is generated," said Robert McFarlane, principal at Shen Milsom and Wilke, a high-tech consulting firm. "The only way to do that is to have something running in the cabinet or through the device itself that removes the heat. At the moment, the only two things that do that are water or refrigerants, which is why we are seeing more direct-cooling devices in the data center."
Hewlett-Packard, IBM, Rackable Systems and other vendors have rack-mounted, liquid-cooling systems for their servers, as do infrastructure vendors American Power Conversion and Liebert.
Porzio recently used APC's InfraStruXure system with rack-mounted cooling units when he overhauled a data center in South Korea.
The in-row heating, ventilation and air conditioning units efficiently provide airflow to cool where it is needed, while traditional computer room air conditioning (CRAC) units flood the entire room with cold air, which is inefficient and costly, he said. "A bonus is that the solution is mobile, giving the Army flexibility to move the HVAC and [uninterruptible power supply] to a new location, should mission requirements change."
Some people worry about bringing water into the data center, but McFarlane pointed out that liquid is always used. The only difference is where the liquid/air heat exchange occurs: at the chiller, inside a CRAC unit on the data center wall or directly on the server racks.
In addition to the basic elements of chillers, heat exchangers, ducts and fans, you need to include temperature and humidity sensors and a control system when buying a cooling system.
For simplicity, the control system should integrate with other infrastructure management and reporting systems. You might want to include remote monitoring of the infrastructure as an additional safety measure.
Finally, you may want to consider an outdoor air economizer, a system that, when the temperature is right, pulls in outside air to cool the data center rather than running the chiller.
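The economizer's control decision can be sketched as follows (the temperature and humidity thresholds here are illustrative, not from the article; a real controller would also handle filtration and humidity conditioning):

```python
# Hypothetical economizer decision: use outside air only when it is
# cool and dry enough to serve as the supply air directly.

def use_outside_air(outside_temp_f: float, outside_rh_pct: float,
                    supply_setpoint_f: float = 68.0,
                    max_rh_pct: float = 60.0) -> bool:
    """True when outside air can replace chiller-cooled supply air."""
    return (outside_temp_f <= supply_setpoint_f
            and outside_rh_pct <= max_rh_pct)

print(use_outside_air(55.0, 45.0))  # True: the chiller can stay off
print(use_outside_air(85.0, 45.0))  # False: too warm, run the chiller
```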
This saves electricity and can also act as a backup source of cool air.

All together
Depending on your needs, you can go with a single vendor or assemble the pieces from several. Liebert and APC both have a full array of power and cooling products, ranging from a complete data center down to a single rack. When building a data center, it may be possible to satisfy all your needs from a single source. When addressing hot spots in an existing room or bringing in new equipment, you may also want to explore other sources, including the in-rack cooling systems from the server vendors.
Checklist: Data center cooling

Figuring out cooling requirements can be a complex undertaking. Here are some of the questions you'll want to answer.
• What are the current and projected total cooling requirements for the data center?
• Is there a supply of chilled water available from building facilities to adequately cool the data center, or will the data center need to provide its own? Is the supply adequate to meet current and future needs?
• Is air cooling enough, or is supplemental cooling needed for certain hot spots?
• Will you bring air in through a raised floor or through the ceiling?
• How will the air temperature and humidity be monitored and controlled?
• Is it viable to use outside air as a primary or supplemental source? If so, what additional filtration and humidification/dehumidification will be needed?
• How will the environment be sealed to avoid leakage and recirculation between hot and cold sections of the room?
• Does the system meet the American Society of Heating, Refrigerating and Air-Conditioning Engineers' temperature and humidity standards?
• Are the chillers, air sensors and controls on the backup power system in case of a blackout? Are there redundant units?
• Does the control system integrate with the rest of the infrastructure, or does it require a separate server and interface?