Air flow control can yield more efficient data centers

Projects explore new techniques for improving air distribution in data centers

You don’t necessarily have to spend money to save money on data center power use. The Energy Department’s Lawrence Berkeley National Laboratory (LBNL) is engaged in several projects with industry partners to demonstrate how cooling and information technology systems can work together to more effectively manage air flow in data centers, thus improving energy efficiency.

Editor's Note

This story is part of an 1105 Government Information Group special report on green IT. GCN's affiliate publications, Federal Computer Week and Washington Technology, will explore the implications of the 2005 Energy Policy Act and examine the business case for green IT, respectively. The complete 1105 GIG report is available online.

One project, with Intel, IBM and Hewlett-Packard, funded by the California Energy Commission, will explore the possibility of using temperature sensors that are already inside servers to directly control the computer room air conditioning (CRAC) units that regulate the flow of cool and hot air into and out of the data center.

The idea is to ensure that the right amount of cool air is being delivered to the server inlet. CRAC systems in most data centers typically focus on cooling the entire room, but that can result in uneven and inefficient distribution.

Additionally, LBNL is working on a separate demonstration to show how data center managers can use wireless temperature sensors to directly control computer room air handlers, which push air into ducts, said William Tschudi, project leader of LBNL’s Building Technologies Department.

“The idea is you have a finer mesh of being able to monitor temperature and then control the computer room air handlers to give [the facility] exactly what it wants rather than oversupplying air,” Tschudi said.

“We’re also working with Sun Microsystems on the demonstration of different cooling technologies," he said. "All companies are trying to demonstrate different pieces” to improve energy efficiency. The results of some of these demonstrations will be shared with people attending the Silicon Valley Leadership Group’s conference in the fall, he added.

Demonstrations on air management and cooling techniques are just part of government and industry efforts to advance innovation and spur greater energy efficiency in data centers. The Environmental Protection Agency is leading efforts to establish an Energy Star specification for enterprise servers so IT managers can buy systems that deliver performance but reduce energy consumption. EPA also is working on an Energy Star rating for data centers with the Green Grid, a consortium of industry and government organizations.

But measuring energy efficiency in data centers could be a tougher nut to crack, experts say.

A view of two networks

The LBNL and Intel demonstration is slated to happen by the summer.

“Right now we’re working on how to get the IT network to work with the building control network,” Tschudi said.

Those two networks are separated, but the LBNL/Intel team is developing a management console that will give data and facility managers a view of both networks, said Michael Patterson, senior power and thermal architect at Intel’s Eco Technology Program Office.

The demonstration is being conducted at an Intel data center in California, which has IBM and HP servers and Liebert CRAC units, Patterson said.

The goal is not to develop a product, Patterson added. Because the California Energy Commission is funding the project, the goal is to document the results of the demonstration so data center and facility operators can learn from the team’s efforts.

“They can learn what the challenges are, how we did the interconnection and what some of the tricky bits were so if they want [they can] implement the same control strategy into their data center,” Patterson said. “So they can go into it smart rather than blindly and hoping for success.”

Blowing hot and cold

CRAC systems in most data centers pump pressurized air to maintain server inlet temperatures within a proper range. The American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) recommends inlet temperatures between 64.4 and 80.6 degrees Fahrenheit. ASHRAE also recommends a dew point between 41.9 and 59 degrees Fahrenheit.
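As a quick illustration of how those recommendations might be used, a monitoring script can flag server inlets that fall outside the ASHRAE band. This is a hypothetical sketch; the rack names and readings are made up.

```python
# Check server inlet readings against the ASHRAE recommended envelope
# described above (hypothetical sensor values for illustration).

ASHRAE_INLET_MIN_F = 64.4   # recommended minimum inlet temperature
ASHRAE_INLET_MAX_F = 80.6   # recommended maximum inlet temperature

def inlet_in_range(temp_f):
    """Return True if a server inlet temperature is within the ASHRAE band."""
    return ASHRAE_INLET_MIN_F <= temp_f <= ASHRAE_INLET_MAX_F

readings = {"rack1-u02": 68.0, "rack1-u40": 83.1, "rack2-u02": 71.5}
out_of_range = {name: t for name, t in readings.items() if not inlet_in_range(t)}
print(out_of_range)  # rack1-u40 exceeds the recommended maximum
```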

The cooling units are positioned around the perimeter of a standard data center and come in a couple of configurations: the CRAC units either receive chilled water from the building’s central utility plant, or the facility has localized air-conditioning plants, one for each CRAC unit, Patterson said.

Each unit has a cooling component that removes heat and, at the same time, an air flow component. Motorized fans in the unit move the cool air around the room, usually beneath raised floors and up through perforated tiles to servers mounted in racks. The hot air is blown out of the servers, usually into hot aisles, and returned to the cooling unit.

During the era of mainframe computers, there was no need for air flow segregation. A lot of cold air was dumped into the room, the computers released heat back into the room, and that heat was returned to the cooling unit.

Ultimately, managers shouldn’t be concerned with the temperature returning to the CRAC unit, Patterson said. What really matters is the inlet temperature to the server.

“To maximize efficiency you want to have just enough air flow and just enough cooling through chilled water or the refrigeration system in the CRAC,” Patterson said. “You can’t get this balance with the temperature sensor in the return air to the CRAC unit.”

However, you can if you tap into the temperature at the inlet of the server. Most server manufacturers put a front panel temperature sensor in their systems that reads the temperature of the air coming into the server, Patterson explained.

“If we can control that temperature and provide the front of the server with enough air flow, then we will have done our job to provide the most efficient cooling possible,” he said.
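The control idea Patterson describes can be sketched as a simple proportional loop that drives CRAC fan speed from the hottest server inlet rather than from the return-air sensor. The setpoint, gain and speed limits below are illustrative assumptions, not values from the project.

```python
# Minimal sketch: drive fan speed from the hottest server inlet temperature
# instead of the return-air sensor. All constants are assumed for illustration.

SETPOINT_F = 75.0                 # target for the warmest server inlet
GAIN = 0.05                       # fan-speed change per degree of error (assumed)
MIN_SPEED, MAX_SPEED = 0.3, 1.0   # fan speed as a fraction of full speed

def next_fan_speed(current_speed, inlet_temps_f):
    """Proportional step: speed up if the hottest inlet is above setpoint."""
    error = max(inlet_temps_f) - SETPOINT_F
    new_speed = current_speed + GAIN * error
    return max(MIN_SPEED, min(MAX_SPEED, new_speed))

# Hottest inlet is 2 degrees over setpoint, so the fan speeds up slightly.
speed = next_fan_speed(0.5, [70.2, 73.8, 77.0])
print(round(speed, 2))  # 0.6
```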

Essentially, the demonstration is intended to show how data center and facility operators can replace the control functionality of the cooling system with instrumentation that is already in the servers.

“We’re not saying add extra sensors or redesign servers or spend additional money when a new data center is spun out. The beauty of the project is that we are demonstrating the integration of the facility and the computer rather than leaving a wall between them,” Patterson said.

Thermo map

The building control system will be able to communicate with the management server that monitors systems for hard drive failures or memory upgrades, and request the front panel temperature readings. The team is deploying some complex algorithms that let the sensors tell the cooling system whether the air is cold enough, which in turn drives the chilled-water pump, Patterson said.

The team also will use sensors to measure the temperature at the bottom and top of the server rack to determine if there is enough air flow. Too little air flow means a large temperature differential between the bottom and top.
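That bottom-to-top check can be sketched in a few lines: a large temperature rise across the rack suggests it is not getting enough air. The 10-degree threshold here is an assumed value for illustration, not one from the project.

```python
# Sketch of the rack delta-T check: a large bottom-to-top temperature rise
# suggests the rack is not getting enough air flow. The threshold is assumed.

DELTA_T_LIMIT_F = 10.0

def airflow_ok(bottom_f, top_f):
    """Flag a rack whose top-of-rack inlet runs far hotter than the bottom."""
    return (top_f - bottom_f) <= DELTA_T_LIMIT_F

print(airflow_ok(68.0, 74.0))   # True: modest rise, air flow is adequate
print(airflow_ok(68.0, 82.0))   # False: 14-degree rise, deliver more air
```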

“With this thermo map of server inlets, we are going to have the control system be smart to modulate the whole load to significantly reduce the amount of energy we’re going to be using in the data center,” Patterson said.

The project team is expecting a more than 70 percent reduction in energy use in the particular cooling units, he said. Most data centers run the fans in the cooling systems at 100 percent all the time.

“We only need 47 percent of the peak air flow on the average, so we’re going to only use 10 percent of the power compared to if these [cooling system fans] were turned on to run at full speed,” Patterson said.
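The arithmetic behind that claim follows from the fan affinity laws: fan power scales roughly with the cube of fan speed, so running at 47 percent of peak air flow takes about 0.47³, or roughly 10 percent, of peak fan power.

```python
# The 47-percent flow / 10-percent power claim follows from the fan affinity
# laws: fan power scales roughly with the cube of fan speed (air flow).

def relative_fan_power(flow_fraction):
    """Fan power as a fraction of full-speed power (cube law)."""
    return flow_fraction ** 3

print(round(relative_fan_power(0.47), 3))  # 0.104 -> about 10% of full power
```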

Benchmarking data centers

There is no silver bullet for improving energy efficiency in data centers, LBNL’s Tschudi said. A lot of areas interact with one another, and improvements can be made in power conversion and distribution, load management, server innovation and cooling equipment, he said.

But coming up with metrics to benchmark those improvements could be difficult, some industry experts say.

“We suspect the federal government is the largest operator of data centers probably in the world,” said Andrew Fanara, the EPA Energy Star product development team lead. As such, the opportunity is there for the federal sector to lead the way in improving data center operations, he said.

However, there has to be a way for data center operators to benchmark performance against the entire facility and measure against themselves over time to improve their efficiency, he added.

EPA has worked with various types of facility managers to come up with Energy Star ratings for facilities from schools to supermarkets. So EPA decided to design a benchmark specifically for data centers, whether they are in a stand-alone facility or inside another commercial office building. The agency is working with the Green Grid to fine-tune that protocol, Fanara said. It will provide advice to data center operators on measuring the performance and energy efficiency of IT equipment.

“Unless you have the means to measure your performance, how do you know the investments are taking you in the right direction?” Fanara asked.

At the end of the research and analysis stage, EPA could have an Energy Star benchmark for data centers, though the analysis isn’t finished, he said.

So far, the Green Grid has proposed the Power Usage Effectiveness (PUE) benchmark and its reciprocal, Data Center Infrastructure Efficiency (DCiE), which compare a data center's total facility power with its IT load.

After initial benchmarking using the PUE/DCiE metrics, data center operators have an efficiency score. They can then set up a testing framework for the facility to repeat and can compare initial and subsequent scores to gauge the impact of ongoing energy efficiency efforts.
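As a sketch of that scoring step, both metrics come straight from two metered numbers: PUE is total facility power divided by IT equipment power, and DCiE is its reciprocal. The kilowatt figures below are made-up illustrations.

```python
# PUE and DCiE from metered power, as the Green Grid defines them:
# PUE = total facility power / IT equipment power; DCiE = 1 / PUE.
# The kilowatt figures below are made-up illustrations.

def pue(total_facility_kw, it_load_kw):
    """Power Usage Effectiveness: lower is better, 1.0 is ideal."""
    return total_facility_kw / it_load_kw

def dcie(total_facility_kw, it_load_kw):
    """Data Center Infrastructure Efficiency: the reciprocal of PUE."""
    return it_load_kw / total_facility_kw

print(pue(1000.0, 500.0))   # 2.0 -> one watt of overhead per IT watt
print(dcie(1000.0, 500.0))  # 0.5 -> half the facility power reaches IT gear
```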

DOE has also developed the Data Center Energy Profiler tool, which offers a first step to help companies and government agencies identify potential savings and reduce environmental emissions associated with energy production and use.

DC Pro, an online tool, provides a customized overview of energy purchases, data center energy use, savings potential and a list of actions that can be taken to realize these savings.

Microsoft has developed a suite of sophisticated reporting tools to measure efficiency in its own data centers, said Kim Nelson, executive director of e-government at Microsoft.

The company uses the Green Grid’s PUE but has its own tools — based on business intelligence capabilities — that measure server utilization and CPU and wattage usage per server.

“We measure PUE and carbon emission factors that are generated by where you live geographically," she said. "We’ve been reporting those to EPA.”

Because EPA collects information from different organizations and agencies, Microsoft will need to evaluate whether the information it has given to the agency can be reasonably collected across the board.

IBM officials also have collected information on energy efficiency in the company's data centers and sent it to EPA for consideration and analysis.

“We provided a year’s worth of data on six of our data center buildings to EPA as part of their data collection process for the Energy Star building work for data centers,” said Jay Dietrich, program manager for IBM’s corporate environmental affairs group.

There is no meaningful metric for measuring workload at this point, Dietrich said. Data center operators can be very efficient with facilities power and IT power, but if they are not optimizing the amount of work their servers are doing, good scores on those metrics will not guarantee the most efficient answer for a particular application, he said.

For now, EPA is just going to get data on IT power and the power needed to run the facility, he said. But the agency is interested in exploring how to introduce that workload component, as is the Green Grid, Dietrich said.

Many data centers don’t have sufficient instrumentation to get the information they need to come up with some measure of efficiency in the data center, Intel’s Patterson said, adding that the company is working to promote a minimum level of instrumentation.

However, “you can’t wait until you have the right instrumentation suite out there. You may never actually start,” Patterson said.

Data center managers can still go around with a clipboard and write things down, Patterson said. “If you don’t measure, you can’t improve and you don’t know where to focus your effort for improvement,” he added.


Reader Comments

Sun, Sep 29, 2013 robert r midwest

APC rack would have the exact same issue if you had blade servers and 1Us in the same rack. Each "rack" should have a cool air supply similar to a pneumatic air hose line, and tapping and directing the supply to feed server/router/chassis air intakes should be easy. The only reason cool air aisles exist is that no one can "shoot" cool air directly where needed, so they kind of just blow it in the back or front or side or bottom. You can run power and data cords to servers, so why can't you run cold air?

Mon, Oct 5, 2009

You sound like a bunch of salesmen to me. If you have some time and money, we can do some real experiments proving that it will take much more energy and resources to cool a room with these new systems than with the old traditional raised floor and hot and cold aisles. Not counting all the resources and energy it takes to try to reinvent the wheel.

Wed, Apr 15, 2009

I agree with the other comments. The study is another prime example of the government wasting taxpayers' money. Why aren't they looking at new designs? Green IT is in the same category as global warming: just another huge hype that someone is using to generate masses of money for contractors, robbing the public of funds needed elsewhere, all of which provides absolutely nothing in the end. No benefit at all.

Tue, Apr 7, 2009

This test evaluation sounds fine when viewing things through the standard method of cooling data centers. Let's put a bunch of CRAC units around the perimeter, and then let's try something new: let's let the servers decide how much cooling is required. Sounds great ... if you have all the same type of servers, in the same types of cabinets and racks, and they are all duct fed or all equidistant from the airflow outlet of the CRAC. However, what happens if a cabinet holds 2 blade servers alongside 4 small traditional servers? The blade servers are going to throw off a tremendous amount of heat, as much as 4,000 to 6,000 BTUs, while the small servers are only producing 400 BTUs each. What happens if the blade servers also happen to be the farthest away from the CRAC unit? These servers will tell the CRAC that they need more cooling and the CRAC will produce more, but it is doing so only to feed cooling to individual blade servers and, in turn, is overcooling everything in between. Reverse things, and allow the small traditional servers close to the CRAC to decide the outlet temperature, and the blade servers will overheat. These are all problems associated with the standard data center design. Coming up with reporting methods and integration between components and diverse systems is a wonderful idea that should be applauded. However, in the end, you will still be stuck with the same problems as before. The basic flaw in the system is the standard design itself. The design just doesn't have enough flexibility to cover all possible combinations for heat dispersal based on the location of the source, the location of the coolant and the amount of cooling required. Which raises the question: why wasn't a more flexible and viable design, one currently available on the open market, considered? Seriously, I was astonished to read that there was no mention at all of APC's InfraStruXure Data Center System. APC's system has multiple types of cooling methods, everything from CRAC units that sit right beside the equipment cabinets and directly cool the equipment in an individual cabinet or a whole row of interconnected cabinets, to rack-mounted cooling units that can cool and redirect airflow within the equipment cabinet itself. This is a forward-looking design, one that answers all of the questions about source, destination and cooling required. This is one solution that completely redesigns the traditional system. Methods like this are what should be under consideration and testing. What needs to happen is a complete rework and rethink of the box and its components as a new design. Don't just provide the same old box with a new paint job and a bunch of gee-whiz gizmos attached to it.

Tue, Apr 7, 2009 Dave Donovan

This is a total waste of time and money.


