Rethinking data center efficiency
The data center is evolving—and with it the concept of data center efficiency.
It’s not just that agencies are under pressure to consolidate and reduce the cost of their data center operations. They also are dealing with increasing demands for the applications and services they provide.
The old notion of a data center, a building full of servers that could be expanded at will as demand to host applications increased, is gone for good. The new environment, defined by perpetual budget constraints and ever-growing user needs, requires data centers that deliver more computing power while costing less to run and taking up less space.
The term data center efficiency has stayed the same, but the technologies that drive those efficiencies have changed, said Dennis Tolliver, enterprise server channel sales specialist at Hewlett-Packard.
“As the technology has developed, other things that we didn’t have the means to interpret 10 years ago, such as Power Usage Efficiency (PUE), have cropped up,” he said.
The Federal Data Center Consolidation Initiative (FDCCI), launched by the Office of Management and Budget in 2010, sets the standard for what is expected from government agencies. With a goal of closing at least 40 percent of the government’s 3,133 data centers by the end of 2015, the focus now has to be on “computing power and density instead of capacity,” according to federal CIO Steven VanRoekel.
Along with that comes a need for agencies to get a handle on a number of different metrics for cost savings. Cutting energy use has been a focus for some years, and power usage effectiveness (PUE), the ratio of the total amount of power used in the data center to the power used just by the IT equipment, is the most common metric used to measure data center efficiency. Now, the FDCCI mandate requires agencies to also include such things as the reduction in floor area and server rack counts in their overall cost metrics.
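The PUE ratio described above is simple arithmetic; a quick sketch, with hypothetical power figures, makes the metric concrete:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power usage effectiveness: total facility power divided by
    IT equipment power. A value of 1.0 would mean every watt reaches
    the IT gear; real data centers run higher because of cooling,
    power distribution losses and lighting."""
    return total_facility_kw / it_equipment_kw

# Hypothetical example: the facility draws 1,800 kW in total,
# of which 1,000 kW is consumed by the IT equipment itself.
print(pue(1800, 1000))  # → 1.8
```

A lower PUE means less overhead per watt of useful computing; driving the ratio toward 1.0 is what the energy-efficiency efforts mentioned above aim at.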
The availability of new technologies such as virtualization, blade servers and high-density storage is helping to change the measure of data center efficiencies. Rich Campbell, federal chief technologist at EMC Corp., thinks that the equation has shifted from how much compute power per square foot can be thrown at an application to how much compute power, storage and network capacity each application actually needs.
“Compared to ten years ago, I can put the same amount of processing power into maybe a third of the space,” he said. “That changes the efficiency model, which is no longer based on compute-power-per-square-foot but on compute-resources-per-rack-unit.”
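The shift Campbell describes, from compute per square foot to compute per rack unit, can be sketched with hypothetical numbers (the core and rack-unit counts below are illustrative, not from the article):

```python
# Hypothetical illustration: the same total compute packed into
# one third of the rack space, as in Campbell's example.
old_cores, old_rack_units = 320, 126   # older servers spread over three racks
new_cores, new_rack_units = 320, 42    # same cores consolidated into one rack

old_density = old_cores / old_rack_units  # cores per rack unit, before
new_density = new_cores / new_rack_units  # cores per rack unit, after

print(round(old_density, 2))  # → 2.54
print(round(new_density, 2))  # → 7.62
```

Total capacity is unchanged; what improves is density per rack unit, which is exactly the quantity the newer efficiency model measures.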
And when it comes to data center space efficiency, the result of the focus on density that VanRoekel talked about, organizations generally have fallen short, even though most of them have made it a priority to get more out of their available floor space.
“It turns out there’s a lot more room available than we thought,” said David Cappuccio, a managing vice president responsible for data center research at Gartner. “In both the private and public sectors, for example, they don’t come close to filling up the server racks. They’re only around 60 percent full, on average.”
Beyond any mandates to make their data centers as efficient as possible, agencies could also have a financial incentive to do so. Faisal Iqbal, manager for systems engineering in Citrix Systems’ public sector, said there are already “very forward thinking folks” at agencies who are looking to create data centers that can house several agencies and share capacity on an as-needed basis.
“Their thinking is that, if they can build a data center and make it so efficient that they’ll have extra capacity, then they can share that and charge for it,” he said. “That makes the data center a much cheaper proposition than if the agency is just operating it for itself.”