The means to go green

COVER STORY: How to eke out energy savings in servers, processors, networks and storage systems
Sidebar | Supercomputing: Low and slow will be the way to go
Sidebar | Industry cuts power use at its own data centers



Editor's note: This report is part of a broader 360-degree joint reporting effort by Government Computer News, Federal Computer Week, Washington Technology and the 1105 Government Information Group. GCN covers the technology developments propelling green IT, FCW focuses on the policy and management aspects of green IT, and Washington Technology looks at its effect on contractors and suppliers. The full collection of stories is available here.


Want to make your data center more environmentally friendly? Or just cut the power bills for your agency? The path to either goal is the same: Greater energy efficiency.

With the general public becoming more aware of energy efficiency and with the cost of the kilowatt hour creeping up, the government data center certainly could stand to sharpen its energy usage profile.

For one thing, President Bush has mandated reductions in energy usage. In January, the White House issued an executive order calling for each agency to improve energy efficiency and reduce greenhouse gas emissions either by 3 percent annually through the end of fiscal year 2015 or 30 percent by the end of fiscal 2015, depending on its current energy use profile (GCN.com/872).

Data centers would be a good place to start. They can be as much as 40 times as energy-intensive as conventional office buildings, according to a study by Lawrence Berkeley National Laboratory for the American Council for an Energy-Efficient Economy.

And a lot of that power is consumed by information technology equipment. Server sprawl is taking up precious data center space and consuming a lot of power, resulting in high utility bills.

The combination of memory, disks and network interfaces can exceed the power consumption of a CPU.

Plus, excess hardware capacity can lead to significant energy waste.

The good news is that many technologies commercially available or soon to be available could improve the energy efficiency of data centers.

Advances in virtualization technology allow data centers to pool multiple applications, servers and storage into a single source of shared resources, saving space and power. Multicore processors offer better power management and can handle more workload in parallel than a single-core chip.

Ongoing work in national laboratories and standards bodies will pave the way for more energy-efficient Ethernet networks and other networking equipment.

Meanwhile, storage devices are expected to become more efficient because of a shift to smaller hard drives and increased use of Serial Advanced Technology Attachment drives, according to a report issued to Congress on server and data center energy efficiency by the Environmental Protection Agency and industry stakeholders.

And improved management of storage resources could foster significant data center energy savings.

Here is how manufacturers and researchers are developing ways to boost energy efficiency across all the major data center components, including servers, microprocessors, networking and storage.

Servers: Utilize more, cool better

The underuse of servers is often cited as a reason for subpar energy efficiency in data centers. Efforts to get more out of existing servers could have a significant effect on energy savings in many U.S. data centers and server installations, experts say.

'Servers aren't fully utilized,' said Joe Wagner, senior vice president and general manager of system resources at Novell. The typical volume server runs at 15 percent to 30 percent utilization, compared with 70 percent to 80 percent on a mainframe system, he said.

Virtualization is one way to pool and share resources to reduce costs and optimize utilization. Users can virtually collapse workloads, Wagner said. For instance, they can merge three single servers, each running at 15 percent capacity, into one server running at 45 percent to 50 percent capacity.
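Wagner's arithmetic amounts to a small bin-packing exercise. A minimal sketch in Python illustrates the idea; the 80 percent per-host ceiling is an assumed safety margin, not a figure Novell prescribes:

```python
# Back-of-the-envelope server consolidation: greedily pack lightly
# loaded servers onto as few hosts as possible. Illustrative only.

def consolidate(utilizations, target_max=0.8):
    """Pack per-server utilization fractions onto hosts, keeping
    each host's combined load at or below target_max."""
    hosts = []
    for u in sorted(utilizations, reverse=True):
        for i, load in enumerate(hosts):
            if load + u <= target_max:
                hosts[i] += u   # fits on an existing host
                break
        else:
            hosts.append(u)     # needs a new host
    return hosts

# Three servers at 15 percent each collapse onto one host
# running at roughly 45 percent, as in Wagner's example.
hosts = consolidate([0.15, 0.15, 0.15])
print(len(hosts), round(hosts[0], 2))
```

Two of the three physical machines can then be powered down or repurposed, which is where the power and cooling savings come from.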

Novell's SUSE Linux Enterprise Server 10 with built-in Xen open-source virtualization software lets users consolidate Microsoft Windows and Linux workloads onto a single server, Wagner said. This can reduce power, cooling and space requirements.

Although virtualization also adds a new layer of complexity, Novell's ZENworks Orchestrator and virtual machine management software provides an automated, policy-based solution that can simplify virtualization operations. It can also boost energy efficiency by shutting down machines when they're not in use in addition to distributing virtual and physical workloads across the data center for maximum efficiency.

Meanwhile, major computer manufacturers are moving toward the production and marketing of more energy-efficient servers.

Several key features are the use of multicore processors with power management and virtualization capabilities, high-efficiency power supplies, and internal variable-speed fans for on-demand cooling, the EPA report to Congress states.

Dell, for example, has incorporated the Energy Smart technology used in its desktop computers into Dell PowerEdge Servers to decrease power consumption and overall operating costs.

Dell PowerEdge Servers also can work with Emerson Network Power's Liebert XD and DS, two cooling modules that use advances in refrigerants and compressors to improve the energy efficiency of the cooling process.

In developing Energy Smart, Dell took a close look at its own data center to determine which equipment was consuming the most power, said Jon Weisblatt, senior manager of solutions marketing at Dell. 'The majority of that was IT equipment.' Sixty percent of the IT power consumption was directly attached to server usage, he said.

EPA's Energy Star program has focused on data centers by supporting development of energy performance measures for servers.

'Energy efficiency is the cornerstone of what you can do to make things greener,' said Jack Pouchet, director of green initiatives at Emerson Network Power. 'Data center managers need to assess where they are today. If not, they have no idea where they are going.'

The Energy Department is working with industry stakeholders such as the Green Grid consortium to develop assessment tools within the next 18 months.

DOE has assembled the expertise to develop metrics, measurements and tools with the goal of empowering data center decision-makers, said Paul Scheihing, who works at DOE's Office of Energy Efficiency and Renewable Energy in the Industrial Technology Program.

DOE is interested in an overarching set of tools that will help data centers profile energy use and gather and quantify metrics, he said.

Processors: More cores

The two big commodity chip-makers, Advanced Micro Devices and Intel, have been developing and improving multicore chips. Further energy savings can be attributed to the development of dynamic frequency and voltage scaling in addition to virtualization capabilities, experts said.

Multicore processors contain two or more processing cores on a single die, which run at slower clock speeds and lower voltages than the cores in single-core chips but can handle more work than a single-core chip.

For example, AMD's Quad Core Opteron processor with Direct Connect Architecture provides fast input/output throughput by directly connecting input/output to the CPU, said Rick Indyke, federal business development manager at AMD.

An integrated memory controller decreases power by removing external memory controller requirements. AMD PowerNow technology with Optimized Power Management dynamically reduces processor power based on workload, giving users power savings of as much as 75 percent, AMD officials said.

The Quad Core Opteron also offers advanced Silicon-on-Insulator technology for faster transistors and reduced power leakage.

AMD Virtualization technology, which is hardware-based, lets virtualization software run multiple operating systems and applications on a single physical AMD Opteron processor-based server.

Earlier this month, Intel launched its new Quad-Core Intel Xeon processors, built on 45-nanometer technology, which offer reduced idle power levels to maximize efficiency. They achieve this through a combination of 45-nm low-leakage transistors and system-transparent energy-smart technology.

A reduction in a processor's idle power usage helps to lower average server power consumption over time during normal server operation, said Nigel Ballard, government marketing manager at Intel.

Intel VT FlexPriority, a new virtualization extension available in the latest Intel Xeon processors, optimizes virtualization software by improving interrupt handling. Intel claims it can boost virtualization performance by as much as 35 percent for 32-bit guests.

Networking: Only what you need

Servers are not the only components in the data center that draw power. Three efforts are under way at the Lawrence Berkeley National Laboratory (LBNL) to make Ethernet networks more energy efficient.

Adaptive Link Rate technology, or Energy Efficient Ethernet, focuses on letting Ethernet data links adjust their speed, and thus their power draw, to traffic levels, said Bruce Nordman, a researcher in the lab's Environmental Energy Technologies Department.

Ethernet links do not vary the rate at which data is transmitted even if little data is moving along the link. Higher data rates require a lot of power, so more energy is being used to transmit small amounts of data, LBNL researchers said.

Some computers can change a link's speed when they enter sleep mode or are turned off, but today's rate-switching process is too slow to use while a machine is idle or active.

So the solution is to change the network link speeds quickly in response to the amount of data being transmitted.
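The core policy can be sketched as a simple rate selector: step the link down to the slowest standard Ethernet rate that still covers the offered load. The per-rate power figures below are illustrative assumptions, not measurements or part of any IEEE specification:

```python
# Sketch of the Adaptive Link Rate idea. Rates are standard Ethernet
# speeds; the NIC power draws are assumed values for illustration.

RATES_MBPS = [10, 100, 1000, 10000]
POWER_W = {10: 0.5, 100: 1.0, 1000: 4.0, 10000: 10.0}  # assumed

def pick_rate(offered_load_mbps, headroom=2.0):
    """Return the lowest rate giving `headroom`x margin over the load."""
    for rate in RATES_MBPS:
        if rate >= offered_load_mbps * headroom:
            return rate
    return RATES_MBPS[-1]   # saturated: run at full speed

# A link idling at 3 Mbit/s can drop from gigabit to 10 Mbit/s,
# cutting the assumed NIC draw from 4.0 W to 0.5 W.
print(pick_rate(3), pick_rate(60), pick_rate(6000))
```

The hard part, and the reason a standard is needed, is making that rate switch fast enough that neither end of the link notices, since both link partners must renegotiate the speed together.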

LBNL is working with the Ethernet Alliance and the Institute of Electrical and Electronics Engineers' 802 standards committee to develop Adaptive Link Rate into a standard, said Mike Bennett, an LBNL researcher and chairman of the IEEE Energy Efficient Ethernet Task Force.

Another project aims to develop proxying specifications that would let PCs and other devices sleep while other equipment maintains their network presence.

There are many reasons users might need to stay connected to a network while they are not at their desktops. 'I hear of a lot of government agencies where you need to leave your machine on at night to [receive] updates such as security patches,' Nordman said. If the desktop PCs are allowed to stay in sleep mode but still are accessible, data centers could save millions of dollars on energy a year, he added.

A proxy can provide a solution, he said. There are three ways to implement proxy, according to a white paper written by Nordman and University of South Florida Professor Ken Christensen:
  • Self-proxying puts the functionality within hardware, such as a network interface card. 'The key is to not require the power-intensive main processor, memory and most buses to be active during sleep,' the paper states.
  • Switch proxying puts the functionality into the immediately adjacent network switch so that the end device doesn't have to be changed. Other devices on the network are not aware the end device is asleep.
  • Third-party proxying puts the functionality somewhere in the network other than the device or adjacent switch.

It might be good to have proxying referenced as a standard, but it is not a linchpin for moving forward and implementing some of these approaches in products, Nordman said. 'Proxying involves what a device does when it is not on.' However, Adaptive Link Rate focuses on both ends of the Ethernet network, he added, so for that to operate successfully, there has to be a standard approach for industry.

The third project LBNL is working on would establish energy efficiency specifications to help manufacturers develop and users buy network equipment that consumes less electricity.

Storage: Get smart

A lot of energy-saving effort understandably goes to hardware. But for some observers, green IT begins with green data.

The ever-increasing volume of data in storage systems could make these devices the top power hogs in the data center, said Jon Toigo, chairman at the Data Management Institute and founder of the Green Data Project. He noted that research firm IDC projects a 300 percent increase in storage devices purchased between 2006 and 2010.

But there are technology and management strategies for saving power on storage, such as storage virtualization, data deduplication, storage tiering and moving archival data to storage devices that can be shut down when not in use, said Sateesh Narahari, senior manager of marketing at Symantec. The company offers a handful of products to manage and use storage more efficiently, such as Symantec NetBackup, Symantec Veritas Cluster Server and Symantec Veritas CommandCentral Storage.

Still, Toigo questions whether efforts such as data deduplication and storage tiering, all good initiatives, are more tactical than strategic.

A strategic approach requires knowing what's on your server drives, he said.

Typically, about 40 percent of the data is inert; 30 percent is well used; 15 percent is allocated but unused; 10 percent is orphaned, meaning the owners of the data are no longer with the organization; and 5 percent is inappropriate, Toigo said. The figures come from a study he conducted with Randy Chalfant, Sun Microsystems' chief technology officer.
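Applied to a hypothetical 100-terabyte pool (the pool size and the grouping of categories into "reclaimable" and "archivable" are assumptions for illustration), Toigo's breakdown suggests how much capacity better data hygiene could recover:

```python
# Toigo/Chalfant breakdown of what typically sits on storage,
# applied to an assumed 100 TB pool for illustration.

BREAKDOWN = {
    "inert": 0.40,              # rarely touched; archive candidate
    "well_used": 0.30,          # active data; must stay on fast disk
    "allocated_unused": 0.15,   # provisioned but empty
    "orphaned": 0.10,           # owners have left the organization
    "inappropriate": 0.05,      # shouldn't be there at all
}

pool_tb = 100.0
reclaim = ["allocated_unused", "orphaned", "inappropriate"]
freed = sum(BREAKDOWN[k] for k in reclaim) * pool_tb
archive = BREAKDOWN["inert"] * pool_tb

print(f"Reclaimable outright: {freed:.0f} TB")
print(f"Movable to powered-down archive: {archive:.0f} TB")
```

Under those assumptions, only about 30 percent of the pool actually needs to live on always-spinning primary storage, which is the case Toigo makes for intelligent archiving over hardware-level fixes.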

Data center managers need to employ intelligent archiving, which gives users more specific information about the content and context of data stored on systems. Then they must deploy storage resource management technology that has thin provisioning functions so they can reclaim unallocated space, Toigo said.

Archiving selectively and intelligently is the strategic approach, he said.

'Intelligent archiving, storage resource management and data hygiene will bring in a lot greener environment than anything on the hardware level,' Toigo said.


DOE lab aims to save power by using slower processors, but more of them

LAST MONTH, when the Energy Department's Argonne National Laboratory made one of its smaller supercomputer purchases, it went with a largely unknown company, SiCortex. One of the deciding factors was the energy efficiency of SiCortex's SC5832.

'It's a very interesting machine in many ways,' said Ewing Lusk, director of the math and computer science division at the laboratory.

'We see it as a wave of the future in many ways, including the low power.'

Although Argonne has plenty of power on tap, the lab is looking at how to factor energy efficiency into tomorrow's supercomputer work.

Generally speaking, processors aren't getting any faster, so large supercomputing systems such as those Argonne runs are using more processors and splitting the work across all of them. This approach, however, leads to concerns about power and cooling. The heat that processors give off is not only lost energy; dissipating it requires additional data center cooling, and thus even more electricity. So interest has been high in 'bringing the power budget down by using slower processors that don't get so hot,' Lusk said. The performance penalty from using slower chips is made up by using more of them.

Unlike other cluster vendors, SiCortex has developed its own processor cores, which are packaged six per chip. In recent years, third-party chip fabrication plants and improvements in chip design software have made it easier for new system suppliers to develop their own microprocessors, said John Goodhue, vice president of marketing at the company.

The company didn't develop a cutting-edge microprocessor but rather built a simple processor tweaked for fast interconnect communications and low power usage. The SC5832 houses 5,832 64-bit processors.

Although each one runs at only 1 GHz, about half the lowest clock speeds offered by Advanced Micro Devices and Intel, together they can perform 6 trillion floating-point operations per second (6 teraflops).

Each node also features interprocessor communications logic, DDR2 memory controllers and PCI Express input/output logic.

As for energy usage, the machine is impressive on paper. A node in the SiCortex cluster consumes 15 watts, much less than the 250 watts or so consumed by a typical server in a cluster, the company said. Overall, this rig draws about 20 kilowatts of power.
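The article's figures roughly check out under simple assumptions: one floating-point result per core per clock cycle, and each six-core chip counted as one 15-watt node. (The 250-watt conventional-server figure comes from the story; everything else here is back-of-the-envelope.)

```python
# Sanity-checking the SC5832 numbers. Assumes 1 flop per core
# per cycle and one six-core chip per 15 W node.

cores = 5832
clock_hz = 1.0e9                    # 1 GHz per core
peak_flops = cores * clock_hz       # ~5.8e12, the "6 teraflops"

nodes = cores // 6                  # 972 six-core nodes
cluster_kw = nodes * 15 / 1000      # ~14.6 kW for compute alone
conventional_kw = nodes * 250 / 1000  # same node count as 250 W servers

print(f"Peak: {peak_flops / 1e12:.1f} teraflops")
print(f"Compute power: {cluster_kw:.1f} kW vs {conventional_kw:.0f} kW")
```

The gap between the roughly 15 kW of compute power and the 20 kW the article cites for the whole rig would be consumed by the interconnect, memory, fans and power-supply losses.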

It's not all good news, though. As Lusk points out, 'Now we have to program the beast.' The SC5832 will execute typical DOE lab number crunching for fields such as astrophysics, climate modeling and seismic research. Some of the programs that handle these tasks may have to be rewritten to work in parallel-processing environments, and 'that is a little bit of a challenge for some applications,' Lusk said.

But that was part of the reason the lab purchased the SC5832, Lusk said. Argonne wanted 'to get people working on that target early.' Earlier this month, the lab contracted with IBM for a much larger, 556-teraflop machine, called BlueGene/P, which will have 163,840 processors. The work on the SC5832 will help in understanding how to use BlueGene/P as efficiently as possible.


Industry cuts power use at its own data centers

Can agencies learn anything from private-sector energy-efficiency efforts? Industry vendors such as Cisco, Hewlett-Packard, Sun Microsystems and others are struggling with the same power, cooling and space issues that government agencies have. Several vendors have launched initiatives to consolidate data centers and implement technology that will reduce power consumption, save space and enable data center managers to better utilize processing and storage functions. Here's a snapshot look at how three of them are moving toward energy-efficient data centers.

Cisco: Green in Texas

Cisco Systems' new data center under construction in Richardson, Texas, will be as green as company officials can make it.

Richardson was chosen as a site because it is relatively free from potential natural disasters, said Steve Picot, Cisco's regional manager for Federal Data Center Solutions. The company has two primary data centers now: one in California, in the heart of earthquake country, and the other in Florida, on hurricane alley.

The data center will contain 29,000 square feet of raised-floor space, divided into four halls that support the needs of Cisco IT and the company's Linksys, Scientific Atlanta and Cisco government services groups, Cisco officials said.

The data center, which is nearing completion, will adopt Cisco's Information Technology service-oriented model, in which processing power, storage, and communications can be drawn from one big pool of resources only when needed.

The data center will use a number of Cisco tools for application load balancing and server and storage management that are deployed in its current data centers, Picot said. For example, the company's MDS 9000 series switches have a feature called Virtual Storage Area Network (VSAN) built in. Instead of having separate storage devices working together, VSANs provide a way to group together storage into a logical fabric using the same physical hardware infrastructure.

IT equipment is being moved into the new facility now, and the first applications should start to arrive at the beginning of next year, with successive waves over the next few years, according to Cisco.

Hewlett-Packard: Dynamic cool

HP is in the second year of a three-year project to consolidate 85 data centers into six. Three of the centers are primary, and three are redundant. When the project is completed in June 2008, there will be two each in Atlanta, Houston and Austin, Texas.

HP has designed the six facilities to be lights-out data centers, capable of being managed remotely. This will be enabled through the use of the company's adaptive infrastructure solutions, said Pat Tiernan, HP's vice president of social and environmental responsibility.

HP also is implementing smart cooling technologies that optimize airflow in the data centers.

Dynamic Smart Cooling, developed in HP Labs, consists of advanced software residing in an intelligent control node that continuously adjusts air conditioning settings in a data center, based on real-time air-temperature measurements from a network of sensors deployed on IT racks. The technology actively manages the environment to deliver cooling where it is needed most.

Additionally, the sensors can tell when a server is heating up and then direct cold air at that server.

HP wants to make it easier for users of HP equipment to be green, from the desktop to the data center, by building energy efficiency into products from the start, Tiernan said.

Sun Microsystems: Green is destiny

Sun Microsystems recently opened new data centers in Santa Clara, Calif.; Blackwater, United Kingdom; and Bangalore, India, that were built using innovative designs and next-generation energy-efficient systems, power and cooling.

Officials estimate the company's data center efforts will save nearly 4,100 tons of CO2 per year and trim 1 percent from Sun's total carbon footprint.

The data centers were put into operation between January and June of this year. Santa Clara's is the largest, at 76,000 square feet. Efforts to save energy at that facility began with a three-month hardware consolidation and refresh project that increased computing power by more than 450 percent and is expected to save $1.1 million in energy costs a year, Sun officials said.

'Green is a destiny; energy efficiency is a reality,' said Dean Nelson, director of Sun's Global Lab and Datacenter Design Services. If companies strive to make data centers more energy efficient, they will turn green, he said.

