The edge for data centers

Blade servers save on space, power use and management while boosting utilization

HALF AGAIN: Hewlett-Packard's C7000 models hold 16 half-height blades, adding another dimension to consolidation.


So you want to consolidate your data center, lower its power
bills, reduce cooling needs, boost utilization rates and simplify
server management? And get rid of some of those cables snaking all
over the floor? Those are increasingly common goals among agencies,
and blade servers are an increasingly popular means of reaching them.

A blade server is a single circuit board that contains the
minimum components necessary for computer processing: processors,
memory, network connections, a storage interface and possibly hard
drives. It leaves out extraneous elements such as a monitor or
keyboard. You slide blades into a blade enclosure, a specially
designed cabinet that provides electrical power, cooling and
connections to the blades.

Blade servers are ideal for specific purposes, such as file
sharing, Web hosting, e-mail and print serving, and even heavy-duty
computation such as forecasting. SGI has created several
blade-based supercomputers on its Altix ICE platform, sporting
thousands of processors, terabytes of memory and cable-free
enclosures. However, experts say, blades are not well-suited for
general-purpose computing or very large transaction-processing systems.

Easy management

Manageability and flexibility are also important aspects of
blade servers' appeal. For Luigi Canali, project manager of
the Content Management System (CMS) project at the Bureau of
International Information Programs, the main advantage of blade
servers is their flexibility. CMS handles content management, Web
hosting, search, security, continuity-of-operations, training,
support and Web reporting tools for embassies and the State
Department. As a result, the bureau's CMS hosts more than 100
public Web sites and associated digital products in more than 140
countries.

"It's easier to manage 10 blades in one box than 10
boxes," Canali said.

The comparative ease of managing blade servers is one of the
major paybacks of the technology, said Joe Clabby, president of
consulting firm Clabby Analytics. "Blades are easier to
manage, requiring fewer people in a smaller area."

Most people are familiar with tower servers, where each server
is in a separate box, and rack servers, where each server is in a
separate horizontal slot in a rack. In a standard rack-mount
configuration, one rack unit (1U) is 19 inches wide and 1.75 inches
tall. This is the minimum possible size of any rack-mount
equipment. The most common computer rack is 42U high, which limits
the number of rack servers to 42. Blades are not so restricted: As
many as 84 can fit in the same space.
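
The space math above can be checked directly. A minimal sketch using the figures in the text (42U racks, a 1U minimum per rack server, 84 blades in the same space):

```python
# Rack-density arithmetic from the figures above:
# a standard rack is 42U tall, a rack-mount server needs at least 1U,
# and as many as 84 blades fit in the same space.
RACK_HEIGHT_U = 42
MIN_SERVER_U = 1

rack_servers = RACK_HEIGHT_U // MIN_SERVER_U   # at most 42 rack servers
blades = 84                                    # figure cited in the text
blades_per_u = blades / RACK_HEIGHT_U          # density: 2.0 blades per U

print(rack_servers, blades_per_u)
```

In other words, blades double the density ceiling of 1U rack servers in the same cabinet.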

In a tower or rack arrangement, each server has its own power
supply, storage, input/output and other components. Because blades
forgo certain elements of tower servers and rack servers, including
monitors and keyboards, and share others, such as power supplies
and fans, they are cheaper. In addition, server blades are smaller
and more cost-efficient than traditional tower or rack servers, and
they consume less power.

What's more, blade servers have fewer points of failure
than towers or rack systems.

Putting it together

Despite their relative simplicity, there's a lot to
consider in configuring blade servers.

For starters, many different processor types are available on
blades, including those from Intel and Advanced Micro Devices with
x86 architecture, Sun Microsystems' UltraSparc, Intel
Itanium, IBM Power and the graphics-oriented Sony-Toshiba-IBM Cell
Broadband Engine chip. Some blade enclosures also offer slots for
multiple processors. And newer blades can take advantage of
multi-core chips, such as the quad cores available from Intel and
AMD. The Sun Blade X8420, which costs $10,615, can handle four
quad-core processors, for instance.

Other examples of the variety offered by blade vendors include
Fujitsu's Primergy BX600 line, starting at about $1,948, with
quad-core x86 chips from Intel or AMD. Sun's Blade Modular
System supports UltraSparc, AMD Opteron and Intel Xeon processors.
HP offers AMD and Intel x86 processors in addition to Intel
Itanium. And IBM has AMD, Power and Cell Broadband Engine blades.

Memory is another critical consideration.

Some high-end blades, including the Sun and HP lines, offer as
much as 64G of memory, equal to larger stand-alone servers and
enough to satisfy the most insatiable chip. Having many memory
slots on a blade improves flexibility.

Some Sun blades allow 16 DIMMs. If you need only 8G of memory on
that blade, you can buy comparatively inexpensive 1G DIMMs. But you
also have the option to add higher-priced 4G DIMMs to max out your memory.
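
That sizing trade-off is simple arithmetic. A sketch using the 16-slot count and the 1G and 4G DIMM sizes from the text (the helper function itself is purely illustrative):

```python
# Memory sizing for a blade with 16 DIMM slots, as on some Sun blades.
SLOTS = 16

def capacity_gb(dimm_size_gb, dimms_used):
    # Total memory from populating dimms_used slots with equal-size DIMMs.
    if dimms_used > SLOTS:
        raise ValueError("more DIMMs than slots")
    return dimm_size_gb * dimms_used

print(capacity_gb(1, 8))    # 8G from inexpensive 1G DIMMs, half the slots free
print(capacity_gb(4, 16))   # 64G with pricier 4G DIMMs in every slot
```

Leaving slots unpopulated with small DIMMs keeps costs down now while preserving an upgrade path later.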

Next, don't overlook connectivity. Blade connections to
storage or networks can be ordinary wire, often called copper for
clarity, or fiber-optic cable. The input/output architecture is
perhaps of greater importance. "Blade vendors typically use a
proprietary [input/output] architecture," said Mike McNerney,
director of the blade server product line at Sun, which uses
industry-standard PCI SIG modules, such as PCI Express.

Blades can also include hard drives, which allow the blade to
boot independently of outside storage and store data locally. The
downside is that local hard drives take up space, so blades with
drives can't be packed as closely together, and they require
more power and cooling. Blades without hard drives can boot from
outside storage, such as storage-area networks.

If you're considering blades, pay attention to the
enclosures, which are a lot more than metal boxes. They are
essential to the operation of the blades, and the design of the
enclosure is at least as complex as the design of the blades.

For example, the enclosure must provide power, usually
redundant, and the complex connections to interface with
networks, storage and other blade enclosures.

Enclosures also remove heat, most of which originates with the
processors. Usually, high-speed fans cool the blades by moving warm
air out of the enclosure. For example, Hewlett-Packard's
enclosures can include 10 Active Cool fans. SGI goes a step
further, with water-cooled doors in its Altix ICE systems.

Hidden benefits

Blade systems offer other management advantages. Installing and
maintaining them is simpler than dealing with tower or rack-mount
systems. The enclosures often intelligently tend the needs of their
blades, so there is less monitoring for managers to do.

Management software allows remote, hands-off administration of
large farms of servers.

Also, virtualizing blades (running multiple logical
threads on one physical blade) allows them to use processing
power and memory efficiently. Multiple applications can run on a
single blade, yielding the utilization rates that warm information
technology managers' hearts. "Managers are used to
tower servers with 20 percent utilization," Clabby said.
"With blade virtualization, they're seeing 75 to 85
percent utilization."
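
Those utilization figures imply a consolidation ratio. A rough sketch, assuming the 20 percent and 80 percent utilization levels from Clabby's quote (the helper function is hypothetical):

```python
def consolidation_ratio(old_util_pct, new_util_pct):
    # If each server's useful work scales with its utilization,
    # one well-virtualized blade replaces this many lightly used towers.
    return new_util_pct / old_util_pct

print(consolidation_ratio(20, 80))   # -> 4.0: four 20%-busy towers per blade
```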

Energy efficiency is also becoming more important to government
agencies. "Blade servers can save one-third of the power of a
standalone server," said Mitch Barcellos, BCS server
specialist at Hewlett-Packard. For example, Dell's new
PowerEdge M-Series blades, starting at $1,849, are designed to
require lower power and cooling levels. SGI said its Altix ICE
energy-smart power architecture realizes more than 90 percent
efficiency on its power supply and as high as 87 percent efficiency
on the blades.
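
The one-third savings figure quoted above can be turned into a rough farm-level estimate. This is a sketch under stated assumptions: the 500-watt standalone draw and 100-server count are hypothetical placeholders, not figures from the article:

```python
def blade_watts(standalone_watts, savings_fraction=1 / 3):
    # Per the HP figure quoted above, a blade draws one-third less
    # power than a comparable standalone server.
    return standalone_watts * (1 - savings_fraction)

standalone_watts = 500    # hypothetical draw per standalone server
server_count = 100        # hypothetical farm size
saved = server_count * (standalone_watts - blade_watts(standalone_watts))
print(round(saved), "watts saved across the farm")
```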

Some vendors, including Hewlett-Packard and Dell, push
consolidation to another level by offering half-height blades.

HP's 10U C7000 enclosure holds 16 half-height blades, and
its 6U C3000, called Shorty, accommodates eight half-height
blades. Dell's M1000e, starting at $5,999, also holds 16
half-height blades in a 10U format.

Blade technology is new enough that there are no independent
standards for blade enclosures and design yet. This means it often
isn't possible to use one vendor's blades in another
vendor's enclosure. For now, it's probably safer to
obtain both from the same manufacturer.

DeJesus ([email protected]) is a freelance technology writer.

More information about blade servers:

Blade Systems Alliance

Clabby Analytics

Sun Microsystems

Wikibon Project

Checklist: Blade servers

1. What applications will you move to blade servers?

2. Do the applications require 32-bit or 64-bit chips? Do they require x86, UltraSparc, Itanium, IBM Power, Cell or other processors?

3. How much memory do the applications require?

4. How much storage do you require? Is local storage necessary, either to boot the server or to store data locally?

5. What network connections do the blades require?

6. Can these applications run multithreaded for virtualization? If so, ask about virtualization features of hardware and software.

7. Is it important to access other servers within the same enclosure? If so, input/output connections in the enclosure will be important.

8. What power constraints does your agency have? Ask vendors about power and cooling needs of any configuration.

9. How much server consolidation are you aiming for? The more blades an enclosure can hold, the more space you can save.

10. Do you need to consolidate multiple platforms? If so, look for enclosures that can accept blades with a variety of processors.

11. Is disaster recovery important? This will involve multiple, geographically separated servers and possibly remote mass storage.

12. Is integration with existing systems necessary? If so, try to select blade servers compatible with existing systems. This can mean blade servers from the same vendors or blade servers operating under an open architecture that can include the existing systems.

