A switch in time

Access to all your data can depend on having the right storage switch, and that depends on what kind of storage network you have

Storage switch checklist

Does the switch have a fixed number of ports, or is it expandable? How long will it continue to meet our organization's projected storage needs?

Does the switch use Fibre Channel or Internet SCSI? Does it support Fibre Channel over IP or Fibre Channel over Ethernet?

Is the switch's speed fully compatible with our storage servers? Will we have to upgrade any host bus adapters or network interface cards?

What is the vendor's upgrade path?

Does the switch include additional features, such as security, management or translation tools?

Does the switch need to interact with any other switches in the same storage area network? If so, is it compatible with those switches?

Do we have the expertise in-house to manage this switch, or will we need to hire new employees or train our current ones?

Data storage resources

Association of Storage Networking Professionals
www.asnp.org

Brocade Communications Systems
www.brocade.com

Cisco Systems
http://www.cisco.com/en/US/products/hw/ps4159/index.html

Emulex
www.emulex.com

Fibre Channel Industry Association
http://www.fibrechannel.org

QLogic
http://www.qlogic.com

"Resilient Storage Networks" by Greg Schulz, Digital Press (Elsevier)

Storage Networking Industry Association
http://www.snia.org

The growing demand for data storage is no secret; most enterprises are reporting double-digit annual growth. Fortunately, disk storage capacity is keeping pace, and prices keep dropping: about $2 per gigabyte for primary storage and less than $1 per gigabyte for secondary storage.

But storing data is only half the game. Capacity isn't worth much if users can't access data in a timely manner. And because storage is input/output intensive, you need a storage switch that not only manages current traffic but can also scale and adapt to future needs.

"Our biggest challenge was making sure we didn't overtax the switches," said Elbert Shaw, a project manager at Science Applications International Corp. He oversaw the establishment of four data centers in Europe for the Defense Department and now runs the Data Operations Center at NASA's Marshall Space Flight Center in Huntsville, Ala. "We made the decision early to buy a better switch so we wouldn't be confronted with problems of growth."

Choosing the protocol

The first thing to consider in selecting a switch is which of the two primary protocols to use: Fibre Channel (FC) or the Ethernet-based Internet SCSI (iSCSI).

"Ethernet traditionally focused on lower cost, mass adoption, interoperability and economies of scale, with performance focused on bandwidth, while FC focused on low-latency, deterministic or predictable performance for storage-specific applications," said Greg Schulz, founder of the StorageIO Group and author of the book "Resilient Storage Networks."

Storage-area network products generally support switches that use either protocol.

Disk drives use the regular SCSI protocol for reading, writing and other instructions, but SCSI doesn't work well over Ethernet: if a packet is delayed, the command is simply sent again.

"If you get a couple of nodes bursting out lots and lots of retries because there is no flow control, suddenly the whole thing comes to a halt," said Robert Passmore, a research vice president at Gartner.
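
Passmore's point is easy to see in a toy simulation, sketched below in Python. The numbers are invented for illustration, and real SCSI and Ethernet behavior is far more complex: a link completes a fixed number of commands per tick, and because there is no flow control, any command still waiting at the end of a tick is retransmitted while the original copy is still queued. A single burst of traffic then snowballs instead of draining.

    # Toy model of a retry storm on a link with no flow control.
    # Invented numbers, for illustration only.
    CAPACITY = 100      # commands the link can complete per tick
    NEW_PER_TICK = 80   # steady new load, comfortably under capacity
    BURST = 150         # one-time traffic spike at tick 3

    backlog = 0
    for tick in range(1, 11):
        offered = NEW_PER_TICK + backlog + (BURST if tick == 3 else 0)
        completed = min(offered, CAPACITY)   # link drains what it can
        waiting = offered - completed        # the rest times out...
        backlog = waiting * 2                # ...and is resent while the
                                             # original is still queued
        print(f"tick {tick:2d}: offered={offered:5d} "
              f"completed={completed} backlog={backlog}")

By tick 5 the offered load is several times the link's capacity and still climbing, which is the "whole thing comes to a halt" behavior Passmore describes. FC avoids this with buffer-to-buffer credits, a hardware flow-control scheme.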

FC was designed to address that issue, and it has a much lower overhead than TCP/IP for input/output-intensive applications. The first FC switches hit the market in 1997, and they continue to dominate the higher-end SAN market. When Brocade Communications Systems acquired McData early last year, the FC switch market was reduced to three vendors: Brocade, with nearly 70 percent of the market; Cisco Systems, with about 30 percent; and QLogic, with 1 percent to 2 percent.

The iSCSI protocol, which the Internet Engineering Task Force ratified in 2003, uses TCP/IP to run SCSI commands on top of Ethernet, a strategy that has become popular with smaller SANs. It has an initial cost advantage over FC because it relies on standard network interface cards, which cost about $200, rather than the more expensive host bus adapters (HBAs) that FC requires. Furthermore, iSCSI uses standard Ethernet network switches. The downside is the additional network overhead TCP/IP imposes.
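
At bottom, a SCSI command is just a handful of bytes, which is what makes carrying it over TCP/IP workable. The Python sketch below builds a standard SCSI READ(10) command descriptor block, the structure an initiator would issue; actual iSCSI wraps such blocks in its own protocol data units (defined in RFC 3720), with session, ordering and recovery machinery that this illustration omits.

    import struct

    # A SCSI READ(10) command descriptor block (CDB): 10 bytes that tell
    # a disk "read N blocks starting at logical block address L". iSCSI's
    # job is to carry bytes like these inside TCP/IP instead of over a
    # dedicated SCSI or FC link. (Real iSCSI adds PDU framing per RFC
    # 3720, which this sketch omits.)
    def read10_cdb(lba: int, num_blocks: int) -> bytes:
        return struct.pack(
            ">BBIBHB",
            0x28,         # operation code: READ(10)
            0,            # flags (DPO/FUA and protection bits): none set
            lba,          # 32-bit logical block address, big-endian
            0,            # group number
            num_blocks,   # 16-bit transfer length, in blocks
            0,            # control byte
        )

    cdb = read10_cdb(lba=2048, num_blocks=8)
    print(len(cdb), cdb.hex())   # -> 10 28000000080000000800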

"If you value connectivity costs and your SAN requirements are very modest in terms of the number of servers involved and the performance requirements of those servers, then you would consider iSCSI and buy Ethernet switches," Passmore said. "If your SAN is more serious (more servers, servers with very high performance requirements), then Fibre Channel is the only game in town and you should buy Fibre Channel switches."
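
Passmore's rule of thumb reduces to a simple decision sketch; the server-count threshold below is an invented placeholder, not a figure from Passmore or the vendors.

    # Sketch of Passmore's guidance. The 50-server cutoff is an
    # illustrative guess, not an industry figure.
    def pick_protocol(num_servers: int, high_performance: bool) -> str:
        if high_performance or num_servers > 50:
            return "Fibre Channel"   # the "more serious" SAN
        return "iSCSI"               # modest SAN, standard Ethernet gear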

Director or basic fabric?

Once you've picked your protocol, you'll need to settle on the class of switch you want to employ: director or basic fabric.

The primary difference is the number of ports: director-class switches have 128 to 512, while the smaller, less expensive basic fabric switches offer fewer. In addition, director-class switches use an interchangeable blade architecture and have no single point of failure.

The class you select will depend on the scale of your application: director-class switches work well in major data centers, while basic fabric switches are more appropriate for small branch offices.

Shaw used Cisco's MDS 9509 director-class switches for DOD's main data centers in Europe. The 14U rackmount chassis can hold up to nine blades and 336 FC ports. The smaller data centers received the 7U MDS 9506 switches, which scale to 192 ports. All the switches use 2 gigabits/sec FC. Shaw said he chose the 9500 series switches because they allowed the SANs to scale without replacing the switches.

The Census Bureau's data center in Suitland, Md., also selected director-class switches for its Xiotech Magnitude 3D 3000 and 3100 SANs. The 3000, which went online two years ago, consists of a primary 64-terabyte SAN with a Brocade 48000 switch (a 14U chassis that holds up to eight FC blades with 16, 32 or 48 ports per blade) and a mirrored 64-terabyte Magnitude 3D 3000 SAN with a Brocade 24000 switch. The bureau is in the process of installing a new 50-terabyte Xiotech 3100 SAN that will also use a Brocade 48000 switch.

Roy Ashley, acting chief of the bureau's Systems Architecture Branch, said the scalable architecture has come in handy. He and his colleagues have added a Brocade FR4-18i routing blade to one of the chassis and recently bought an additional 32-port switch card to ensure that switching never has a negative impact on operations.

"Our fabric has never been a point of bottleneck at all," Ashley said. "In fact, when we look at the metrics, most of our operations are not even a blip on the [input/output] radar."
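
The scalability math behind those purchases is easy to check. Using the blade counts and per-blade port options cited above for the Brocade 48000, a quick calculation shows the range a single chassis can cover, which is why adding blades, rather than replacing switches, was enough headroom for both agencies.

    # Maximum port counts for a Brocade 48000 chassis, using the figures
    # cited above: up to 8 FC blades, at 16, 32 or 48 ports per blade.
    BLADE_SLOTS = 8
    for ports_per_blade in (16, 32, 48):
        print(f"8 x {ports_per_blade}-port blades = "
              f"{BLADE_SLOTS * ports_per_blade} ports")
    # Prints 128, 256 and 384 ports -- squarely in the 128-to-512-port
    # director-class range described earlier.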

Speed limits

When designing your storage switch system, it's also important to consider your speed requirements. FC is commonly available in 2, 4 and 8 gigabits/sec speeds, and Ethernet switches reach 10 gigabits/sec. The two technologies also price differently: with FC, the price per port generally stays the same or even comes down as the speed increases.

"Each new generation of HBA, because it is made with a new chip at a new chip size, usually results in a higher-performance adapter costing less than the old one at the slower speed," Passmore said.

For example, Census officials bought a 2 gigabits/sec Brocade 48000 switch two years ago, but the one they are buying now runs at 4 gigabits/sec because the 48000 now comes only with 4 and 8 gigabits/sec blades.
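
The arithmetic behind that trend is simple to verify with hypothetical numbers; the per-port prices below are invented for illustration, not vendor quotes. Even when the price per port barely moves between generations, the cost per gigabit of port bandwidth falls sharply.

    # Invented per-port prices, for illustration only.
    generations = [(2, 1000), (4, 950), (8, 900)]  # (gigabits/sec, $/port)
    for speed, price in generations:
        print(f"{speed} gigabits/sec: ${price}/port = "
              f"${price / speed:.0f} per gigabit/sec")
    # Cost per gigabit/sec drops from $500 to $238 to $112 even though
    # the per-port price changes little.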

Ethernet switches come in 1 gigabit/sec and 10 gigabits/sec versions, but the faster technology is new and considerably more expensive, so it is generally advisable to wait for prices to come down before buying at the higher speed.

For the most part, assessments about speed should take future needs into account so that performance holds up as your system grows.

"Most servers don't know what to do with 1 gigabit/sec, much less 2, 4, 8 or 10, so for the vast majority of servers, this is not an issue," Passmore said.

A storage switch is not a stand-alone item; it must integrate with the rest of the storage, network and data-center architecture. Storage vendors certify and sell switches that work with their products, and agencies and departments would be wise to follow their lead. After all, a high-end storage array has more than 10 million lines of code, and the switch must interact perfectly with the array-control software.

"New architectures require you to significantly change your code base, which creates reliability problems for a while," Passmore said. "When [vendors have] a product that is stable, well-featured and doing the job, they are very reluctant to upset that apple cart."
