IP extends the storage area network

SANblade QLA4010 is a multimode host bus adapter that integrates two RISC processors with its iSCSI controller.

Alacritech's 1000x1 Single-Port Server and Storage Accelerator is a high-speed, copper or fiber TCP offload engine, priced at $999 for a copper port and $1,399 for a fiber port.

The foundation for storage area networks could be shifting toward the Internet.

At the moment, Fibre Channel SANs rule. They are dedicated, centrally managed and secure systems used to connect servers to storage devices across the enterprise.

SAN architectures are built for redundancy: if one server fails, failover application servers can still access the data. SANs provide round-the-clock data backup and can allocate storage capacity where it is needed across the network.

Fibre Channel SANs enable the transmission of blocks of data at gigabit speeds. Their centralized data management tools reduce the burden on servers running high-demand storage applications by spreading the data load to other servers on the network. And because they are highly scalable, they can be quickly and easily configured to meet rocketing storage demands.

But while Fibre Channel is a high-performance transmission technology, it has some drawbacks for SAN use. For one thing, it's costly to install and maintain, a fact compounded by the difficulty of finding personnel with specialized Fibre Channel training, according to an Intel Corp. white paper.

Fibre Channel technology also has distance limits. The same Intel paper said that although the theoretical limit for Fibre Channel is 10 kilometers, individual multimode fiber links used in Fibre Channel SANs could have a practical limit of 250 to 500 meters.

That could put a crimp in emergency procedures: As part of their disaster planning, many large organizations have placed their SANs far from their LANs to provide geographical redundancy in case of a disaster. In that case, even 10 kilometers could be inadequate, according to Intel.

Finally, Fibre Channel devices have a spotty track record on interoperability between one vendor's products and another's.

Now there's a new storage game in town. A handful of SAN vendors have unveiled IP storage networks that use the draft Internet SCSI (iSCSI) protocol to encapsulate SCSI commands in TCP/IP packets and transport block data over IP networks.
SCSI lets host computers perform block data I/O operations to peripheral devices using direct cabling over short distances. The iSCSI protocol specifies transporting SCSI data over TCP/IP networks, thus expanding the distance limitations of SCSI data transfers from a few meters to thousands of miles.
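
To make the encapsulation idea concrete, here is a minimal Python sketch that wraps a SCSI READ(10) command descriptor block in a TCP payload. The length-prefixed framing, the host name and the use of port 3260 as a placeholder target are illustrative assumptions; this is not the protocol data unit format defined in the iSCSI draft, only the general pattern of carrying SCSI commands inside TCP/IP.

# Toy illustration only: a SCSI command block carried in a TCP/IP payload.
# The length-prefixed framing below is a simplified stand-in, not the
# 48-byte basic header segment defined in the iSCSI draft specification.
import socket
import struct

def build_read10_cdb(lba: int, blocks: int) -> bytes:
    """Build a 10-byte SCSI READ(10) command descriptor block."""
    # Opcode 0x28 = READ(10); then flags, 4-byte logical block address,
    # group number, 2-byte transfer length and a control byte.
    return struct.pack(">BBIBHB", 0x28, 0, lba, 0, blocks, 0)

def send_scsi_over_tcp(host: str, port: int, cdb: bytes) -> None:
    """Wrap the CDB in a simple length-prefixed frame and ship it over TCP."""
    frame = struct.pack(">I", len(cdb)) + cdb   # hypothetical framing
    with socket.create_connection((host, port)) as sock:
        sock.sendall(frame)

if __name__ == "__main__":
    # "target.example.com" is a placeholder; 3260 is the registered iSCSI
    # port, but nothing here expects a conformant iSCSI target to answer.
    send_scsi_over_tcp("target.example.com", 3260, build_read10_cdb(lba=0, blocks=8))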

Servers and storage devices that support iSCSI are directly connected to an existing IP switch or router to set up the IP SAN.

The marriage of SCSI and TCP/IP results in an IP SAN that is easier to design, integrate and manage than traditional SANs and that has the potential to reduce costs while solving interoperability and distance problems.

The accompanying chart lists most of the iSCSI products on the market to date. But industry analysts expect this year to be a turning point for iSCSI, so expect the list to grow over the next several months.

Mostly bus adapters

So far, the most common products in this category are iSCSI host bus adapters. They reside in servers, routers, switches and storage devices and are the main components of IP SANs.

Also listed in the chart are a handful of iSCSI switches, routers, servers, concentrators and actual iSCSI storage arrays.

Despite its advantages, iSCSI faces a few hurdles.

Only a few iSCSI storage arrays are available so far, while problems with routing, pricing and performance are being worked out.

In addition, the general adoption rate of SANs of any type is still slow. Among large and very large organizations, the adoption rate of Fibre Channel SANs is around 70 percent. But across small and midsize organizations, the deployment rate of SANs sinks to around 25 percent, according to International Data Corp., a research firm in Framingham, Mass.

And even with the rapid development of specialized host bus adapters that can offload TCP/IP and iSCSI storage workloads from host CPUs, there is still concern about the performance of IP SANs.

Finally, some questions about the ability of IP SANs to handle IP security issues remain unanswered.

These negatives aside, market research firm Gartner Inc. of Stamford, Conn., predicts the market for iSCSI host bus adapters will increase from $590 million last year to $1.22 billion in 2005. IDC predicts the total iSCSI market will reach more than double that figure, $2.48 billion, in 2005.

Gartner predicts that by 2006, iSCSI will emerge to connect nearly 1.5 million servers to SANs, more than any competing technology.

Check out the Storage Networking Industry Association Web site, at www.snia.org, for a regularly updated list of iSCSI hardware manufacturers.

The iSCSI specification presents a set of challenges for would-be IP SAN users because setting up an IP SAN is more than a matter of plugging in an iSCSI adapter and letting it pump blocks of data from your server to your switches. An IP SAN can run on standard Gigabit Ethernet switches, but processing the iSCSI and TCP/IP protocols creates significant overhead, which can quickly overwhelm your server's CPU. Accordingly, there are at least three approaches to handling iSCSI traffic on the server.

A standard network interface card with an iSCSI driver. In this approach, simply installing an iSCSI driver on your server turns a standard network interface card into a storage access device. Connect to the storage array's management software, adjust for the proper volume of data flow and let your operating system's volume management tools do the rest.

A TCP offload engine NIC with an iSCSI driver. Most TOE NICs remove the TCP/IP processing overhead by handling it themselves; only iSCSI processing remains on the host. Trebia Networks Inc.'s SNP-500 TOE NIC can function purely as a TOE or simultaneously as a NIC, completely offloading iSCSI and TCP/IP protocol processing from the host CPU.

Host bus adapters. Most iSCSI host bus adapters are designed by Fibre Channel vendors specifically to handle iSCSI traffic. A key point to remember here is that, unlike with Fibre Channel HBAs, iSCSI pricing is directly related to performance.

Thus, low-end performance using a standard Ethernet NIC with an iSCSI driver costs essentially nothing, and prices rise accordingly for high-end TOE and HBA accelerators capable of handling both TCP and iSCSI traffic.
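
The cost curve tracks where the protocol work gets done. As a rough, self-contained Python sketch (block size, block count and port number are arbitrary demo values, and no real iSCSI stack is involved), the following pushes storage-sized blocks through a loopback TCP connection and reports how much process CPU time the transfer consumes. With a software-only initiator on a standard NIC, that work lands on the server's CPU, which is precisely the load a TOE or iSCSI HBA is built to absorb.

# Rough demonstration of why software-only iSCSI taxes the host CPU:
# push storage-sized blocks through a loopback TCP connection and
# measure the process CPU time the transfer consumes. Block size,
# block count and port number are arbitrary demo values.
import socket
import threading
import time

BLOCK = b"\0" * 65536      # one 64 KB "storage block"
COUNT = 4096               # about 256 MB in total
PORT = 5001                # arbitrary local port

def sink(server: socket.socket) -> None:
    """Accept one connection and drain everything sent to it."""
    conn, _ = server.accept()
    with conn:
        while conn.recv(1 << 20):
            pass

def main() -> None:
    server = socket.create_server(("127.0.0.1", PORT))
    threading.Thread(target=sink, args=(server,), daemon=True).start()

    start = time.process_time()
    with socket.create_connection(("127.0.0.1", PORT)) as sock:
        for _ in range(COUNT):
            sock.sendall(BLOCK)   # every block is copied through the host's TCP/IP stack
    cpu_seconds = time.process_time() - start

    megabytes = COUNT * len(BLOCK) / 2**20
    print(f"Moved {megabytes:.0f} MB using {cpu_seconds:.2f} s of host CPU time")

if __name__ == "__main__":
    main()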

J.B. Miles of Pahoa, Hawaii, writes about communications and computers. E-mail him at jbmiles@hawaii.rr.com.
