InfiniBand will boost bus speed on servers

BY PATRICIA DAUKANTAS | GCN STAFF

A new, switched-fabric architecture could soon start replacing the standard PCI bus in server clusters and data centers.

Proponents of the InfiniBand 1.0 specification, released last October by an industry consortium, say it could raise bandwidth, ease bottlenecks and let administrators build server clusters that span separate buildings.

Users hungry for 2.5 Gbps of input/output bandwidth, though, will have to wait until InfiniBand systems start appearing early next year.

The two-year-old InfiniBand Trade Association of Portland, Ore., consists of 215 companies, including technology heavyweights Intel Corp. and IBM Corp.

InfiniBand, a contraction of infinite bandwidth, sprang up in 1999 from the merger of two competing initiatives to create a new server I/O architecture, said David Heisey, Compaq Computer Corp.'s manager of advanced technology initiatives. Compaq is one of seven members on the trade association's steering committee.

The PCI bus, which has been around since the early 1990s, runs at 66 MHz in Version 2.1. A later extension, the PCI-X bus, runs at 133 MHz. On the shared bus, devices compete against each other for bandwidth, often creating bottlenecks.

Such bottlenecks pressure administrators to put more and more servers in their data centers, said Alisa Nessler, chief executive officer of Lane15 Software Inc. of Austin, Texas. The InfiniBand fabric would guarantee bandwidth for each connected device.

'I think it's really going to revolutionize computing, but do it in a way that's evolutionary,' Nessler said.

The InfiniBand fabric is woven of three components. Host channel adapters inside servers connect them to the network. Target channel adapters attach storage subsystems, workstations or other servers to the fabric. InfiniBand switches with a signaling rate of 2.5 Gbps link all the devices serially via either copper or optical cables.
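The relationships among those three components can be sketched in a few lines of illustrative Python. The class and attribute names below are hypothetical and model only the roles the article describes; they do not correspond to any real InfiniBand software interface.

```python
# Hypothetical, simplified model of an InfiniBand fabric.
# Names are illustrative only and do not reflect a real InfiniBand API.

from dataclasses import dataclass, field

BASE_LINK_GBPS = 2.5  # per-channel signaling rate cited in the article


@dataclass
class HostChannelAdapter:
    """Sits inside a server and connects it to the fabric."""
    server: str


@dataclass
class TargetChannelAdapter:
    """Attaches a storage subsystem, workstation or another server."""
    device: str


@dataclass
class Switch:
    """Links adapters serially over copper or optical cables."""
    ports: list = field(default_factory=list)

    def connect(self, adapter):
        self.ports.append(adapter)


# A server and a storage array joined through one switch.
switch = Switch()
switch.connect(HostChannelAdapter(server="app-server-1"))
switch.connect(TargetChannelAdapter(device="storage-array-1"))
print(f"{len(switch.ports)} devices on the fabric at {BASE_LINK_GBPS} Gbps per link")
```

Unlike a shared PCI bus, each adapter in this model gets its own switched link, which is how the fabric can guarantee bandwidth per device.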

On the horizon

Some companies already have announced products that will plug into PCI slots and provide InfiniBand connections to servers.

'I'm not saying that PCI and PCI-X are going away in the next year,' Heisey said. Because of InfiniBand's backward compatibility with the PCI bus, both technologies will probably coexist for a long time, he said.

InfiniBand, however, can combine individual links to get considerably more than 2.5 Gbps in bandwidth, said Mark Richman, manager of InfiniBand strategic marketing at Agere Inc. of Allentown, Pa. A four-channel-wide InfiniBand link would provide 10 Gbps, and a 12-channel-wide link, 30 Gbps. Agere was formerly the microelectronics group of Lucent Technologies Inc. of Murray Hill, N.J.
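The arithmetic behind those figures is simply the per-channel signaling rate multiplied by the link width. A minimal sketch, assuming the 2.5 Gbps per-channel rate cited above:

```python
# Aggregate signaling rate for the link widths mentioned in the article,
# assuming 2.5 Gbps per channel.

BASE_GBPS = 2.5

for width in (1, 4, 12):
    print(f"{width:>2}-wide link: {width * BASE_GBPS:g} Gbps")

# Output:
#  1-wide link: 2.5 Gbps
#  4-wide link: 10 Gbps
# 12-wide link: 30 Gbps
```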

Early adopters probably will be content with the one- and four-wide links, Heisey said. 'It's going to be a challenge for systems to take advantage of [12 channels and 30 Gbps] initially,' he said.

Although it's early to talk about InfiniBand system costs because most compatible products are still in development, Heisey said he doubts the technology will be high-priced.

'It'll be easier to put a lot of stuff together and make it work,' Nessler said. Servers will run the same applications but run them better, she said.

High-performance computing could also benefit from InfiniBand. Its clustering support could make large clusters easier to build, Heisey said. The potentially huge throughput, especially for 12-wide links, would make large amounts of mass storage readily available for computationally intensive tasks, Richman said.
