Expanding Ethernet

When IBM Corp. designed Lawrence Livermore National Laboratory's BlueGene/L supercomputer system, it packed two processors into each of the 65,536 nodes. While one central processing unit does the actual application computing, the second processor is dedicated almost purely to communications, explained IBM engineer Thomas Gooding during a presentation at November's SC06 supercomputing conference. Although BG/L handles a particularly large amount of network traffic, the design illustrates a growing concern among data center managers: network traffic is consuming an ever-larger share of their servers' processor resources.


Such resource-hogging isn't just a supercomputing problem, noted Rick Maule, CEO of NetEffect Inc. of Austin, Texas. A Gigabit Ethernet card may draw about 8 percent of host processor cycles to handle duties such as segmenting data into outgoing packets and reassembling data from incoming packets.


But a 10-Gigabit Ethernet card doing those same tasks can consume virtually all of the host processor's cycles, simply because there are many more packets to process. Obviously, this approach won't scale. 'You can't have network overhead taking 100 percent of the CPU, because the application must run on something,' Maule said.
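A back-of-envelope sketch shows why the math breaks down. The 8 percent figure for Gigabit Ethernet comes from Maule's estimate above; the assumption that per-packet overhead scales roughly linearly with line rate is ours, for illustration only.

```python
# Back-of-envelope sketch: if protocol processing costs a roughly fixed
# number of CPU cycles per packet, overhead grows about linearly with
# line rate. The 8 percent baseline is from the article; linear scaling
# is an illustrative assumption, not a measurement.

GIGE_OVERHEAD = 0.08  # fraction of host CPU consumed at 1 Gbit/s


def projected_overhead(line_rate_gbps: float) -> float:
    """Project CPU overhead at a given line rate, capped at 100 percent."""
    return min(1.0, GIGE_OVERHEAD * line_rate_gbps)


print(projected_overhead(1))   # 8 percent at Gigabit speeds
print(projected_overhead(10))  # the naive projection already nears saturation
```

Even this simplistic model puts a 10-Gigabit card near 80 percent of the host CPU, leaving little for the application itself.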


To address this concern, a consortium of networking-equipment vendors, including Hewlett-Packard Co., IBM and Intel Corp., developed an extension to the Ethernet protocol, called iWarp, designed specifically to remove the load from the CPU. At the SC06 show, NetEffect demonstrated the first adapter with a full implementation of iWarp, the NE010. The NE010 virtually eliminates network load on the CPU, according to Maule. It also reduces latency (the time it takes a card to respond to a request from the network) to under 10 microseconds, about a quarter of the best Ethernet response time today.


iWarp moves the load off the CPU in a variety of ways. An additional chip on the network card itself handles some of the tasks, and the iWarp protocol allows the user application to issue commands directly to the adapter, bypassing the operating system's network stack. Another technique, called Remote Direct Memory Access, embeds information within each packet that allows the adapter to place the data directly into the memory used by the application, thereby eliminating an extra copy through the operating system's own buffers.
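The data-path difference can be sketched with a toy model. This is our own illustration of the two receive paths, not NetEffect's implementation: the conventional socket path stages each packet in a kernel buffer before the CPU copies it into the application's buffer, while RDMA headers tell the adapter the final destination so the payload lands in pre-registered application memory with no kernel-side copy.

```python
# Toy model of the two receive paths described above. Function names and
# copy counts are illustrative assumptions, not measurements of any NIC.

def kernel_socket_path(packet: bytes) -> tuple[bytes, int]:
    """Conventional path: NIC -> kernel socket buffer -> application buffer."""
    kernel_buffer = bytes(packet)      # NIC DMAs into a kernel socket buffer
    app_buffer = bytes(kernel_buffer)  # CPU copies it out to the application
    return app_buffer, 1               # one CPU copy through OS memory


def rdma_path(packet: bytes) -> tuple[bytes, int]:
    """RDMA path: per-packet placement info lets the adapter DMA the
    payload straight into registered application memory."""
    app_buffer = bytes(packet)         # adapter writes directly to app memory
    return app_buffer, 0               # no CPU copy through OS memory


data, copies = kernel_socket_path(b"payload")
print(copies)  # 1: the kernel-mediated path costs a CPU copy per packet
data, copies = rdma_path(b"payload")
print(copies)  # 0: the adapter does the placement work instead of the CPU
```

Multiplied across millions of packets per second at 10-Gigabit rates, removing that per-packet copy (and the interrupt and context-switch work that goes with it) is where the CPU savings come from.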

About the Author

Joab Jackson is the senior technology editor for Government Computer News.
