InfiniBand goes the distance
Researchers at the Energy Department's Oak Ridge National Laboratory have shown that InfiniBand can be used to transport large datasets via a dedicated network thousands of miles in length with a throughput unmatched by high-speed TCP/IP connections.
In a test setup, researchers were able to achieve an average throughput of 7.34 gigabits/sec between two machines at each end of the 8,600-mile optical link. In contrast, the throughput of such traffic using a tweaked high-throughput version of TCP, called Hamilton TCP (HTCP), was 1.79 gigabits/sec at best.
Oak Ridge researcher Nageswara Rao presented a paper on the group's work, "Wide-Area Performance Profiling of 10GigE and InfiniBand Technologies," at the SC08 conference last month.
Increasingly, DOE labs are finding they need to move large files over long distances. In the next few months, for instance, Europe's Large Hadron Collider will start operation, generating petabytes of data that will cross the Atlantic Ocean to DOE labs and U.S. academic institutions.
Rao said difficulties abound with large data transfers over high-speed wide-area networks, including packet conversion from storage networks and the complex task of TCP/IP tuning. "The task of sustaining end-to-end throughput…over thousands of miles still remains complex," the researchers wrote in the paper.
Although InfiniBand interconnects are widely used in high-performance computer systems, they aren't usually deployed to carry traffic long distances. Instead, InfiniBand traffic is typically converted into TCP/IP packets at the edge of each endpoint — carried over 10 Gigabit Ethernet (10GigE) or some other transport — and converted back to InfiniBand at the other end. However, a few vendors, such as Obsidian Research and Network Equipment Technologies, have started offering InfiniBand-over-wide-area devices, which allow the traffic to stay in InfiniBand for the whole journey.
Oak Ridge officials wanted to test how well those long-distance InfiniBand connections could work in comparison to some specialized forms of TCP/IP over 10 Gigabit Ethernet.
Using DOE’s experimental circuit-switched test-bed network UltraScienceNet, the researchers set up a 10 Gigabit optical link that stretched 8,600 miles round-trip between Oak Ridge — which is outside Knoxville, Tenn. — and Sunnyvale, Calif., via Atlanta, Chicago and Seattle.
At each endpoint, they set up a series of Obsidian Research's Longbow XR InfiniBand switches, which extend InfiniBand across the wide area. The network was a dual OC-192 Synchronous Optical Network, which could support a throughput as fast as 9.6 gigabits/sec.
Overall, the researchers found that InfiniBand worked well at transferring large files across great distances via a dedicated network. For shorter distances, HTCP ruled: It could convey 9.21 gigabits/sec over 0.2 mile, compared with InfiniBand's 7.48 gigabits/sec. But as the distance between the two endpoints grew, HTCP's performance deteriorated. In contrast, InfiniBand's throughput held steady as the mileage increased.
However, Rao said HTCP was more resilient on networks that carry additional traffic. That is not surprising because TCP/IP was designed for time-sharing networks — that is, networks that carry traffic between multiple endpoints. Tweaking TCP/IP to take full advantage of a dedicated network takes considerable work and still might not produce optimal results, Rao added.
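A back-of-the-envelope calculation suggests why that tuning is so hard. TCP can only keep a link full if its window covers the link's bandwidth-delay product, and over thousands of miles that product is enormous. The sketch below estimates it for this test's figures (8,600-mile round-trip path, 9.6 gigabits/sec), assuming — these assumptions are not from the article — that light travels through optical fiber at roughly two-thirds the speed of light and that propagation delay dominates the round-trip time.

```python
# Rough bandwidth-delay product (BDP) for the Oak Ridge-Sunnyvale test link.
# Assumption: signal speed in fiber is ~2/3 of c; queuing and switching
# delays are ignored, so the real RTT would be somewhat higher.

C_FIBER_MILES_PER_SEC = 186_000 * 2 / 3   # ~124,000 miles/sec in fiber
ROUND_TRIP_MILES = 8_600                  # round-trip path length from the test
LINK_BITS_PER_SEC = 9.6e9                 # dual OC-192 capacity cited in the article

rtt_sec = ROUND_TRIP_MILES / C_FIBER_MILES_PER_SEC
bdp_bytes = LINK_BITS_PER_SEC * rtt_sec / 8

print(f"propagation-only RTT ~ {rtt_sec * 1000:.0f} ms")
print(f"bandwidth-delay product ~ {bdp_bytes / 1e6:.0f} MB")
```

The result is a round-trip time near 70 milliseconds and a bandwidth-delay product in the tens of megabytes — far beyond TCP's default buffer sizes, which is why sustaining full throughput demands window scaling, large socket buffers and careful congestion-control tuning at both ends.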
In conclusion, the researchers found that InfiniBand "somewhat surprisingly [offers] a potential alternate solution for wide-area data transport."
The Defense Department and DOE’s High Performance Networking Program supported the research.