What's HSI, and why do NOAA and Air Force use it?

BY JOHN BUSH | SPECIAL TO GCN

HSI is in the news.

The Air Force is using high-speed interconnect (HSI) technology for its next-generation ballistic missile early warning system.

The National Oceanic and Atmospheric Administration's Geophysical Fluid Dynamics Laboratory in Princeton, N.J., will use HSI to plot the course of hurricanes and predict global warming phenomena.

At NASA's Ames Research Center, HSI will support computationally intensive climate modeling, nanotechnology and biological simulations. And the National Science Foundation has earmarked a significant portion of its research budget this year for exploration of future HSI applications.


align="right" width="220">

size="2" color="#FF0000">Geophysical Fluid Dynamics Lab researchers use HSI for supercomputer projects such as predicting sea ice thickness over 100 years.

As the lead agency in the government's information technology research, NSF has set aside $153 million, more than 20 percent of its fiscal 2001 budget, for research in advanced computing and network infrastructures. Much of the money will go to university and private partnerships studying computer clustering and high-speed online storage.

In the lead

HSI has flourished over the last few years along with advances in fiber-optic technology. Its leading interconnect architectures are Fibre Channel, typically used for high-speed online storage, and nonuniform memory access (NUMA), which supports online computer clustering and memory sharing.

With channel speeds up to 2 Gbps, Fibre Channel technology is a set of American National Standards Institute standards specifying network configurations and support for legacy network equipment interfaces. The most common configuration is Fibre Channel-Arbitrated Loop.

Although physically configured as a loop, FC-AL is actually hub- or switch-based. Using queuing algorithms that weigh access requests and device priority, an FC-AL switch grants logical point-to-point connectivity between devices on the physical loop.
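
To make that arbitration concrete, here is a minimal sketch of the rule most often described for FC-AL: the requesting port with the lowest arbitrated loop physical address (AL_PA) wins the loop, with a simple fairness window so a frequent winner cannot starve the others. It is a toy model for intuition only; the class and method names are invented for this example and are not switch firmware.

```python
# Toy model of FC-AL arbitration: the lowest AL_PA wins, with a
# fairness window so a winner cannot re-arbitrate until the other
# requesters have had a turn. Names are illustrative, not a real API.

class LoopArbiter:
    def __init__(self):
        self.fairness_window = set()      # AL_PAs that already won this window

    def grant(self, requesting_al_pas):
        """Return the AL_PA granted the loop, or None if there are no requests."""
        if not requesting_al_pas:
            return None
        eligible = [p for p in requesting_al_pas if p not in self.fairness_window]
        if not eligible:                  # every requester has had a turn; reset
            self.fairness_window.clear()
            eligible = list(requesting_al_pas)
        winner = min(eligible)            # lower AL_PA means higher priority
        self.fairness_window.add(winner)
        return winner

arbiter = LoopArbiter()
print(hex(arbiter.grant({0x01, 0x02, 0x04})))   # 0x1 wins first
print(hex(arbiter.grant({0x01, 0x02, 0x04})))   # 0x2 wins next, by fairness
```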

According to the Fibre Channel Industry Association, a 2-Gbps FC-AL loop can support up to 126 device nodes with full-duplex data transfer rates of 200 MBps. Such fast channel data rates let hubs or switches manage point-to-point connections among as many as 35 drives at a sustained throughput of about 10 MBps per drive.
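
For readers who want to check the arithmetic, the short calculation below shows how a 2-Gbps link works out to roughly 200 MBps of payload in each direction once the usual 8b/10b encoding overhead is assumed, and what that leaves per drive when the full-duplex bandwidth is shared among 35 drives. The 80 percent encoding efficiency is an assumption of this sketch, not a figure from the association.

```python
# Rough bandwidth arithmetic for a 2-Gbps Fibre Channel loop.
# The 8b/10b encoding overhead (80% efficiency) is an assumption
# of this sketch.

line_rate_bps = 2_000_000_000            # 2-Gbps line rate
payload_bps = line_rate_bps * 8 / 10     # 8b/10b: 10 line bits carry 8 data bits
payload_mbyte_s = payload_bps / 8 / 1_000_000
print(f"payload per direction: {payload_mbyte_s:.0f} MBps")       # ~200 MBps

drives = 35
per_drive = 2 * payload_mbyte_s / drives  # both directions shared across drives
print(f"per drive, full duplex: {per_drive:.1f} MBps")            # ~11 MBps
```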

Because FC-AL switches operate at the physical and data link network layers, the interface command sets of enterprise legacy equipment can easily piggyback over Fibre Channel links. Fibre Channel supports standard command sets such as IP device protocols, SCSI, High-Performance Parallel Interface, RAID and standard digital audio-video interfaces.
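
The sketch below illustrates that layering idea: an upper-level command, here a SCSI read, rides inside a Fibre Channel frame whose header merely tags which protocol the payload carries, so the loop hardware never interprets the command itself. The field names are simplified for illustration, and the type codes are the commonly cited values rather than a reproduction of the actual frame format.

```python
# Simplified illustration of protocol mapping over Fibre Channel: the
# frame header carries a type tag for the upper-level protocol, and the
# payload is the legacy command, passed through untouched. Field names
# are illustrative; the type codes are commonly cited values.

FC4_TYPES = {"SCSI-FCP": 0x08, "IP": 0x05}

def build_frame(protocol, source_id, dest_id, payload):
    return {
        "header": {
            "type": FC4_TYPES[protocol],   # tells the receiver how to decode
            "s_id": source_id,
            "d_id": dest_id,
        },
        "payload": payload,                # e.g., a raw SCSI command block
    }

scsi_read10 = bytes([0x28, 0, 0, 0, 0, 0x10, 0, 0, 8, 0])  # READ(10) command block
frame = build_frame("SCSI-FCP", source_id=0x01, dest_id=0x02, payload=scsi_read10)
print(hex(frame["header"]["type"]), len(frame["payload"]))
```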

Connections are self-managing and place few requirements on device input/output ports. FC-AL devices are hot-swappable, ensuring easy maintenance and redundancy.

One of the most common uses for Fibre Channel is in high-speed online storage of enterprise database resources. FC-AL storage area networks are fast enough for data warehousing and data mining applications.

Another common use for SANs is to back up critical data on enterprise LANs without tying up bandwidth. In backup and server mirroring configurations, key LAN servers connect by hub to an FC-AL switch that backs them up to off-network storage devices.

FC-AL can extend an optical channel for 10 kilometers without signal boosters, making it ideal for disaster-recovery backups to secure facilities miles away.

FC-AL can also serve online multidevice cluster applications. Because of their point-to-point nature, the clusters are useful mainly for optimizing network resources, or for load-balancing large enterprise applications.

In contrast with FC-AL, supercomputing with linked processors requires the multiconnection capabilities of NUMA.

Legos, anyone?

Designed for tightly coupled parallel processing, NUMA architectures let agencies build systems brick by brick, adding modules as requirements dictate.

Enterprise application software can run on NUMA systems without modification. NUMA presents an application or operating system with a single, logical memory segment that maps to physical processor memory on physically separate nodes.

A NUMA interconnect functions like a computer's internal processor bus. Modules interact via a high-speed, bidirectional channel. Depending on the configuration, NUMA systems can support hundreds of nodes made up of thousands of processors.
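
A minimal sketch of that single-address-space idea follows: applications see one flat range of addresses, and the mapping logic decides which node's physical memory holds each block, so a remote reference simply costs more than a local one. The block-interleaved mapping and the latency numbers are assumptions of this example, not SGI's design.

```python
# Minimal sketch of a NUMA-style single address space: one flat range
# of addresses is block-interleaved across nodes, and a remote access
# simply costs more than a local one. The mapping policy and latency
# figures are assumptions of this example, not any vendor's design.

NODES = 4
BLOCK = 4096                        # bytes mapped to one node at a time
LOCAL_NS, REMOTE_NS = 100, 400      # illustrative access latencies

def home_node(addr):
    """Node whose physical memory holds this global address."""
    return (addr // BLOCK) % NODES

def access_cost_ns(addr, requesting_node):
    return LOCAL_NS if home_node(addr) == requesting_node else REMOTE_NS

addr = 3 * BLOCK + 128
print(home_node(addr))                          # node 3 owns this block
print(access_cost_ns(addr, requesting_node=3))  # local reference: 100 ns
print(access_cost_ns(addr, requesting_node=0))  # remote reference: 400 ns
```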

SGI late last year contracted with NOAA's Geophysical Fluid Dynamics Lab to implement a NUMA-based system on 1,152 processors. The new supercomputer will comprise 10 SGI Origin 3800 servers, replacing a number of Cray supercomputers.

The lab will use the NUMA system to run modeling applications that divide the Earth into large, interacting sectors. Performance is expected to reach 922 billion floating-point operations per second.
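
To make that scale concrete, the sketch below splits a global latitude-longitude grid into rectangular sectors, one per processor, which is the usual way such models are parallelized, and also checks what 922 billion operations per second spread over 1,152 processors works out to per CPU. The grid dimensions and the even 24-by-48 split are assumptions of the example, not the lab's actual configuration.

```python
# Illustrative domain decomposition: split a global lat/lon grid into
# rectangular sectors, one per processor. The grid size and the even
# 24 x 48 split are assumptions of this sketch, not GFDL's setup.

LAT_CELLS, LON_CELLS = 1440, 2880    # hypothetical global grid
PROC_ROWS, PROC_COLS = 24, 48        # 24 * 48 = 1,152 processors

def sector(rank):
    """Return the (lat, lon) cell ranges owned by one processor."""
    row, col = divmod(rank, PROC_COLS)
    lat_span = LAT_CELLS // PROC_ROWS
    lon_span = LON_CELLS // PROC_COLS
    return ((row * lat_span, (row + 1) * lat_span),
            (col * lon_span, (col + 1) * lon_span))

print(sector(0))      # ((0, 60), (0, 60))
print(sector(1151))   # ((1380, 1440), (2820, 2880))

# 922 billion floating-point operations per second across 1,152 CPUs:
print(f"{922e9 / 1152 / 1e6:.0f} Mflops per processor")   # ~800 Mflops
```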
