New backbone pumps up DISN
- By William Dutcher
- Jan 13, 1997
Defense Department users can look forward to faster e-mail, file transfers and World
Wide Web surfing after the Defense Information Systems Network's transition to an
asynchronous transfer mode backbone and consolidated multiplexer access.
DISN supplies the global wide area backbone for NIPRnet and SIPRnet, the Nonclassified
IP Router Network and the Secret IP Router Network for sensitive data. DISN also serves
the smaller Joint Worldwide Intelligence Communications Network, or JWICS.
To confuse matters, the Pentagon also uses the term DISN to describe an umbrella WAN
for the larger Defense Information Infrastructure. The DII includes other special-purpose
networks such as the Defense Simulation Internet and the WANs that supply voice service
over the Defense Switched Network.
Ever since the Defense Information Systems Agency migrated Defense subscribers, hosts
and networks from the older Defense Data Network to the DISN Data Services networks last
April, all NIPRnet and SIPRnet communications have been routed over the DISN collection of
high- and medium-speed point-to-point circuits.
For the past two or three years, those
circuits have terminated at IDNX statistical multiplexers supplied by Network Equipment
Technologies Inc. of Redwood City, Calif. The IDNX muxes manage the bandwidth on the
point-to-point circuits that carry data among military installations worldwide.
DISN's presence eliminates the need for DOD agencies to set up separate global
connectivity. But DOD customers can't plug directly into the IDNX muxes or the DISN
trunks. Instead, subscriber networks and hosts must connect to Cisco Systems Inc. routers
owned by DISA that comprise the NIPRnet and the SIPRnet.
So, although DISN is the backbone WAN that provides data transport, DISA's subscribers
interface directly to NIPRnet and SIPRnet routers, which access the long-haul DISN trunks.
Under the old DDN structure, each classified and unclassified network had its own WAN.
This led to duplicate links, which get very expensive in a worldwide WAN. Under the new
DISN Data Services network structure, both NIPRnet and SIPRnet share the same circuits.
Mixing classified and unclassified traffic on these circuits raises many security
issues. However, the NIPRnet and SIPRnet are isolated from each other by separate routing
tables, by encryption on the SIPRnet, and by the IDNX muxes, which direct traffic across
single links in adjacent but separate channels.
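The separation described above can be illustrated with a minimal sketch. This is not DISA's actual configuration; the network names are real, but the prefixes, router names, and channel numbers are invented for the example. The point is that two networks sharing one physical trunk stay isolated when each router consults only its own routing table and each network rides its own mux channel.

```python
# Hypothetical illustration of per-network isolation on a shared trunk.
# Prefixes, router names and channel numbers are invented for this sketch.

CHANNELS = {"NIPRnet": 1, "SIPRnet": 2}  # separate channels on the shared circuit

ROUTING_TABLES = {
    "NIPRnet": {"10.1.0.0/16": "niprnet-router-east"},
    "SIPRnet": {"10.2.0.0/16": "siprnet-router-east"},
}

def forward(network, prefix):
    """Look up a next hop using only the table for this network.

    A prefix known only to the other network is unreachable from here,
    which is the isolation property the separate tables provide.
    """
    table = ROUTING_TABLES[network]
    if prefix not in table:
        raise LookupError(f"{network} has no route for {prefix}")
    return table[prefix], CHANNELS[network]
```

A NIPRnet lookup for a SIPRnet-only prefix fails outright; in the real networks, SIPRnet traffic is additionally encrypted before it reaches the shared circuit.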
DISA's target data rate for the trunks initially was 1.5-megabit/sec T1, but in many
cases there are slower 512-, 256- and 128-kilobit/sec circuits connecting the IDNX muxes.
Particularly in the Pacific, T1 circuits either cost too much or can't be justified by
low traffic loads. In some places, they simply aren't available.
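Back-of-the-envelope arithmetic shows what those circuit speeds mean in practice. The sketch below assumes a 10-megabyte file (decimal megabytes) and ignores protocol overhead and link sharing, so real transfers would be slower.

```python
# Rough transfer times for a 10 MB file on the circuit speeds mentioned above.
# Assumes zero protocol overhead and a dedicated link -- a best case.

def transfer_seconds(size_bytes, rate_bits_per_sec):
    return size_bytes * 8 / rate_bits_per_sec

FILE = 10 * 1_000_000  # 10 MB, decimal megabytes for simplicity
for name, rate in [("T1", 1_544_000), ("512k", 512_000),
                   ("256k", 256_000), ("128k", 128_000)]:
    print(f"{name:>4}: {transfer_seconds(FILE, rate):6.1f} sec")
```

At T1 speed the file moves in under a minute; on a 128-kilobit/sec circuit the same transfer takes more than ten minutes, which is why the slower Pacific links were a persistent sore point.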
DISA leases the DISN circuits from a variety of U.S. long-distance carriers, commercial
satellite companies and overseas providers. In parts of Europe and the Far East, DISA runs
its own communications networks. For example, the Digital European Backbone (DEB) is a
microwave system in southern Europe that supplies DISN circuits as well as other data and
voice circuits.
Although it serves as the long-haul sinew of the DISN Data Services networks, DISN
wasn't created as a fresh, whole design. In fact, the DISN, NIPRnet and SIPRnet were
largely built from pieces of other DOD networks.
For example, DISA absorbed the Navy's router network, NAVnet, and the Defense Logistics
Agency's large DLAnet.
These acquisitions did give DISA a worldwide data network, but it was configured for
the needs of disparate services and agencies.
From its 1994 inception, the NIPRnet, the biggest and most heavily used of the
networks, had problems with convoluted routing, inadequate backbone bandwidth and service
outages. The NIPRnet came online just as traffic started pouring through it out onto the
Internet, swamping the backbone's capacity.
In the fall of 1995 DISA started upgrading many DISN circuits to T1s to resolve the
most pressing bandwidth problems. But this was only a temporary fix because the network
still needed significantly more bandwidth. Traffic quadrupled in 1995, mostly because of
growing Internet use.
DISA is upgrading the DISN circuits again by installing a 45-megabit/sec asynchronous
transfer mode network to connect NIPRnet routers in the continental United States.
At the same time, DISA has reorganized the NIPRnet routers within the U.S. into 12
regions. Core routers in each group connect directly through the ATM cloud instead of
through lower-level hub router links.
From an operational standpoint, the top-level ATM cloud will bypass lower-speed links
that may pass through several routers. Instead, long-haul traffic will jump quickly to the
ATM cloud, then exit at a NIPRnet router closest to its destination. If the ATM backbone
works well, DISA plans to expand it to the rest of the NIPRnet.
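The routing change described above can be sketched with a hop-count comparison. The topology below is invented for the example: in the old design, cross-country traffic climbed through a chain of intermediate hub routers; with core routers attached directly to the ATM cloud, the same traffic crosses in two hops.

```python
# Illustrative sketch (topology invented for this example): hop counts in a
# hub-chained design versus core routers meshed over an ATM cloud.
from collections import deque

def hops(graph, src, dst):
    """Breadth-first search for the minimum hop count from src to dst."""
    seen, queue = {src}, deque([(src, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == dst:
            return dist
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return None  # unreachable

# Old: regional routers chained through intermediate hub routers.
hub = {"east": ["hub1"], "hub1": ["hub2"], "hub2": ["hub3"], "hub3": ["west"]}
# New: both core routers attach directly to the ATM cloud.
atm = {"east": ["cloud"], "cloud": ["west"]}

print(hops(hub, "east", "west"))  # 4 hops through the hub chain
print(hops(atm, "east", "west"))  # 2 hops via the ATM cloud
```

Fewer router hops means fewer store-and-forward delays and fewer slow intermediate links in the path, which is the operational payoff of the ATM cloud.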
In another ATM initiative, DISA is installing a second 45-megabit/sec ATM network to
connect the U.S. parts of the NIPRnet with the NIPRnet segments in Europe and the Pacific,
and with the Internet. This second ATM network, the Joint Interconnection Service (JIS),
will give NIPRnet users worldwide a high-speed backbone as well as a 45-megabit/sec path
to and from the Internet.
A more subjective aspect of the new ATM backbone is marketing. As DISA moves into a
bleeding-edge technology like ATM, it must attract more DOD customers to share the costs.
To reduce the expense of local access circuits to NIPRnet routers, DISA plans to move
the IDNX network outside DISN, consolidating access circuits on the lower-cost IDNX
network.
William Dutcher, a senior network analyst for Science Applications International Corp.,
teaches courses on DOD networks and can be reached at email@example.com.