Conference hashes out supercomputers, Next Generation Net

ORLANDO, Fla.—The Next Generation Internet will connect at least 100 federal and
university research facilities at rates 100 times faster than are now possible, according
to the National Science and Technology Council.


NGI will bring benefits in supercomputing collaboration, digital libraries, remote
access and simulation, NSTC members said last month at the Institute of Electrical and
Electronics Engineers Computer Society’s High-Performance Networking and Computing
conference. The price, officials said, could run as high as $2 billion over the next five
years.


They also said advanced speech recognition will get a boost from the Navy’s
installation of a standardized speech application programming interface aboard the USS
Coronado, an information warfare command ship.


The conference drew competing claims by vendors and federal research facilities over
which has the fastest supercomputer (GCN, Nov. 23, Page 13).


The Energy Department’s Los Alamos National Laboratory in Los Alamos, N.M.,
announced that its Silicon Graphics Inc. Blue Mountain supercomputer can execute 1.3
trillion floating-point operations per second and eventually as many as 3 teraFLOPS.


Energy’s Brookhaven National Laboratory of Upton, N.Y., claimed to have built the
world’s fastest multipurpose, noncommercial supercomputer, which executes 0.6
teraFLOPS. The 9-foot-high, water-cooled QCDSP unit cost $1.8 million and will serve
quantum chromodynamics researchers.


Show visitors in Orlando viewed a 3-D simulation of a collision between neutron stars,
as predicted by Albert Einstein’s general theory of relativity, computed
simultaneously on Cray Research Inc. T3E supercomputers in Berlin and at the San Diego
Supercomputer Center.


“This is a demonstration of the power of distributed computing,” said Jason
Novotny, representing the National Laboratory for Applied Network Research. “We are
coupling supercomputers and high-speed networks to handle very powerful simulations.”


During the conference, a group of hardware and software companies announced
specifications for OpenMP APIs for C and C++, which would let programmers design or modify
parallel applications more easily. Compaq Computer Corp., Hewlett-Packard Co., IBM Corp.,
Intel Corp., Kuck & Associates Inc. of Champaign, Ill., Silicon Graphics and Sun
Microsystems Inc. joined in the announcement.


They said the OpenMP APIs will make it easier for software vendors and in-house
developers to write efficient parallel applications that will run on multiple hardware
platforms under Microsoft Windows NT or Unix.


Silicon Graphics’ chief scientist, Greg Chesson, outlined improvements in the
Mountain View, Calif., company’s Gigabyte System Network (GSN) linkage architecture
for internal system transmissions.


“Having reliable links means having some sort of error detection and
retransmission, or possibly forward error correction,” Chesson said. “GSN uses
32 data bytes and eight control bytes for error detection in a micropacket, plus a sliding
window protocol that supports retransmission of damaged micropackets.”


In scheduled transfers over a GSN link, both end points must be prepared by a so-called
handshake before any data is transmitted. “There are two flavors of the memory
allocation handshake,” Chesson said. “One provides memory that is used once; the
other provides memory that is used arbitrarily many times until released.”  
