Push is on for data broadcasts

Push-pull technology made quite a splash last year and still has momentum for
agencywide broadcasts. For example, it can be used to standardize every user on a
particular version of software.

Although some federal sites forbid subscribing to outside push channels that monopolize
network bandwidth, push technology can effect change at thousands of desktop PCs better
than broadcast e-mail attachments or simple pointers to an intranet site.

Push is distribution at the sender's initiative: A central server sends information to clients on its own schedule, and the clients must receive it. Pull is distribution on demand: Each client determines what information it will receive and on what schedule.

Push is valuable when a lot of data must be widely and quickly distributed. Pull may be
more efficient for distributing a great variety of information to recipients with
different needs.
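The two models can be sketched in a few lines of Python. This is a minimal in-memory illustration only; the class and method names are hypothetical, not taken from any push product.

```python
class PushClient:
    """Push: the client passively receives whatever the server sends."""
    def __init__(self):
        self.inbox = []

class PushServer:
    """Push: the server decides what goes out and when."""
    def __init__(self):
        self.clients = []

    def register(self, client):
        self.clients.append(client)

    def broadcast(self, item):
        # Server-initiated: every registered client receives the item now.
        for client in self.clients:
            client.inbox.append(item)

class PullClient:
    """Pull: the client decides what to fetch and on what schedule."""
    def __init__(self, interests):
        self.interests = set(interests)
        self.inbox = []

    def poll(self, store):
        # Client-initiated: fetch only the topics this client cares about.
        for topic, item in store.items():
            if topic in self.interests:
                self.inbox.append(item)

# Push reaches every registered desktop at once; pull lets each
# client filter to what it needs.
server = PushServer()
a, b = PushClient(), PushClient()
server.register(a)
server.register(b)
server.broadcast("software update 1.1")

store = {"budget": "FY99 figures", "sports": "scores"}
c = PullClient(interests=["budget"])
c.poll(store)
```

The tradeoff shows up directly: the push server reaches every client with one call, while each pull client does its own filtering work on its own schedule.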

The computer industry has concentrated far more on push than on pull, perhaps because the Web constitutes a remarkably effective pull mechanism on its own when security isn’t a big concern.

Several push systems are commercially available to ease the distribution task for both
providers and recipients.

Such systems are not necessarily interoperable, however. They consume somewhat less network bandwidth than blanket e-mail broadcasts, but there’s still no easy, broad-brush solution.

Timing can be critically important in selecting a distribution mechanism. When and how
often must information be distributed? How intelligent is the scheduling mechanism, and
can it work around unexpected network loads? How are priorities set? How critical is the
timeliness and speed of distribution?

Availability is another consideration. What happens if a client cannot receive a
broadcast? Does the information go to local server storage or is it retransmitted later?

Bulky video or large multimedia graphics broadcasts can bring almost any network to its
knees. Compression, though helpful, is far from a panacea. Most push products include
technology to trim bandwidth requirements.

Location matters, too. If you’re distributing executable software, not just simple
messages, where will the distribution be stored on the client machine? This is critical
for proper operation of the software, but there’s not always a simple answer.

In IP multicasts, the administrator generally controls where the transmission goes. But
what if a user has changed the client configuration or directory structure? Even simple
changes of directories or disk volume names can render a transmitted program unusable. In
such cases, the push system must force the user to take a more active part in loading and
storage—in effect, to pull harder.

In the push model, each client computer has a small agent component, and the
administrator controls the data distribution from a central server. An early example of
push was the PointCast Network from PointCast Inc. of Sunnyvale, Calif., familiar to
federal employees through its government FedCast channel. A client screen saver displays
short items broadcast over the Internet.

The television metaphor pioneered by Castanet from Marimba Inc. of Mountain View,
Calif., tunes in channels of information via client software. Microsoft Corp. has made
this an optional part of its Active Desktop under Internet Explorer 4.0 and Microsoft
Windows 98.

Another mode of distribution, called publish and subscribe, publishes the message to
several central sites, and each client is configured to monitor one of the sites. When
there is a change, the client downloads the new information.
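The publish-and-subscribe cycle can be sketched with a version stamp that clients compare against what they last saw. The `Site` and `Subscriber` names here are hypothetical, for illustration only.

```python
class Site:
    """One of the central sites a message is published to."""
    def __init__(self):
        self.version = 0
        self.content = None

    def publish(self, content):
        # Each publication bumps a version stamp clients can compare.
        self.version += 1
        self.content = content

class Subscriber:
    """A client configured to monitor one site for changes."""
    def __init__(self, site):
        self.site = site
        self.last_seen = 0
        self.local_copy = None

    def check(self):
        # Download the new information only when the site has changed.
        if self.site.version > self.last_seen:
            self.local_copy = self.site.content
            self.last_seen = self.site.version
            return True
        return False
```

A real system would poll over the network (for instance, by comparing a timestamp on each check), but the control flow is the same: publish to the site, and each monitoring client downloads only on change.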

The most promising technology at the moment is IP Multicast, an Internet method for
efficiently distributing multiple files and multimedia programs.

As with most server-level products, choosing a push server is not simple. Products
differ quite a bit in features and price. Some are better at straightforward file
transfer; others excel at real-time multimedia transmissions. Some require a constant
connection to the user; others are good for interrupted channels.

Starburst Multicast from Starburst Communications Corp. of Concord, Mass., is expensive and designed for large organizations, and it mainly does file transfers.

Marimba’s Castanet 3.1 and Rendezvous 4.2 from Tibco Inc. of Palo Alto, Calif.,
are good for refreshing volatile file content.

PointCast excels at delivering small amounts of data quickly and efficiently.

Organizations that have built their intranets around mostly Microsoft platforms would
benefit from CyberPrise 2.0 from Wall Data Inc. of Kirkland, Wash., and BackWeb Infocenter
4.0 from BackWeb Technologies Inc. of San Jose, Calif.

Sites that need to push multimedia, especially video, should look at Wayfarer 4.0 from
Wayfarer Communications Inc. of Mountain View, Calif.; NetPresenter 3.0 from NetPresenter
B.V. of the Netherlands; Intel Corp.’s ProShare; and Microsoft NT Server’s
NetShow Streaming Media Services.

If distributions will focus on remote workers who aren’t always online, consider
RemoteWare Express from Xcellenet Inc. of Atlanta.

Russell Kay of Worcester, Mass., has been writing and consulting about computer
hardware and software for 17 years.

First came the telephone. It operated in real time, requiring sender and receiver to be
physically available at the same time. The sender had to know the receiver’s phone
number, and there was no permanent record of the call.

Then came fax. It eliminated the need for simultaneous connections and separated
message creation from transmission and reading. It also kept hard copy. Fax was great for
graphical elements such as diagrams and signatures, although quality and speed were low.

E-mail likewise did not require schedule coordination. It was faster and less
error-prone than fax, and it kept a storable, searchable record. Although e-mail was
limited at first to internal networks, the Internet has made it global.

Each of these communication modes began as point-to-point, one-sender-to-one-receiver
vehicles. Over time, each expanded to a broader scale: conference calls, fax broadcasting
and mail list servers.

A computer network has three ways to send identical information to multiple receivers:
broadcast, multiple unicast and multicast.

Broadcasting is sending a single stream of data to every station whether or not the
user wants it. Because the mechanism suits few real situations, and because rebroadcasting
degrades network speed, routers are often configured to block all broadcast data.

Multiple unicasting—a separate message to each node—is more effective but
less efficient. It chews up server and client processing cycles and hogs bandwidth.
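The efficiency gap is easy to quantify. Here is a back-of-the-envelope sketch; the 50M update and the 1,000-desktop agency are numbers chosen for illustration, not figures from the text.

```python
def unicast_traffic(payload_bytes, receivers):
    # Multiple unicast: a full copy leaves the server for each receiver.
    return payload_bytes * receivers

def multicast_traffic(payload_bytes, receivers):
    # Multicast: a single stream leaves the server regardless of group size.
    return payload_bytes

update = 50 * 1024 * 1024      # a 50M software distribution
desktops = 1000
print(unicast_traffic(update, desktops) // multicast_traffic(update, desktops))
# prints 1000: unicasting puts 1,000 times the traffic on the server's link
```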

To bridge the gap, the Internet Engineering Task Force’s Request for Comments 1112
proposes a standard called IP Multicast, which defines extensions to the Internet
Protocol. IP Multicast sends out a single data stream picked up only by stations that have
joined the multicast group. Other stations filter out the foreign packets in hardware at
the token-ring or Ethernet level.

Multicast works only through multicast-enabled routers, so networks or segments without such routers never see multicast traffic. Also, multicast routers for segments that have no users in a particular multicast group will ignore that specific stream.

The best-known IP Multicast implementation now is Starburst Communications’
Multicast File Transfer Protocol for file distribution.

MFTP cannot handle real-time applications such as videoconferencing. It excels,
however, at distributing software and transferring business-critical information quickly.
It works well over any type of link, from satellite to WAN to dial-up.

Lucent Technologies Inc.’s Reliable Multicast Transport Protocol handles file
transfer and real-time and near-real-time applications equally well.

But a lack of applications, combined with insufficient router and switching support,
continues to block significant IP Multicast deployment. That may change as a result of the
IP Multicast Initiative. See the details at http://www.ipmulticast.org/.

One communication vehicle that was many-to-many from the start was the newsgroup,
embodied in the Internet’s Usenet.

Newsgroups have tended to be unmoderated and unrestricted; anyone could read anything
posted to a newsgroup, and anyone could post a message. Besides the advantages of access
and interactivity, the newsgroup’s great strength is the ability to thread
discussions about a given topic regardless of intervening off-topic messages.

But Usenet, especially the uncontrolled alt.* groups, quickly acquired a reputation for
abusive flame wars. The more technical groups have evolved into effective discussion
forums. As a content source, however, the Usenet has been largely inapplicable to most
government uses.

The technology that drives it is a different matter. Network News Transfer Protocol
servers such as Netscape Communications Corp.’s Collabra and those included with most
Web servers have joined with newsreader software in browsers and freestanding programs
such as FreeAgent from Forte Inc. of Carlsbad, Calif., to serve organizational purposes.

All an organization has to do is add a level of security through passwords and secure
Internet connections, and it can conduct time-independent, interactive, multiple-access
internal discussions.

The same Internet news technology has been adopted by many organizations in a privately
administered, public-access manner to deliver customer service, distribute information or
hold focused discussions.

NNTP offers a quicker alternative to the Web’s Hypertext Transfer Protocol. Users can access newsgroups easily via ordinary browsers or newsreaders over any dial-up or direct Internet connection.
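The protocol itself is simple enough to sketch with standard-library sockets; NNTP, defined in RFC 977, exchanges CRLF-terminated ASCII command lines on port 119. The host and group names a caller would pass in are up to the site; this is an illustration, not a full newsreader.

```python
import socket

def nntp_command(verb, *args):
    # NNTP (RFC 977) commands are single CRLF-terminated ASCII lines.
    return (" ".join((verb,) + args) + "\r\n").encode("ascii")

def parse_status(line):
    # Server replies start with a three-digit code; 2xx means success.
    code = int(line.split()[0])
    return code, 200 <= code < 300

def select_group(host, group, port=119):
    # Sketch only: connect to a news server, select one group, and
    # return the status code of the GROUP command.
    with socket.create_connection((host, port)) as sock:
        stream = sock.makefile("rwb")
        stream.readline()                      # discard server greeting
        stream.write(nntp_command("GROUP", group))
        stream.flush()
        return parse_status(stream.readline().decode("ascii"))
```

A successful GROUP reply (code 211) carries the article count and range, which is all a newsreader needs before fetching individual articles.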
