The server's the crux of the LAN matter

The GCN Lab recently surveyed the desktop computing landscape [GCN, Oct. 20, 1997, Page
23]. Now here's a map for what's down the network road.


At the hub of enterprise networks, servers direct the flow of data to computers,
peripherals and other servers. They receive data from all those sources and others such as
digital cameras, remote-access servers and mainframes.


Once there were only two types of PC LAN servers: file-and-print and application
servers. File-and-print servers furnish shared storage space and printer access.
Application servers do more: they provide a shared environment for applications such as
Microsoft Exchange e-mail, Lotus Domino programs or Oracle Corp. databases.


Under the Microsoft Windows NT Server network operating system, that line of separation
has blurred. NT runs application services, but sometimes it also does file service.


Adding to the confusion is the growth of intranet and Internet Web sites, which also
have servers. Most often, Web servers are file servers, but occasionally they act as
application servers.


A more precise terminology is evolving as the market distinguishes entry-level or
workgroup servers from midrange or departmental servers, and from high-end or enterprise
servers.


Processors. How will these server categories match up with the processors now
arriving? Intel Corp.'s road map says low-end servers have either Pentium II or Pentium
Pro processors, while the high end still belongs to the Pentium Pro.


But the Pentium Pro's successor will come from the Deschutes chip family, which Intel
makes with a new 0.25-micron process.


A symmetric multiprocessing (SMP) Deschutes-type chip will debut in June, fitting into
a new Slot 2 system bus that's faster than the current Pentium II Slot 1 bus.


Slot 2 machines also will sport a 100-MHz external bus that links multiple processors.
Some server makers already are designing eight- and 16-way servers based on this new
processor. Industry experts predict that a four-way SMP Deschutes system will outperform
today's most advanced, eight-way Pentium Pro servers.


Meanwhile, a new server chip set called 450NX will replace the current 450GX set. The
450NX will not support an Accelerated Graphics Port, because superfast graphics have
limited use on servers. But more important, the new chip set will operate a 100-MHz bus
with 2M of Level 2 cache.


Pentium II processors perform well as small-scale application servers, which do more
processing than file servers. At present, Pentium II servers are limited to two
processors, and many Pentium II server lines are limited in management and scalability
features.


Pentium Pro servers were designed to go as high as four processors, and they make much
better enterprise application servers. Some vendors build in six, eight or even 10 Pentium
Pro processors; those with more than four are proprietary designs.


The Pentium Pro's successor promises to handle multiprocessing better and to scale more
effectively at four-way and higher configurations. Unisys Corp. has plans for a 32-way
Deschutes server by the end of the year.


Down the road, Intel will join the 64-bit world with a processor code-named Merced,
developed jointly with Hewlett-Packard Co. Merced likely will run a certain amount of
32-bit code, although it is optimized for 64-bit applications.


A similar situation occurred with the Pentium Pro, which was optimized for 32-bit
applications but could handle 16-bit instructions only at a serious performance cost.


When Intel and Hewlett-Packard release Merced in the second half of next year, servers
will run even complex instructions faster than current Pentiums, thanks to 128-bit
instructions. Merced also will execute instructions in parallel, something no current PC
processor can do.


Merced's Level 2 cache will have the same clock rate as the processor; current cache is
limited to half the processor's speed. This heavy emphasis on performance means that
Merced should match up well against RISC Unix servers.


We may also see a 32- to 64-bit transition chip, code-named Tanner. It would run 32-bit
code, but on a platform with Merced-like features.


Input/output. Now let's talk about bandwidth--not the network kind, but
internal.


Whether you administer a file server, an application server or both, your bottlenecks
might not be where you think. A server stands or falls by how well it uses its resources.


Even the fastest processor gives poor server performance if the rest of the system
works too slowly.


Orchestrating data handling with the rest of the machine takes a toll on the CPU. Under
current architectures, every time a file is pulled from the disk storage subsystem and
sent to a user's network interface card, the CPU must act as the middleman.


The more user I/O requests, the slower the server. This is the Achilles' heel of
scalability.
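

To make the middleman role concrete, here is a minimal C sketch--not any vendor's actual
driver code. The "disk" and "NIC" are simulated with in-memory buffers, and the names
(disk_read, nic_write, BLOCK_SIZE) are invented for illustration. The point is only that
every block crosses the CPU twice on its way to the user.

/* Simplified sketch of the conventional I/O path: the CPU copies every
 * block off the disk subsystem into memory, then copies it again to the
 * network interface. The disk and NIC are simulated here. */
#include <stdio.h>
#include <string.h>

#define BLOCK_SIZE  4096
#define FILE_BLOCKS 256                              /* pretend file: 1M */

static unsigned char disk[FILE_BLOCKS][BLOCK_SIZE];  /* simulated disk       */
static unsigned char nic_fifo[BLOCK_SIZE];           /* simulated NIC buffer */

/* Copy one block off the simulated disk controller. */
static size_t disk_read(size_t block, void *buf)
{
    if (block >= FILE_BLOCKS)
        return 0;
    memcpy(buf, disk[block], BLOCK_SIZE);            /* CPU touches every byte */
    return BLOCK_SIZE;
}

/* Hand one block to the simulated network card. */
static void nic_write(const void *buf, size_t len)
{
    memcpy(nic_fifo, buf, len);                      /* CPU touches it again */
}

int main(void)
{
    unsigned char buf[BLOCK_SIZE];
    size_t block = 0, n, total = 0;

    /* The server CPU sits in the middle of every transfer. */
    while ((n = disk_read(block++, buf)) > 0) {
        nic_write(buf, n);
        total += n;
    }
    printf("Shuttled %zu bytes through the CPU\n", total);
    return 0;
}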


A solution is in the works from a collection of industry vendors called the I2O Special
Interest Group. Its proposed standard will give PC servers a leg up to the market's high
end.


I2O stands for intelligent input/output. I2O hardware and software will free the system
CPU from managing all the I/O. Instead, I/O functions will rely on a secondary processor
such as Intel's i960.


As more and more I/O devices join a networked system, I/O data handling becomes a
bottleneck. Multiple NICs and drives are not at all unusual on today's mixed-network
servers, which explains the huge pressure to boost system throughput.


The I2O standard will change how servers handle throughput. One goal of the standards
group is to make I/O device drivers independent of the OS. Instead, a messaging layer will
buffer the OS from the physical devices.


This layer will let OS vendors write a single driver to shuttle data between the OS and
the messaging layer. Hardware vendors will only have to write one driver from their
physical devices to the messaging layer. Servers will no longer need multiple drivers to
serve multiple environments.
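

Here is a rough C sketch of that split-driver idea--purely illustrative, not the real I2O
message formats or calls. The OS-side module knows only how to post generic request
messages to a queue, and the hardware-side module knows only how to pull them off and
drive the device.

/* Illustrative sketch of the messaging-layer idea -- not the actual I2O
 * message formats or APIs. The OS side posts requests; the hardware side
 * consumes them; neither needs to know the other's details. */
#include <stdio.h>

#define QUEUE_DEPTH 8

enum io_op { IO_READ, IO_WRITE };

struct io_msg {                      /* generic request both sides agree on */
    enum io_op op;
    unsigned   block;                /* which block to transfer */
};

/* --- the messaging layer: a simple ring buffer ------------------------ */
static struct io_msg queue[QUEUE_DEPTH];
static unsigned head, tail;

static int post_msg(struct io_msg m)         /* called by the OS-side module */
{
    if (tail - head == QUEUE_DEPTH)
        return -1;                           /* queue full */
    queue[tail++ % QUEUE_DEPTH] = m;
    return 0;
}

static int fetch_msg(struct io_msg *m)       /* called by the hardware module */
{
    if (head == tail)
        return 0;                            /* nothing pending */
    *m = queue[head++ % QUEUE_DEPTH];
    return 1;
}

/* --- OS-side module: knows the OS, not the hardware ------------------- */
static void os_request_read(unsigned block)
{
    struct io_msg m = { IO_READ, block };
    (void)post_msg(m);
}

/* --- hardware-side module: knows the device, not the OS --------------- */
static void hardware_service_queue(void)
{
    struct io_msg m;
    while (fetch_msg(&m))
        printf("device handles %s of block %u\n",
               m.op == IO_READ ? "read" : "write", m.block);
}

int main(void)
{
    os_request_read(7);
    os_request_read(8);
    hardware_service_queue();   /* under I2O this would run on an I/O processor */
    return 0;
}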


The I2O standard now is in Revision 1.5. Revision 2.0, expected later this year, likely
will support clustering, hot-pluggable PCI cards and Fibre Channel connections.


Network storage. Data storage promises to become an exciting topic in the next
year. As drives grow larger and larger, they also keep getting faster at delivering data.
Large storage systems still run up against a throughput limit, however.


Current SCSI technology tops out at a 40-megabyte/sec sustained transfer rate with a
theoretical limit of 80 megabytes/sec. Although SCSI is certainly not dead yet, it's
hampered by a parallel-bus architecture and by cable-length limits.


Fibre Channel, a serial-bus architecture originally developed for mainframes, now is
arriving alongside further improvements to SCSI itself. Adaptec Inc. of Milpitas, Calif.,
for example, is working on Ultra3 SCSI, which would have a 160-megabyte/sec burst rate.


Because SCSI storage is in more than 90 percent of networks, it will stay around for a
while. But storage vendors know they cannot afford to keep SCSI roadblocks in the way of
greedy applications and ballooning data stores.


Fibre Channel works for distances up to 30 meters on copper wiring and 10 kilometers
over fiber-optic cable. It boasts sustained data transfer rates of around 100
megabytes/sec. And it can have 128 device addresses, as opposed to SCSI's 16-device limit.
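

Using the transfer rates cited above, a few lines of C show what the jump means for a bulk
transfer. The 10G-byte data set is an assumed example, not a figure from the article.

/* Back-of-the-envelope comparison using the rates cited above.
 * The 10G-byte data set is an arbitrary, assumed example. */
#include <stdio.h>

int main(void)
{
    const double dataset_mb = 10.0 * 1024;   /* assumed 10G-byte transfer    */
    const double scsi_mb_s  = 40.0;          /* sustained SCSI rate cited    */
    const double fc_mb_s    = 100.0;         /* sustained Fibre Channel rate */

    printf("SCSI at 40 MB/s:           %5.1f minutes\n",
           dataset_mb / scsi_mb_s / 60.0);
    printf("Fibre Channel at 100 MB/s: %5.1f minutes\n",
           dataset_mb / fc_mb_s / 60.0);
    printf("Device addresses: 16 (SCSI) vs. 128 (Fibre Channel)\n");
    return 0;
}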


Fibre Channel advances could pave the way for storage area networks that separate
storage subsystems from network servers. Multiple servers could share data storage, and
administrators could add more capacity without interrupting the operations of application
servers.


Fibre Channel is still a work in progress, however. Initial outlay is high, but lower
maintenance and management overhead should bring long-term costs down.


NOSes. The choice of an OS can make or break a server. The current
candidates--Unix, Novell NetWare and Windows NT--each have strengths and weaknesses.


No new contender is on the horizon. New releases of NetWare and NT will go head to head
for the foreseeable future.


Unix is the NOS of choice for enterprise applications. It has handled large amounts of
addressable memory, clustering and multiple processors for years. Not surprisingly, the
most powerful transaction-processing servers in benchmark tests invariably are Unix boxes.


Administrative costs are steep, however. Unix definitely will not disappear, but its
market share will shrink in the midrange market.


The Unix world's conflicting factions and management issues certainly leave the door
wide open to a newcomer such as 64-bit Windows running on the Merced chip. But that
platform will not arrive until next year and will need time to settle down after release.


NT 4.0 offers great price-performance for low-end application servers but simply is not
the right environment for large-scale applications. NT 5.0, scheduled for release next
year, will do what NT already does well, only better, and add midrange features.


Active Directory Service, for example, will make NT 5.0 much easier to administer on
large networks. Most of NT 5.0's features already exist in NetWare and Unix, and in a way
you could view NT as a synthesis of the two. Although Microsoft Corp. hopes to position NT
as the magical silver-bullet NOS, it's difficult to be all things to all people.


Novell Inc. will drop the IntranetWare nomenclature with its midyear release of NetWare
5. NetWare has been a consistently excellent file-and-print server, but Novell is aiming
at the application market now.


NetWare 5 will support robust storage systems, a multitasking SMP kernel and virtual
memory. Novell also is incorporating Fibre Channel, I2O and clustering for as many as 16
servers.


Novell Directory Services already is stable and mature, but it too will receive
enhancements. So NetWare 5 could leapfrog NT in areas such as clustering.


Clusters. Clustering storage systems and servers is a hot topic as the demands
on application servers rise. Sysadmins clamor for two big advances: reliability and
load-balancing.
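

The reliability half of that wish list comes down to failover: a standby server watches
for its partner's heartbeat and takes over when it goes quiet. The C sketch below is a
bare-bones illustration of that idea, not how Microsoft Cluster Server or Novell's planned
clustering actually works; the heartbeat history and threshold are invented.

/* Bare-bones failover logic: if the primary misses enough heartbeats,
 * the standby takes over its workload. Purely illustrative -- not the
 * mechanism any shipping cluster product uses. */
#include <stdio.h>

#define MISSED_LIMIT 3          /* heartbeats missed before failover */

/* Simulated heartbeat history for the primary: 1 = heard, 0 = missed. */
static const int heartbeat_log[] = { 1, 1, 1, 0, 1, 0, 0, 0, 0 };

int main(void)
{
    int missed = 0;
    size_t i;

    for (i = 0; i < sizeof heartbeat_log / sizeof heartbeat_log[0]; i++) {
        if (heartbeat_log[i]) {
            missed = 0;                          /* primary is alive */
        } else if (++missed == MISSED_LIMIT) {
            printf("Interval %zu: primary silent %d times -- "
                   "standby takes over services\n", i, missed);
            break;
        }
    }
    return 0;
}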


Clustering options available now and on the way were the subject of a previous GCN Lab
tutorial [GCN, March 16, Page 31].


Clustering, always weak in NT and NetWare, will improve in their next releases. NT
5.0's Microsoft Cluster Server, initially code-named Wolfpack, can support a two-server
cluster. NetWare 5's Orion is supposed to cluster 16 servers. On paper, Novell's Orion
looks superior.


A PC server has come a long way from a modified desktop system with a SCSI card and a
network connection. Server vendors have borrowed from mainframes and Unix as they engineer
continuous performance improvements in their hardware.


Ordinary PCs now have more computing power than users need for common office
applications. Servers, on the other hand, still have a lot of room to expand.


Tomorrow's networks will have ever faster, ever more numerous servers. Coupled with
ever-growing network complexity, they will impose heavier burdens on system and network
administrators.


So, despite all the hardware and software advances, networks are still short on the
human and technological management structures that keep things running smoothly at the
enterprise level.

