Scalable NT Servers

Pentium III Xeon multiprocessor servers offer a potent argument for Windows NT scalability, but managers must also consider performance, reliability and serviceability

By John H. Mayer
Special to GCN

It's unanimous. Ask information technology managers looking for new servers to run mission-critical applications what their No. 1 concern is, and their answer invariably is: scalability. Caught between the escalating demands of online transaction processing (OLTP) systems and the skyrocketing size of decision support, data warehousing and other database applications, IT managers want systems that won't leave them high and dry if they underestimate their processing needs.

Over the past few years, IT managers have learned that what works best for large enterprise applications is symmetric multiprocessing systems. Built around high-speed buses, specialized input/output architectures and large memory caches, SMP servers link multiple processors in a single chassis to supply a platform an IT manager is not likely to outgrow. Instead of running a compute-intensive application on one processor, SMP servers break up the task and distribute the workload across two, four or more processors.
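The divide-and-distribute idea can be sketched in miniature with an ordinary thread pool. The sketch below is illustrative only — the worker count and workload are made up, and real SMP gains depend on the hardware and runtime actually executing the pieces in parallel — but it shows the basic pattern of splitting one compute-intensive task across several processors.

```python
from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    # One processor's share of the compute-intensive task.
    return sum(x * x for x in chunk)

def run(data, workers):
    # Break the task up and distribute the pieces across the workers,
    # the way an SMP server spreads a job across its CPUs.
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum, chunks))

data = list(range(10_000))
# The answer is the same whether one, two or four workers share the load.
assert run(data, 4) == sum(x * x for x in data)
```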

The OS rules

Although SMP servers have been used extensively as back-office replacements for aging mainframes or for PC LAN consolidation, performance has been largely dictated by operating system design. Unix systems vendors, working with an operating system that has been fine-tuned over a couple of decades for high-performance applications, have coupled up to 32 processors in a single chassis. IT managers running Microsoft Windows NT, on the other hand, have been largely limited to four-processor configurations, held down by NT's inability to handle more than 4G of main memory and other architectural constraints.

That's about to change, however. Microsoft Windows 2000, due for delivery later this year, and the debut of a new multiprocessing architecture from Intel called Profusion promise to open up Windows NT users to a new level of multiprocessing performance. As the above chart shows, the latest versions of enterprise-level Intel servers, now running Pentium III Xeon processors, promise scalable options for even the most compute-intensive applications.

'We feel we have to dispel a widely accepted myth that NT doesn't scale,' said Tim Golden, director of enterprise server product marketing in the industry standard server division at Compaq Computer Corp. 'But until now we've never had the architecture under NT that would allow it to scale.'

This doesn't mean that every eight-way server running Windows NT will scale to the same degree. When looking at any multiprocessor system, it's important to first determine the degree of scalability you're buying. Once the operating system constraints are out of the way, the incremental performance gained with each additional CPU is largely dictated by an individual vendor's ability to optimize its hardware architecture.

Several features play into the ability of a system to offer fairly linear cost-per-performance benefits, that is, a proportional increase in computing power for each additional processor you pay for. A server can have a bevy of the fastest processors in the world, but if it doesn't have an I/O subsystem capable of feeding information to those processors fast enough, performance will suffer.
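How linear that payoff stays also depends on how much of the workload can run in parallel at all. Amdahl's law — a standard back-of-envelope check, not one the vendors quoted here cite — shows why eight processors rarely deliver eight times the throughput:

```python
def speedup(processors, parallel_fraction):
    # Amdahl's law: the serial fraction of the work caps the gain
    # from adding CPUs, no matter how many you pay for.
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / processors)

# With 95 percent of the work parallelizable, 8 CPUs deliver well under 8x.
print(round(speedup(8, 0.95), 2))   # 5.93
```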

The application you're going to run on the system also matters. Systems that feature a limited number of connections to disk, for example, may present a significant bottleneck in applications, such as OLTP, that require large numbers of simultaneous disk accesses.

System bus width is also important. SMP systems generate a tremendous amount of CPU-to-memory traffic, and although the data-shared design of an SMP system makes it easier to program because the system can be treated as a single processor, it also forces each processor's memory cache to watch all the traffic flying across the bus.

As cache can

On every SMP system, a large percentage of bus transactions is devoted to copying data from cache to cache to keep each cache up-to-date to maintain data integrity. Eventually, as the number of processors increases, the system bus becomes saturated. Most Windows NT multiprocessor server vendors are offering a 64-bit PCI bus to accommodate that traffic.
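That snooping behavior can be caricatured in a few lines. The model below is a deliberately simplified invalidation protocol — not any real chip set's coherence logic — but it shows why cross-cache traffic piles up on the shared bus as processors read and write the same data.

```python
class SnoopyBus:
    # Toy model of a snooping bus: every cache watches writes from the others.
    def __init__(self, n_cpus):
        self.caches = [dict() for _ in range(n_cpus)]
        self.bus_transactions = 0

    def read(self, cpu, addr, memory):
        cache = self.caches[cpu]
        if addr not in cache:
            self.bus_transactions += 1        # miss: fetch over the shared bus
            cache[addr] = memory[addr]
        return cache[addr]

    def write(self, cpu, addr, value, memory):
        memory[addr] = value
        self.caches[cpu][addr] = value
        self.bus_transactions += 1            # broadcast so others can snoop
        for other, cache in enumerate(self.caches):
            if other != cpu:
                cache.pop(addr, None)         # invalidate stale copies

memory = {0: 7}
bus = SnoopyBus(n_cpus=4)
assert bus.read(0, 0, memory) == 7
bus.write(1, 0, 9, memory)                    # CPU 1 updates the line...
assert bus.read(0, 0, memory) == 9            # ...so CPU 0 must refetch it
```

Every write forces a bus transaction and a refetch by any other CPU holding the line, which is why bus bandwidth saturates as processors are added.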

Main memory support is another key performance consideration. Given Windows NT 4.0's inability to support memory configurations beyond 4G, all vendors support the maximum configuration. Moreover, virtually all Intel Xeon servers today use Intel's 450NX chip set, which only supports the use of slower EDO memory chips.

The introduction of the Profusion chip set in next-generation systems will allow vendors to move to faster 133-MHz synchronous dynamic RAM. And systems designed to support Windows 2000 will support memory configurations up to 8G.

Scalability and performance are important criteria, but there are more factors to consider in a server of this class. Reliability and availability features also set enterprise servers apart from low-end systems. Servers running mission-critical applications must stay up around the clock. Most users are willing to pay for high availability features because downtime leading to lost productivity and lost sales is potentially much more expensive.

Up, up and away

Many features will help keep a server running, and some are fairly simple and straightforward. Features such as hot-pluggable power supplies, disk drives and fans help ensure that when a single system component goes down, all the system's users don't go down with it.

Recently, vendors have begun to extend that concept to other pieces of system hardware. New features such as PCI hot-plug technology offer the user the ability to hot-plug a PCI peripheral device, such as a failed Ethernet card or array controller, without having to bring the server down.

'Today in an NT environment you have to power down the server, replace the failed component, bring the failed server back up and reattach all your users,' Golden said. 'With PCI, hot-plug power is turned off only to the specific PCI slot affected, and everything else keeps right on running.'

To help find failed components before it is too late, vendors now offer extensive analysis on components inside the server to determine if they have a propensity to fail. Such monitoring systems can alert a customer that a disk drive, processor, power supply or fan is acting peculiar and may need replacement.

The final element to consider when it comes to system availability is clustering options. A high-availability server cluster links multiple systems so that when one fails, another acts as a backup so that users can continue working. The cluster requires sophisticated software to detect a failure, initiate failover or transfer procedures from one server to another and make sure data is constantly mirrored across all systems.
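At its core, the failure-detection half of that clustering software is a heartbeat check. The sketch below is a bare-bones illustration — the node names and timeout are made up, and real cluster software layers failover and mirroring on top — of how a two-node cluster might decide which server is active:

```python
import time

class Node:
    def __init__(self, name):
        self.name = name
        self.last_heartbeat = time.monotonic()

    def beat(self):
        # A healthy node refreshes its heartbeat on a regular schedule.
        self.last_heartbeat = time.monotonic()

def active_node(primary, standby, timeout=5.0, now=None):
    # Failover decision: if the primary's heartbeat has gone stale,
    # the standby takes over so users can keep working.
    now = time.monotonic() if now is None else now
    if now - primary.last_heartbeat > timeout:
        return standby
    return primary

primary, standby = Node("nt-a"), Node("nt-b")
assert active_node(primary, standby) is primary          # primary is healthy
stale = primary.last_heartbeat + 10.0                    # simulate silence
assert active_node(primary, standby, now=stale) is standby
```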

NT clustering is limited to two systems, based on Version 1 of Microsoft's Wolfpack. Most vendors admit that Microsoft still has a way to go to compare to the extensive capabilities offered by Unix vendors.

But recently Microsoft Corp. and IBM Corp. released software that extends the capability of Microsoft Cluster Server and improves failover procedures. And many leading server vendors offer new programs designed to guarantee 99.9 percent availability of servers running Windows NT.
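It's worth doing the arithmetic on what a 99.9 percent guarantee actually allows:

```python
HOURS_PER_YEAR = 24 * 365

def downtime_hours(availability):
    # Hours per year a server may be down at a given availability level.
    return HOURS_PER_YEAR * (1.0 - availability)

# "Three nines" still permits almost nine hours of downtime a year.
print(round(downtime_hours(0.999), 2))  # 8.76
```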

Eventually, Version 2 of Wolfpack, due out after Windows 2000, will support clusters of up to 16 systems and, Microsoft officials said, will greatly simplify installation and setup.

Easy to fix

Serviceability is yet another feature worth reviewing when shopping for a scalable NT system. Although it's often overlooked, the ability to easily repair and work on a system can have a dramatic impact on downtime.

For example, if you plan to take advantage of hot-pluggable components, take a look at the server chassis to ensure it is easy to get to. Ideally, the chassis should be relatively simple to disassemble.

'It shouldn't require a 15-piece toolkit and three hours to replace a fan,' said Nik Simpson, server product marketing manager for Intergraph Corp.

Vendors have come a long way on this score. Many now use chassis designs that require few or no tools to get inside. And some are planning to go one step further with their next designs. In the server market today, the pace of system upgrades has accelerated so fast that users are often left behind.

'We're having a hard time keeping up ourselves,' Compaq's Golden said. 'Look at the Xeon processors, for example. The 400-MHz processor has been out less than a year and it's spun three times, and another version is about to come out shortly. No customer has the time to put up with that, especially in a Y2K year.'

Some vendors, such as Compaq, plan to make their server chassis designs more modular. Compaq's next eight-processor design, for instance, will house key architectural components, such as the processor boards, I/O subsystem, power supplies and disk storage, in different drawers. So if a faster processor comes along, a user will be able to simply swap the processor drawer to upgrade the system.

Additionally, more users are opting for external disk storage over internal drives to increase system flexibility and simplify upgrades. As the chart illustrates, server vendors generally offer a variety of rackmountable disk array options, including Fibre Channel systems for high-speed access.

The final issue is value. Prices for NT SMP servers vary widely, from a few thousand dollars for a basic system to tens of thousands of dollars for a fully loaded four-way system.

The eight-way systems initially will come at a premium, and low-end two-processor systems are already feeling the competitive heat of the commodity-priced desktop PC market. Moreover, add-ons such as large disk arrays can add substantially to the final price.

When considering these servers, decisions don't have to be final. Buy what you need, but remember: These systems are designed to be scalable.
John H. Mayer writes about information technology from Belmont, Mass.

