Cisco stackable switches can pile up savings


By Michael Cheek and John Breeden II

GCN Staff





Cisco's GigaStack GBIC handled small packets faster and large packets slower than an industry-standard fiber GBIC.


Fast-growing networks have made stackable switches a necessity, and now Cisco Systems Inc. has come up with an inexpensive way to stack them.

The company's Catalyst 3500XL family of 10/100-Mbps switches can use Cisco's own copper gigabit interface converter (GBIC) between 12- or 24-port units at about half the price of industry-standard fiber converters.



In GCN Lab tests, Cisco's GigaStack copper GBIC consistently handled slightly more common network traffic than an industry-standard 1000Base-SX optical-fiber GBIC, which the Catalyst switches also support. A 1000Base-SX connector transmits short-wave laser pulses over multimode optical fiber.

On a highly congested network, a few more packets got through Cisco's proprietary GigaStack GBIC. We found only a 1 percent difference between the two GBICs. But consider this: That 1 percent translates to about 70,000 64-byte packets over 60 seconds when the stack's link pushes data at 100 percent capacity; those packets would otherwise be lost.





Box Score

Catalyst 3500XL

with GigaStack

12- or 24-port 10/100-Mbps stackable switch with gigabit interface converters



Cisco Systems Inc.;

San Jose, Calif.;

tel. 800-553-6387

www.cisco.com

Prices: Catalyst 3524XL, $3,995; Catalyst 3512XL, $2,995 (both less a $250 rebate); GigaStack GBIC with half-meter cable, $250



+ GigaStack is cheaper than a fiber GBIC and performs slightly better


+ Easy stack manageability via single-IP-address Web console


+ Excellent standalone performance



The lab used the SmartBits 2000 communications tester from Netcom Systems Inc. of Calabasas, Calif., to measure performance of the Cisco switches. SmartBits 2000 calculates throughput, latency and packet loss of the Layer 2 switches at multiple stress levels on a virtual network.

Cisco provided GCN with 12- and 24-port models of the Catalyst 3500XL, as well as GigaStack and fiber links. We tested different configurations and found all performed at expected capacities.


Catalyst 3500XL switches have two slots that can accommodate GigaStack or fiber GBIC modules. Up to eight Catalyst 3500XL switches can be linked via GigaStack.

The switches automatically detect one another, simplifying network expansion no matter which GBIC type you choose. This made testing easy, and it would be a distinct advantage for administrators who need to add network switch capacity quickly.


The switches were bundled with Web tools for managing a stack. We had to use Cisco's powerful but complex character-based interface only once to set an IP address on one switch. Then, with a Web browser, we connected to that switch and could manage all other Catalyst 3500XL switches in the stack. Up to eight of the switches are manageable via a single IP address.

Each GigaStack module is priced at $250, with a half-meter-long cable. That means it would cost $500 to connect two Catalyst switches. Fiber GBICs typically cost about $500 each, and the fiber cabling is sold separately. So it would cost more than $1,000 to link two switches with fiber.
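As a rough sketch of that arithmetic, using the list prices above and an assumed placeholder price for the separately sold fiber patch cable:

    # Back-of-the-envelope cost of linking two Catalyst 3500XL switches.
    # GBIC prices come from the list prices cited above; the fiber patch
    # cable price is an assumed placeholder, since cabling is sold separately.
    GIGASTACK_GBIC = 250   # dollars, half-meter cable included
    FIBER_GBIC = 500       # dollars, typical price, no cable
    FIBER_CABLE = 75       # dollars, hypothetical multimode patch cable

    gigastack_link = 2 * GIGASTACK_GBIC          # one GBIC in each switch
    fiber_link = 2 * FIBER_GBIC + FIBER_CABLE    # two GBICs plus cabling

    print(f"GigaStack link: ${gigastack_link}")  # $500
    print(f"Fiber link:     ${fiber_link}")      # more than $1,000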

GigaStack cables, however, are limited to 1-meter lengths, a disadvantage compared with higher grades of fiber, which can extend more than 100 kilometers. In a network that stretches across town or even bridges two floors, fiber would be the better medium. You can realize the economies of GigaStack connections only within a wiring closet or rack.

We ran tests at 100 percent capacity, sending signals from 10 Fast Ethernet ports on a Catalyst 3524XL to 10 Fast Ethernet ports on a Catalyst 3512XL. The GigaStack link reached 99 percent throughput for 64-byte packets; the fiber GBIC achieved 98 percent.

A 64-byte packet is the smallest possible packet under Ethernet and Fast Ethernet standards, and a 1,518-byte packet is the largest. The lab tested seven sizes; both GBICs sent 100 percent of packets larger than 64 bytes.

Smaller packets tend to tax a network more than larger ones. Because they are more numerous, they can overburden internal buffers.
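To illustrate, here is a rough calculation of the maximum frame rates on a single 100-Mbps port, assuming standard Ethernet framing overhead of an 8-byte preamble and a 12-byte interframe gap:

    # Maximum frames per second on one 100-Mbps Fast Ethernet port for each
    # tested packet size. Every frame also consumes an 8-byte preamble and
    # a 12-byte interframe gap on the wire.
    LINK_BPS = 100_000_000
    OVERHEAD_BYTES = 8 + 12  # preamble + interframe gap

    def max_frames_per_second(frame_bytes: int) -> float:
        bits_on_wire = (frame_bytes + OVERHEAD_BYTES) * 8
        return LINK_BPS / bits_on_wire

    for size in (64, 128, 256, 512, 1024, 1280, 1518):
        print(f"{size:>5}-byte frames: {max_frames_per_second(size):>10,.0f} per second")

    # 64-byte frames arrive at roughly 148,800 per second, versus about
    # 8,100 per second for 1,518-byte frames -- more than 18 times as many
    # packets for the switch's buffers and forwarding logic to absorb.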







How the GCN Lab tested


  • The lab used a SmartBits 2000 test unit to measure the Layer 2 communications performance of Cisco Catalyst switches.

  • All switches and SmartBits unit ports were set to 100 Mbps and full duplex.

  • The lab configured SmartBits 2000's Version 6.60 firmware with 24 ML-7710 cards, which use RJ-45 plugs. Standard 2-foot Category 5 cable connected the SmartBits unit to the switches.

  • Netcom Systems' SmartApplications 2.22 software controlled the SmartBits 2000 unit during the tests.

  • The test covered seven Ethernet User Datagram Protocol packet sizes: 64, 128, 256, 512, 1,024, 1,280 and 1,518 bytes. The UDP packet type is specified by the Network Working Group's Request for Comments 1242, the standards document for measuring performance of Ethernet and Fast Ethernet devices.

  • All tests ran for 60 seconds in three configurations: standalone, 10-to-10 port and 12-to-12 port.

  • The standalone configuration examined the performance of the 24-port Catalyst 3524XL switch.


  • The 10-to-10 tests used the same Catalyst 3524XL plus the 12-port Catalyst 3512XL. Using either GigaStack or fiber GBICs, the lab sent signals from the 3524 to the 3512. Only 10 ports were in use because 10 ports at their theoretical 100-Mbps limit would fill the gigabit link to 100 percent of capacity.

  • The 12-to-12 configuration was similar, except that the lab staff deliberately overloaded the GBICs with two additional 100-Mbps data streams.


When we hammered the gigabit links by overloading them at 120 percent of capacity, both GBICs let about 83 percent of the packets through. The tests indicated about 16.7 percent of all packets were lost, no matter what the size, implying that the stack link continued to forward at essentially its full rated capacity despite the overload.
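A quick sketch of that arithmetic, assuming the link can forward no more than 100 percent of its capacity:

    # Expected loss rate when a link is offered 120 percent of its capacity
    # but can forward no more than 100 percent of it.
    offered = 1.20     # offered load, as a fraction of link capacity
    delivered = 1.00   # the most the gigabit link can carry

    loss_rate = (offered - delivered) / offered
    print(f"Expected packet loss: {loss_rate:.1%}")  # 16.7%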

In testing latency, we noticed slight performance differences between the GigaStack and the fiber GBIC. Latency measures how long a switch takes to forward a packet from an incoming port to an outgoing port. Even a switch with 100 percent throughput can slow overall network performance if high latency makes it too slow at moving packets to the right destinations.

At packet sizes from 64 bytes to 256 bytes, the GigaStack was slightly slower than fiber. Between 512 bytes and 1,518 bytes, it performed slightly faster. Neither GBIC excelled overall at low latency.

In a standalone configuration doing store-and-forward transmission, the Catalyst 3524XL had overall latency of about 11 microseconds for all packet sizes, which is essentially the minimum possible under ideal conditions. Because packets in a stacked configuration must travel through two switches plus the GBIC, we found the overall latency of both stacked switches acceptable.

To and fro

Switches send packets by one of two methods: store-and-forward or cut-through. The store-and-forward method, which Cisco's switches use, waits until an entire packet has been received before it sends anything on. Cut-through passes signals on immediately, not waiting for the end of the packet to arrive.
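The trade-off shows up mainly as serialization delay: a store-and-forward switch must buffer the whole frame before forwarding it, while a cut-through switch can begin forwarding once it has read the destination address. A simplified model, ignoring the lookup and queuing time that real latency figures include:

    # Simplified forwarding-delay model for the two switching methods on a
    # gigabit stack link. Real-world latency also includes address lookup
    # and queuing inside the switch, which this sketch ignores.
    LINK_BPS = 1_000_000_000  # gigabit link
    HEADER_BYTES = 14         # destination MAC, source MAC and EtherType

    def store_and_forward_delay_us(frame_bytes: int) -> float:
        # Must receive the entire frame before transmitting it onward.
        return frame_bytes * 8 / LINK_BPS * 1e6

    def cut_through_delay_us(frame_bytes: int) -> float:
        # Can begin transmitting after reading the Ethernet header.
        return HEADER_BYTES * 8 / LINK_BPS * 1e6

    for size in (64, 1518):
        print(f"{size:>5}-byte frame: store-and-forward "
              f"{store_and_forward_delay_us(size):6.2f} microseconds, "
              f"cut-through {cut_through_delay_us(size):5.2f} microseconds")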

Cisco's eight-port GigaStack switch, the Catalyst 3508XL, can interconnect eight Catalyst 3500XLs using Cisco GBICs. In a point-to-point configuration, the 3508XL has the advantage of a 5-Gbps connection rather than 1 Gbps.

In standalone tests of the Catalyst 3524XL, we slammed all 24 ports with traffic and found 100 percent throughput without packet loss.

The Catalyst switches are strong performers and easy to manage. The low cost of the GigaStack modules makes Cisco's stackable switches attractive for rapid on-site expansion with minimal human intervention.
