When NICs get together in traffic, they can act like a virtual LAN

One of the ways server network interface cards maximize throughput is with a software feature called load balancing.

This algorithm-based technique optimizes the throughput of inbound and outbound traffic between a server and the network infrastructure by grouping the server's multiple NICs. Each NIC in the group keeps its own Media Access Control address, but the group shares a single IP address. That lets the group act as one virtual NIC, which can in turn become part of a virtual LAN, or VLAN.

For outgoing traffic, the algorithm typically distributes data from the server evenly across the group of NICs using parameters such as the destination address. When a client connects to the server, the algorithm determines which NIC will carry that client's traffic for the duration of the connection. If a NIC in the group sits idle, the algorithm steers new connections to it so that no adapter goes unused.
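The outbound selection described above can be sketched as a simple hash of the destination address. This is an illustrative model only; the function and variable names (pick_nic, team) are assumptions, not taken from any real teaming driver.

```python
def pick_nic(dest_mac: str, nics: list[str]) -> str:
    """Map a destination address to one NIC for the life of the connection."""
    # A stable hash means the same destination always maps to the same NIC,
    # so frames for one connection are never reordered across adapters.
    return nics[hash(dest_mac) % len(nics)]

team = ["nic0", "nic1", "nic2"]
choice = pick_nic("00:1a:2b:3c:4d:5e", team)
# The same destination yields the same NIC on every call.
assert choice == pick_nic("00:1a:2b:3c:4d:5e", team)
```

Real implementations hash on richer parameters (IP addresses, ports) and rebalance idle adapters, but the stable-mapping idea is the same.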

The same technique can also improve server reliability and availability when connections fail. In a typical load-balancing configuration, one NIC in the group is designated the primary connection, and the others serve as secondaries.

If a cable becomes disconnected from one of the secondary NICs, the system automatically reassigns clients to the remaining NICs. If the primary NIC connection fails, the load-balancing software designates one of the secondary NICs to assume the responsibilities of the original primary NIC.
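The failover behavior just described can be modeled in a few lines. This is a hedged sketch of the logic, not a real teaming driver's API; the class and method names (NicTeam, link_down) are hypothetical.

```python
class NicTeam:
    """Toy model of a primary/secondary NIC group with automatic failover."""

    def __init__(self, primary: str, secondaries: list[str]):
        self.primary = primary
        self.active = [primary] + list(secondaries)

    def link_down(self, nic: str) -> None:
        # Remove the failed adapter; its clients move to the survivors.
        self.active.remove(nic)
        if nic == self.primary and self.active:
            # Promote a secondary to assume the primary's responsibilities.
            self.primary = self.active[0]

team = NicTeam("nic0", ["nic1", "nic2"])
team.link_down("nic0")       # primary's cable is pulled
assert team.primary == "nic1"
```

Losing a secondary simply shrinks the active list; losing the primary additionally triggers the promotion step.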

For traffic coming from the clients to the server, the load-balancing software typically uses a round-robin method to evenly distribute the load among the various server NICs in the group.

Each new connection between the server and client is assigned to a new server NIC within the group by means of a connection-based algorithm, with each server NIC being used on a rotating basis.

If a single client has multiple connections to the server, inbound traffic from each connection will shift in a round-robin fashion to the next NIC in the load-balancing group.
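The round-robin assignment of inbound connections can be sketched with a rotating iterator. Again, the names here (RoundRobinTeam, assign) are illustrative assumptions, not a vendor API.

```python
from itertools import cycle

class RoundRobinTeam:
    """Hand each new connection to the next NIC in the group, wrapping around."""

    def __init__(self, nics: list[str]):
        self._next = cycle(nics)  # endless rotation over the group

    def assign(self, connection_id: str) -> str:
        return next(self._next)

team = RoundRobinTeam(["nic0", "nic1", "nic2"])
assignments = [team.assign(c) for c in ("c1", "c2", "c3", "c4")]
# The fourth connection wraps back to the first NIC in the group.
assert assignments == ["nic0", "nic1", "nic2", "nic0"]
```

Note that, as the article says, this holds even when several connections come from one client: each connection, not each client, advances the rotation.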

Load balancing has proved particularly important as network administrators have upgraded their desktop PCs to a switched 100-Mbps infrastructure. While the additional bandwidth has helped ease the flow of data from the PCs, it has also created new bottlenecks in the switch-to-server link by increasing demands on the server.

By spreading the load across server NICs, load balancing can alleviate this bottleneck and improve server response time, a key factor in end-user satisfaction with network services.

The more the merrier

Server-to-network throughput increases with each additional load-balancing NIC, so load balancing offers network administrators excellent flexibility. They can implement the technology with as few as two NICs and, as network traffic increases at the server, add more server NICs to the load-balancing group to improve bandwidth.

There is no industry standard for load-balancing technology, but an effort is under way to create one. A number of vendors have joined to launch the Institute of Electrical and Electronics Engineers' 802.3ad Link Aggregation Task Force with the intent of defining a standard approach to the technology.

John H. Mayer
