NetworkTigers discusses the role of network load balancers (NLB) in network performance.
Whether your network’s traffic ebbs and flows or consistently pushes high numbers, ensuring that your servers are up for the task and won’t become overwhelmed is a crucial resource allocation challenge. Load balancers are designed specifically to address this concern.
What do load balancers do?
Network load balancers distribute network traffic across a group of backend servers, sometimes called a server farm or server pool. In doing so, they prevent any one server from getting bogged down with activity, such as traffic spikes that can cause network disruption or slowdown. If a server goes offline or is otherwise unable to perform, the load balancer will allocate the traffic that would have otherwise gone to it to the other servers within the pool.
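The behavior above can be sketched in a few lines of Python. This is an illustrative toy, not a real load balancer: the server names and the simple up/down health flags are assumptions for the example, which shows how traffic destined for an offline server falls to the remaining servers in the pool.

```python
# Minimal sketch (illustrative): requests are spread across a pool of
# backend servers, and a server that goes offline is skipped, so its
# share of traffic is absorbed by the remaining servers.
from itertools import cycle

# Hypothetical pool: server name -> is it healthy?
servers = {"backend-1": True, "backend-2": True, "backend-3": True}

def route(requests):
    """Assign each request to the next healthy server in the pool."""
    healthy = cycle([s for s, up in servers.items() if up])
    return [next(healthy) for _ in requests]

servers["backend-2"] = False  # simulate a server going offline
assignments = route(range(4))
# backend-2 receives nothing; backend-1 and backend-3 pick up its traffic
```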
Load balancer types
Hardware Load Balancers
Hardware load balancers are physical devices. While they can operate under demanding conditions and distribute large traffic volumes as needed, they’re expensive and lack flexibility. They also require more hands-on maintenance by an experienced IT professional, making them costly to maintain in the long run.
Hardware load balancers are best for scenarios in which heavy traffic is encountered regularly and consistently, and the configuration won't need frequent modification or reworking.
Software Load Balancers
Software load balancers, whether open-source or commercial, are more affordable than hardware options and offer more flexibility when scaling your network. They can be deployed more easily across a broader range of applications and hosted in the cloud to keep things light and agile.
Software load balancing solutions, however, don’t perform as well as hardware options under extreme demands.
Load balancer algorithms
Configuring a load balancer for your network requires careful consideration of the algorithm it employs. These algorithms, of which there are many, determine how the load balancer moves traffic across the server pool. There are two main categories of load balancer algorithms:
- Dynamic algorithms inspect the conditions currently being experienced by each server, determine where traffic should go based on said inspection, and distribute activity accordingly. They work intelligently to make decisions that best facilitate network performance.
- Static algorithms do not adjust traffic allocation, simply sending traffic across the server farm based on predetermined rules.
The most common algorithms for network load balancers
The round-robin method
This static algorithm rotates server prioritization by sending traffic to the first appliance available and then moving that server to the back of the line. This process forms a sequential loop. A failed server is taken out of rotation. Round-robin works best when the pool consists of servers with identical capabilities and configurations.
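The "back of the line" rotation can be sketched with a double-ended queue. The server names here are hypothetical; the example only demonstrates the sequential loop and the removal of a failed server from rotation.

```python
# Sketch of the round-robin rotation: the server at the front of the line
# takes the next request, then moves to the back, forming a loop.
from collections import deque

pool = deque(["server-a", "server-b", "server-c"])  # hypothetical pool

def next_server(pool):
    server = pool[0]
    pool.rotate(-1)  # move the chosen server to the back of the line
    return server

order = [next_server(pool) for _ in range(4)]
# → ["server-a", "server-b", "server-c", "server-a"]

pool.remove("server-b")  # a failed server is taken out of rotation
```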
The least connection method
This dynamic algorithm directs traffic to the server in the pool with the least active connections. This algorithm best serves those who employ many persistent connections that are spread unevenly between servers.
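A minimal sketch of the least-connection pick, assuming a hypothetical table of active connection counts per server: each new connection goes to whichever server currently holds the fewest, and the count is updated so the next pick reflects it.

```python
# Illustrative least-connection method: route each new connection to the
# server with the fewest active connections right now.
active = {"server-a": 12, "server-b": 3, "server-c": 7}  # assumed counts

def pick(active):
    target = min(active, key=active.get)
    active[target] += 1  # the new connection is now held by that server
    return target

first = pick(active)   # server-b has the fewest (3), so it is chosen
second = pick(active)  # server-b now holds 4, still the fewest
```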
The least response time method
This dynamic algorithm sends traffic to the server with the fewest active connections and the lowest response time.
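One simple way to sketch this (an assumption for illustration, not a vendor's exact formula) is to sort on the pair of metrics, so a tie on active connections is broken by the measured response time:

```python
# Illustrative least response time method: prefer the fewest active
# connections, then the lowest measured response time.
stats = {  # hypothetical per-server metrics
    "server-a": {"connections": 3, "response_ms": 120},
    "server-b": {"connections": 3, "response_ms": 45},
    "server-c": {"connections": 8, "response_ms": 30},
}

def pick(stats):
    return min(stats, key=lambda s: (stats[s]["connections"],
                                     stats[s]["response_ms"]))

# server-a and server-b tie on connections; server-b wins on response time
```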
The IP hash method
This static algorithm directs traffic to specific servers based on the IP address of the client.
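A sketch of the IP hash idea, with an assumed pool and SHA-256 standing in for whatever hash a given product uses: the client's IP address is hashed, and the hash deterministically selects a server, so the same client always lands on the same backend.

```python
# Illustrative IP hash method: hash the client IP and use it to pick a
# fixed server, giving each client a consistent backend.
import hashlib

servers = ["server-a", "server-b", "server-c"]  # hypothetical pool

def pick(client_ip):
    digest = hashlib.sha256(client_ip.encode()).hexdigest()
    return servers[int(digest, 16) % len(servers)]

# The same IP always maps to the same server
assert pick("203.0.113.7") == pick("203.0.113.7")
```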
Load balancer pros
Less downtime
A server crash can spell doom, especially when it results from what would otherwise be a beneficial spike in network activity. Many small businesses have experienced a pause in their ability to serve their clients upon going viral or from simply being unprepared for a sudden influx of interest in their product or service.
Because a network load balancer will direct traffic in such a way as to prevent failure, operational downtime can be minimized and your network can roll with the punches. Load balancers prevent systems from having all their eggs in one basket, thus reducing the risk that a single server dropping out of commission paralyzes your entire network.
Easier scalability
Load balancers allow you to add more servers to your farm as needed to keep pace with expansion. The deeper your server pool, the thinner you can spread your traffic, thus keeping demands at a minimum as you grow.
Better network performance
The resource allocation and optimization that load balancers offer keep your network moving steadily. Bottlenecks can be eliminated and applications, platforms, and websites can be used and visited without hiccups, as the load balancer prevents traffic jams.
Disaster recovery and network reliability
If a server crashes or otherwise negatively impacts network traffic, your load balancer can take it out of the equation and compensate for its absence by using the resources available from the other servers in the pool.
Cyberattack protection
Cybercriminals have become ever more adept at causing network disruption via distributed denial-of-service (DDoS) attacks. These attacks push tremendous amounts of junk traffic to an organization's network to overwhelm it and prevent it from working. Launched from botnets that can ensnare thousands of malware-infected machines, DDoS attacks are as common as they are effective and can be mounted by even the most inexperienced criminals.
Load balancers can absorb the impact of a DDoS attack by efficiently putting your server farm to work. Traffic will be spread out, thus softening the blow of the attack.
However, the server pool available must still be deep enough to take the hit. Depending on the resources at your disposal and the size of the attack, damage can still be done.
Load balancer cons
Configuration can be a challenge
Ensuring that your load balancer is set up to meet your needs requires expert knowledge of your network’s traffic demands and the resources available to meet those demands. An improperly configured load balancer will do a poor job at traffic allocation at best and work against your network’s efficiency at worst.
Additionally, load balancer configurations need to be revisited when changes are made to the network, which can further complicate certain modifications, upgrades, or expansions.
Load balancers can be costly
Hardware load balancers are expensive, and the maintenance and support costs required to keep them working at peak efficiency add up over time.
It can be a new failure point
A network load balancer’s job is to prevent your server from becoming a single point of failure that could destabilize or destroy your operations if affected by an attack or malfunction. However, your load balancer can itself be a single point of failure. No matter the breadth of your server pool, an offline load balancer leaves you unable to reach it at all.
Redundancy is key. A second load balancer is needed to monitor the health of the first and kick into action should it fail. This redundancy adds resilience, but it also roughly doubles the already high price of load balancer implementation.