An Introduction to Load Balancing

Traffic Means Business

You want your company to be popular. You want to be #trending. Today, it’s a part of doing business.

However, trending means traffic and traffic means a heavy load on your servers.

  1. Can your servers—your site—handle viral marketing campaigns and social media campaigns where incoming end users can spike dramatically?
  2. Can you host a live stream or media event without having to worry about slowdowns or (we shudder even thinking about it) a total systems failure?

One way you can make sure that you’re ready for whatever comes your way (well, your site’s way, anyway) is to have a load balancer in place. A load balancer uses one of a number of algorithms to evenly distribute your end users across multiple instances—across multiple servers—of your website, ensuring consistent performance and preventing crashes. Also acting as an automatic failover device, the load balancer is an essential component of your infrastructure.

Why Is Load Balancing Important to You?

As of Friday, April 12, 2019, at 12:09 p.m. (thanks internetlivestats.com) there are 4.2 billion internet users, worldwide. Since January first of this year, they’ve sent 74 trillion tweets, 25.5 quadrillion emails, and have made 646.8 trillion Google searches.

Oh, and there are 2.5 billion active Facebook users as of 12:18 p.m.

Is your website ready for the potential these numbers represent? With so many internet users and the ever-rising popularity (and ubiquity) of social media, a small nudge in the right direction could have a significant impact on your site traffic—and with an increase in traffic comes an increase in risk to your ecosystem.

You need a way to make sure everyone who visits your site does so in an orderly way, one that doesn’t risk the performance or integrity of your servers. That’s what a load balancer does: it acts like an attended parking lot.

Remember the last time you went to an event where you had to pay for parking? There was, most likely, a single entrance, a person taking money, and a person directing cars into parking spots one by one, row by row. A load balancer does much the same thing—the instances of your website (across multiple servers) are the parking lot, your end users are the cars, and your load balancer is the attendant. Take a minute and imagine what it would look like if the parking lot at the event had several entrances and no attendants. It would be complete chaos. (I can see it now: fistfights, fender benders, and an eventual, full-scale riot. The police would come, the event would get shut down, and no one would get to see whatever it is they were there to see in the first place.)

Okay, so maybe that’s a bit of a stretch, but without a load balancer, a spike in traffic can bring your website to a screeching halt. A screeching halt is bad for business. For every minute of IT downtime—website, servers, database, and the like—companies lose an average of $5,600 (thanks Gartner, Inc.). That’s somewhere between $140,000 and $300,000 an hour depending on the size and model of your company. The modest investment it takes to put a load balancing solution in place pales in comparison to the losses your enterprise could take if your server(s) crash.

Your Company Will Benefit

According to the Aberdeen Group, the average business will experience 14.1 hours of IT downtime annually, and those 14.1 hours translate into $1.55 million in lost revenue. Revenue loss only increases as your company’s reliance on IT increases. For example, Dun & Bradstreet estimates $6.4 million in losses per hour for the average online brokerage company.

Finally, if you consider that 81% of companies report that they can only shoulder 8.76 hours of downtime annually (this one’s from Information Technology and Intelligence Corp), it becomes abundantly clear how important uptime is to the overall health of your business and the businesses around you.

Regardless of the size of your enterprise, a load balancing solution will pay for itself.

Even a single averted hour of downtime can be the difference between a good year and a bad one, considering that small businesses average only $390,000 in revenue a year (according to the U.S. Census Bureau’s 2014 Survey of Entrepreneurs).

In 2016, Medium put together a comprehensive report on eCommerce. The report made plain the impact a website outage (or even a slowdown) has on revenue. They even put the top 50 eCommerce websites (Ikea, Macy’s, Nike, etc.) through their paces, measuring connectivity around the clock for a week straight. Given that eCommerce company websites, as Medium puts it, “…are not only an important source of information but the source of income for the companies themselves…” these numbers are pretty drastic. And as connectivity, speed, and performance become increasingly integral to all enterprises, crashing under a heavy load is simply not good for business.

Here’s the skinny, according to Medium.

  • A whopping 73% of mobile internet users report coming across websites that were simply too slow to load, while 38% reported a 404.
  • Is your page not loading? If so, 90% of users will (if it’s an option) go to a competitor.
  • On average (over the 7 days Medium measured), uptime amongst the top 50 was only 99.03% (two 9s), below the industry’s recognized standard of 99.9% (three 9s) and well below its gold standard of 99.999% (five 9s); the quick calculation after this list shows what those figures mean in downtime per year.
  • Short, but frequent, outages—not prolonged downtime—were most common amongst the top 50 sites.
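Those percentages are easier to grasp once they’re converted into downtime. Here’s a quick back-of-the-envelope sketch in Python (the availability figures are the ones mentioned above; nothing else is assumed):

# Rough conversion of availability percentages into annual downtime.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes

for availability in (99.03, 99.9, 99.999):
    downtime_min = MINUTES_PER_YEAR * (1 - availability / 100)
    print(f"{availability}% uptime = about {downtime_min:,.0f} minutes of downtime per year")

# 99.03%  -> about 5,098 minutes (roughly 85 hours)
# 99.9%   -> about 526 minutes (roughly 8.8 hours)
# 99.999% -> about 5 minutes

Notice that 99.9% uptime works out to roughly 8.76 hours of downtime a year, essentially the same ceiling most companies report being able to shoulder.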

Obviously these numbers—both revenue earned and revenue lost as a result of downtime—are going to change depending on the size, shape, and model of your company. However, one thing is for sure: your business is probably online, which means you have a server, and any time that server goes down, you’re losing money. You don’t want to lose money.

How a Load Balancer Works

Okay, so you definitely want a load balancer. But even if you’re not designing, buying, and maintaining your own hardware and software, it’s a good idea to know how your hosting service implements the technology. Why? So you can stay agile.

In most cases, you can work with your host to make changes (sometimes big, sometimes small) to your IT infrastructure to better suit your unique needs. Typically, hosts that provide load balancing will have options that you can choose from.

These options primarily fall into two categories:

  1. Algorithms and methods
  2. Hosting dedication

Algorithms & Methods

Load balancing works by employing an algorithm that determines how site traffic is distributed between servers. The nine algorithms and methods below represent the most common ways load balancing is done.

1. The Round Robin Method

The round robin method is perhaps the least complex of the balancing methods. Traffic is evenly distributed by simply forwarding requests to each server in the infrastructure in turn, one by one. When the algorithm has made it through the list of instances/servers in its entirety, it goes back to the top of the list and begins again. For example, in a three-server system, the load balancer directs the first request to server A, the next to server B, then C, and then A again, and so on. The round robin method is best applied in scenarios in which all the server hardware in the infrastructure is similarly capable (in computing power and capacity).
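To make the rotation concrete, here’s a minimal sketch in Python. The server names and the three-server pool are hypothetical, and a real load balancer works on live network connections rather than a printed list, but the selection logic is the same.

from itertools import cycle

# Hypothetical three-server pool (the A, B, C from the example above).
servers = ["server-a", "server-b", "server-c"]

# itertools.cycle walks the list in order and wraps back to the top,
# which is exactly the round robin rotation described above.
rotation = cycle(servers)

for request_id in range(7):
    print(f"request {request_id} -> {next(rotation)}")
# request 0 -> server-a, request 1 -> server-b, request 2 -> server-c,
# request 3 -> server-a, and so on.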

2. The Least Connections Method

Often a default load balancing algorithm, the least connections method assigns incoming requests to the server with the fewest active connections. It is a common default because it offers the best performance in most cases. The least connections method is best suited for situations in which server engagement time (the amount of time a connection stays active) varies. Under the round robin method, it is conceivable that one server could get overloaded—for example, if connections stay active longer on server A than on server B, server A could come under strain. The least connections method avoids this.
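Here’s a rough sketch of the selection step, assuming the balancer keeps a running count of active connections per server; the counts below are invented.

# Hypothetical snapshot of active connections per server.
active_connections = {"server-a": 42, "server-b": 17, "server-c": 23}

def pick_least_connections(connections):
    """Return the server currently holding the fewest active connections."""
    return min(connections, key=connections.get)

target = pick_least_connections(active_connections)
active_connections[target] += 1  # the new request now counts against that server
print(target)  # server-b

When a connection closes, the balancer decrements that server’s count, so long-lived connections naturally steer new traffic elsewhere.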

3. Weighted Least Connections

Also available with the round robin method (where it’s called the weighted round robin method, go figure), the weighted least connections algorithm allows each server to be assigned a priority, or weight. For example, if one server has more capacity than another, you might give the higher-capacity server a heavier weight. The algorithm then favors the more heavily weighted server when connection counts are tied (or otherwise comparable), reducing the load on the server with less capacity.
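One common way to express the weighting is to divide each server’s active connection count by its weight and send the request to the lowest score; the sketch below assumes that formulation, with invented numbers.

# Hypothetical pool: server-a has roughly twice the capacity of the others,
# so it carries weight 2. The lowest connections-per-weight score wins.
pool = {
    "server-a": {"connections": 10, "weight": 2},
    "server-b": {"connections": 8, "weight": 1},
    "server-c": {"connections": 9, "weight": 1},
}

def pick_weighted_least_connections(pool):
    return min(pool, key=lambda name: pool[name]["connections"] / pool[name]["weight"])

print(pick_weighted_least_connections(pool))  # server-a (10 / 2 = 5 beats 8 and 9)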

4. Source IP Hash

When a load balancer uses a source IP hash, each request coming in from a unique IP is assigned a key and that key is assigned a server. This not only evenly distributes traffic across the infrastructure, but it also allows for server consistency in the case of a disconnection/reconnection. A unique IP, once assigned, will always connect to the same server. According to Citrix, “Caching requests reduces request and response latency, and ensures better resource (CPU) utilization, making caching popular on heavily used websites and application servers.”
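Here’s a minimal sketch of the idea, using a simple hash-and-modulo mapping with made-up addresses; production balancers often use consistent hashing instead, so that adding or removing a server reshuffles as few clients as possible.

import hashlib

servers = ["server-a", "server-b", "server-c"]  # hypothetical pool

def pick_by_source_ip(client_ip, servers):
    """Hash the client's IP and map it onto a fixed slot in the pool."""
    digest = hashlib.md5(client_ip.encode()).hexdigest()
    return servers[int(digest, 16) % len(servers)]

# The same IP always lands on the same server, even after a reconnection.
print(pick_by_source_ip("203.0.113.7", servers))
print(pick_by_source_ip("203.0.113.7", servers))   # identical result
print(pick_by_source_ip("198.51.100.4", servers))  # may be a different server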

5. URL Hash

Almost identical to the source IP hash method, the URL hash method assigns keys based on the requested URL rather than the client’s IP address.
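The sketch is the same as the source IP hash above, just keyed on the requested URL (a hypothetical path is used here), which keeps each server’s cache warm for the URLs it serves.

import hashlib

servers = ["server-a", "server-b", "server-c"]  # hypothetical pool

def pick_by_url(url, servers):
    """Same idea as the source IP hash, keyed on the requested URL instead."""
    digest = hashlib.md5(url.encode()).hexdigest()
    return servers[int(digest, 16) % len(servers)]

print(pick_by_url("/products/42", servers))  # every request for this URL hits the same server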

6. The Least Response Time Method

Similar to the least connections method, the least response time method assigns requests based on both the number of connections on the server and the shortest average response time, thus reducing load by incorporating two layers of balancing.
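One simple way to combine the two signals is to score each server by active connections multiplied by average response time and pick the lowest score. The sketch below assumes that scoring, with invented numbers; real implementations may weight the two signals differently.

# Hypothetical per-server stats: active connections and average response time (ms).
stats = {
    "server-a": {"connections": 20, "avg_response_ms": 120},
    "server-b": {"connections": 18, "avg_response_ms": 300},
    "server-c": {"connections": 25, "avg_response_ms": 40},
}

def pick_least_response_time(stats):
    # A lower product of connections and response time means a lighter, faster server.
    return min(stats, key=lambda name: stats[name]["connections"] * stats[name]["avg_response_ms"])

print(pick_least_response_time(stats))  # server-c: moderately loaded but very fast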

7. The Bandwidth and Packets Method

A method of virtual server balancing, the bandwidth and packets method assigns requests to whichever server is currently handling the least traffic, measured either as bandwidth or as packets.
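Sketched with made-up traffic counters, the selection step might look like this; a real balancer samples these counters continuously, and the packets variant is identical, just keyed on packets per second.

# Hypothetical traffic currently flowing through each server, in Mbps.
bandwidth_mbps = {"server-a": 410.0, "server-b": 95.5, "server-c": 230.0}

def pick_least_bandwidth(bandwidth):
    """Send the next request to whichever server is moving the least traffic."""
    return min(bandwidth, key=bandwidth.get)

print(pick_least_bandwidth(bandwidth_mbps))  # server-b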

8. Custom Load

A complex algorithm that requires a load monitor, the custom load method uses an array of server metrics (CPU usage, memory, and response time, among other things) to determine request assignments.
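As an illustration only, the sketch below blends three invented metrics with invented weights into a single score; the metric names, weights, and normalization are all assumptions, since the actual mix depends on the load monitor in use.

# Hypothetical metrics a load monitor might report for each server.
metrics = {
    "server-a": {"cpu": 0.72, "memory": 0.60, "response_ms": 180},
    "server-b": {"cpu": 0.35, "memory": 0.45, "response_ms": 90},
    "server-c": {"cpu": 0.50, "memory": 0.80, "response_ms": 120},
}

# Made-up weights expressing how much each metric matters to this deployment.
WEIGHTS = {"cpu": 0.5, "memory": 0.3, "response_ms": 0.2}

def custom_load_score(m):
    """Blend the metrics into one score; response time is normalized to roughly 0-1."""
    return (WEIGHTS["cpu"] * m["cpu"]
            + WEIGHTS["memory"] * m["memory"]
            + WEIGHTS["response_ms"] * min(m["response_ms"] / 1000, 1.0))

print(min(metrics, key=lambda name: custom_load_score(metrics[name])))  # server-b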

9. Least Pending Requests (LPR)

With the least pending requests method, HTTP/S requests are monitored and distributed to the most available server. The LPR method can handle a surge of requests while monitoring the availability of each server, making for an even distribution across the infrastructure.
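In spirit this is the least connections method applied to in-flight HTTP/S requests. A rough sketch with invented counters, handling a small burst:

# Hypothetical count of HTTP/S requests currently in flight per server.
pending = {"server-a": 4, "server-b": 9, "server-c": 2}

def assign(request_id, pending):
    """Route the request to the server with the fewest pending requests."""
    target = min(pending, key=pending.get)
    pending[target] += 1  # the request is now pending on that server
    print(f"request {request_id} -> {target}")

for request_id in range(5):  # a small surge of requests
    assign(request_id, pending)
# When a response comes back, the balancer decrements that server's counter.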

As you can see, there are a lot of solutions to the same issue. One of them is bound to be the solution for you and your company’s unique needs. If you aren’t sure what the best algorithm/solution for you is, you can always work with your hosting provider to help you make the call.

What We Offer at Liquid Web

At Liquid Web, we offer shared or dedicated load balancers. Both options are fully managed. From design to implementation, administration, and monitoring, our network engineers will help make sure you are operating optimally.

Shared Load Balancers

Our managed shared load balancers—think many clients sharing a hardware/software/network infrastructure—are cost-effective, high-performing, and easily scalable (additional web servers can be added to the existing pool of load balanced servers). You’ll have full redundancy with automatic failover built right in. A shared solution is perfect for sites that have grown beyond a single web server.

  • Managed Shared Load Balancers are economical plans that include 1 Gbps of throughput, 100,000 concurrent sessions, 2-10 servers, and 1-10 virtual IPs.

Managed Dedicated Load Balancers

At Liquid Web, our dedicated load balancers are exactly that: completely dedicated to your enterprise. A dedicated solution comes with all of the benefits of shared load balancing but also features advanced traffic scripting options, a complete API, high-performance SSL, and a full set of resources committed to your infrastructure 24/7/365. With dedicated hardware, you’re guaranteed high performance, low latency, and no bottlenecking.

Cloud Load Balancers

As more and more companies operate (at least in part) within a cloud environment, a balancing solution within that same environment—as best practice dictates—becomes necessary.

Say hello to cloud load balancers.

Just like their physical counterparts, cloud load balancers distribute site traffic across redundant virtual nodes, ensuring uptime and mitigating performance issues caused by high traffic. A distinct advantage of the cloud load balancer over physical appliances is the ease and cost-effectiveness of scaling up to meet demand. Simply put, it’s quicker and cheaper to scale up in a cloud environment. At Liquid Web, we’ve got you covered regardless of the environment.

Algorithms

We offer a variety of algorithms, including the round robin method, the least connections method, and the least response time method.

A Final Word About Load Balancing

So, no matter what your goal, if you’ve moved beyond a single web server (or are about to), you would benefit from a load balancer—it will keep your website and your data up, running, highly available, and performing at peak levels. Whether you’re going to implement it yourself or are looking for a managed system, you’ll be better equipped to make decisions that benefit your company if you understand your needs, your current systems, and where you ultimately want to go. An HA system (of which load balancing is a part) has to be thought of not simply as improving uptime, but as mitigating downtime, the death knell of a company in today’s always-on, 24/7/365 digital economy. With a load balancing solution in place (physical, virtual, or both), you’ll be on your way toward a lean, mean, HA machine.

