Architecture Case Study: The Load Balancer Sandwich

Today, we're going to walk through several complex networking requirements and six different ways you can arrange load balancers, an approach sometimes referred to as a load balancer sandwich.

What is a Load Balancer?

A load balancer is a network device that distributes traffic or tasks across a set of resources, most commonly a pool of servers. There are two main types. A network load balancer operates at layer 4 of the OSI model and routes traffic based on the TCP and UDP headers in packets. Because it only has to read those headers, it decides very quickly which device to send each packet to; network load balancers are the fastest type and the most commonly used. An application load balancer operates at layer 7 of the OSI model and routes traffic based on URL paths and HTTP/HTTPS headers. That intelligence comes at the price of speed.

Ask someone to calculate 2 x 2 versus 256,879 / 54 in their head and it quickly becomes clear how much longer a complex problem takes than an easy one. Network devices are the same way: the more complicated the routing decision, the longer it takes to make. This is why an application load balancer is slower than a network load balancer. These are the fundamentals every cloud professional builds on to design their own solutions.
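To make the difference concrete, here is a minimal Python sketch (not any vendor's implementation; the backends and path rules are made up) of how each type decides where to send traffic: the network load balancer hashes the connection's 5-tuple, while the application load balancer parses the URL path.

```python
# Minimal sketch contrasting the two routing decisions.
# Backends, rules, and addresses are invented for illustration only.
import hashlib

L4_BACKENDS = ["10.0.1.10", "10.0.1.11", "10.0.1.12"]

def l4_route(src_ip: str, src_port: int, dst_ip: str, dst_port: int, proto: str) -> str:
    """Network load balancer: hash the TCP/UDP 5-tuple, never look at the payload."""
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}/{proto}".encode()
    return L4_BACKENDS[int(hashlib.md5(key).hexdigest(), 16) % len(L4_BACKENDS)]

L7_RULES = {              # URL path prefix -> backend pool
    "/orders": "order-service",
    "/billing": "billing-service",
}

def l7_route(path: str) -> str:
    """Application load balancer: parse the HTTP request and match the URL path."""
    for prefix, pool in L7_RULES.items():
        if path.startswith(prefix):
            return pool
    return "default-pool"

print(l4_route("203.0.113.5", 51514, "192.0.2.10", 443, "tcp"))  # one of L4_BACKENDS
print(l7_route("/orders/12345"))                                 # order-service
```

Notice that the layer-4 decision never looks past the headers, which is exactly why it is so fast.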

Company Present Architecture

[Diagram: current architecture, an F5 network load balancer in front of the supply chain application servers]

This organization is a software company that provides unique technical services to its customers. Orders are fulfilled through its supply chain application, which runs on 300 AMD Apex servers with 128 cores and 4 TB of RAM each; 250 of those servers process the orders. The organization's challenge is serving customers as quickly as possible while maintaining order accuracy. Today it has an F5 network load balancer in front of the app servers running the supply chain application.

Company New Architecture (Option A) NLB -> ALB -> Server

This arrangement is one of the most common. A network load balancer often serves as the frontend for application load balancers because it can absorb and forward traffic quickly. When the network load balancer receives traffic, it immediately offloads it to the next hop, which can be a server or another load balancer. Some applications or server processes require routing on a specific URL path or orders will be misplaced, and network load balancers can't route on URLs, only TCP and UDP. Application load balancers, on the other hand, are slow, but not by choice: the complex routing they perform takes extra time to compute, which costs performance. Combining the two brings out the best of each and the right chemistry to solve a complex networking problem.
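As one possible way to wire this up, AWS lets a network load balancer's target group point at an application load balancer, so the pattern can be sketched with boto3. This is only a sketch under that assumption; the names, subnets, VPC ID, and ALB ARN below are placeholders, not real resources.

```python
# Sketch of Option A on AWS with boto3: an NLB whose target group forwards to an
# existing ALB. Every name, subnet, VPC ID, and ARN here is a placeholder.
import boto3

elbv2 = boto3.client("elbv2")

# Placeholder ARN of the application load balancer that already fronts the servers.
ALB_ARN = "arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/orders-alb/0123456789abcdef"

# 1. Network load balancer as the fast, layer-4 front door.
nlb = elbv2.create_load_balancer(
    Name="orders-nlb",
    Type="network",
    Scheme="internet-facing",
    Subnets=["subnet-aaaa1111", "subnet-bbbb2222"],
)["LoadBalancers"][0]

# 2. Target group whose single target is the application load balancer itself.
tg = elbv2.create_target_group(
    Name="orders-alb-targets",
    Protocol="TCP",
    Port=443,
    VpcId="vpc-0123456789abcdef0",
    TargetType="alb",
    HealthCheckProtocol="HTTPS",
)["TargetGroups"][0]

elbv2.register_targets(
    TargetGroupArn=tg["TargetGroupArn"],
    Targets=[{"Id": ALB_ARN, "Port": 443}],
)

# 3. TCP listener on the NLB sends every connection straight to the ALB, which then
#    applies its own URL-path rules before the traffic reaches the servers.
elbv2.create_listener(
    LoadBalancerArn=nlb["LoadBalancerArn"],
    Protocol="TCP",
    Port=443,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg["TargetGroupArn"]}],
)
```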

[Diagram: Option A, NLB -> ALB -> Server]

Company New Architecture (Option B) ALB -> NLB -> Server

This arrangement is another common one. An application load balancer can also serve as the frontend for network load balancers. When an application load balancer receives traffic, it intelligently offloads it to the correct next hop, which can be a server or another load balancer. Many businesses have to meet service level agreements, or SLAs, which define the level of service a business promises to deliver. That may include fast delivery or turnaround, and an application load balancer can't meet that requirement on its own, which is why network load balancers handle the fast TCP and UDP routing behind it. Users can still submit unique requests and have them intelligently routed first, before being handed off for speedy delivery. Combining the two in this order produces a different result than the previous option.
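Here is a small Python sketch of the same idea, with made-up pools and paths: the application load balancer's layer-7 rule picks the right pool, and the network load balancer stage simply round-robins connections across that pool's servers.

```python
# Minimal sketch of Option B: the layer-7 front end picks a pool by URL path, then a
# layer-4 stage spreads connections across that pool's servers.
# Pool names, paths, and addresses are made up for illustration.
from itertools import cycle

NLB_POOLS = {
    "orders-pool":  cycle(["10.0.2.10", "10.0.2.11", "10.0.2.12"]),
    "catalog-pool": cycle(["10.0.3.10", "10.0.3.11"]),
}

ALB_PATH_RULES = [
    ("/orders", "orders-pool"),
    ("/catalog", "catalog-pool"),
]

def alb_then_nlb(path: str) -> str:
    """First hop (ALB): match the URL path. Second hop (NLB): round-robin in the pool."""
    pool = next((p for prefix, p in ALB_PATH_RULES if path.startswith(prefix)),
                "orders-pool")  # fall back to a default pool
    return next(NLB_POOLS[pool])

print(alb_then_nlb("/orders/42"))    # 10.0.2.10, then .11, .12 on later calls
print(alb_then_nlb("/catalog/abc"))  # 10.0.3.10
```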

[Diagram: Option B, ALB -> NLB -> Server]

Company New Architecture (Option C) NLB -> NLB -> ALB -> Server

This arrangement is common in cloud environments. Any professional in the cloud industry will tell you that virtual network devices are not on par with physical ones: a cloud-based load balancer does not perform as well as a physical load balancer, but it gets the job done. The problem is that most organizations migrating to the cloud still need extremely fast routing, and no one can bring an F5 load balancer into an AWS data center and plug it in. To work around this, cloud professionals often use a cloud-native network load balancer as the connector between the physical on-premises load balancer and the virtual application load balancer. This is used in organizations where scalability plus speed matters most, followed by accuracy. Combining them in this arrangement produces a more scalable result.
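The hand-off points can be pictured like this; every hostname and rule below is invented, and the point is only that the physical F5 needs nothing more than the cloud network load balancer's address as its next hop.

```python
# Sketch of Option C's hand-off points. All hostnames and rules are invented.
ONPREM_F5 = {
    "virtual-server": "orders.example.com:443",
    "pool-members": ["cloud-nlb-0123.elb.us-east-1.amazonaws.com:443"],  # next hop into the cloud
}
CLOUD_NLB = {
    "listener": "TCP:443",
    "targets": ["cloud-alb"],  # the cloud NLB forwards straight to the ALB
}
CLOUD_ALB = {
    "rules": {"/orders/*": "order-servers", "/*": "default-servers"},  # layer-7 decisions happen here
}

for name, hop in [("on-prem F5 NLB", ONPREM_F5), ("cloud NLB", CLOUD_NLB), ("cloud ALB", CLOUD_ALB)]:
    print(f"{name}: {hop}")
```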

[Diagram: Option C, NLB -> NLB -> ALB -> Server]

Company New Architecture (Option D) ALB -> ALB -> NLB -> Server

This arrangement is a less common one in cloud environments. There are situations where an organization has a unique product and users customize their orders, which can quickly lead to order complications if traffic isn't routed properly; speed is not your friend when dealing with extremely sensitive data. A cloud professional would use this arrangement when the front end of a web service or application has to be intelligently routed multiple times before reaching its final destination. Networking gets very complex with customizable products, because several groups of application load balancers may be needed to route the traffic accurately in multiple steps before it reaches the final server. The last network load balancer then quickly routes the remaining traffic to the target server, adding some speed between the application load balancers and the servers. This is used in organizations where accuracy and customer satisfaction matter most, followed by speed. Combining them in this arrangement brings out the most accurate results.
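A rough Python sketch of the two layer-7 stages followed by the final layer-4 hop, using invented routing tables, might look like this: the first application load balancer routes by product line, the second by a customization flag, and the network load balancer fans the result out to a server.

```python
# Sketch of Option D: two layer-7 decisions in a row, then a fast layer-4 hop.
# Routing tables and addresses are invented purely for illustration.
from itertools import cycle

# First layer-7 decision: pick the second-stage ALB by URL path.
FIRST_ALB = {"/enterprise": "enterprise-alb", "/retail": "retail-alb"}

# Second layer-7 decision: each second-stage ALB routes by a customization flag.
SECOND_ALB = {
    "enterprise-alb": {"custom": "custom-build-nlb", "default": "standard-nlb"},
    "retail-alb":     {"default": "standard-nlb"},
}

# Final layer-4 hop: the chosen NLB round-robins over real servers.
NLB_SERVERS = {
    "custom-build-nlb": cycle(["10.0.5.10", "10.0.5.11"]),
    "standard-nlb":     cycle(["10.0.6.10", "10.0.6.11", "10.0.6.12"]),
}

def route(path: str, query: str) -> str:
    second = next((alb for prefix, alb in FIRST_ALB.items() if path.startswith(prefix)),
                  "retail-alb")                     # first ALB matches the path prefix
    rules = SECOND_ALB[second]
    key = "custom" if "custom=true" in query else "default"
    nlb = rules.get(key, rules["default"])          # second ALB checks the customization flag
    return next(NLB_SERVERS[nlb])                   # final NLB fans out to a server

print(route("/enterprise/order", "custom=true"))  # lands on a custom-build server
print(route("/retail/order", ""))                 # lands on a standard server
```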

[Diagram: Option D, ALB -> ALB -> NLB -> Server]

Company New Architecture (Option E) NLB -> ALB -> NLB -> Server

This arrangement is only a slight variation on the previous option and is less common but still widely used. The main difference is that the network load balancer, rather than an application load balancer, is now the frontend. This works when the web services or applications deployed don't require intelligent routing immediately. One could also use this arrangement to pair the best physical application and network load balancers with a cloud setting: a cloud-native network load balancer takes traffic in, hands it to the application load balancers for intelligent routing, and passes it on to a network load balancer that quickly routes the traffic to its target and finalizes the process. Combining them in this arrangement brings out results unique to cloud-based deployments.

[Diagram: Option E, NLB -> ALB -> NLB -> Server]

Company New Architecture (Option F) ALB -> NLB -> ALB -> Server

The last possible arrangement is again only a slight variation on the previous option. It is used in enterprise-wide environments with hundreds or even thousands of systems, because just as an architecture can be decoupled into three main tiers, the networking layers can be decoupled as well. One would use this arrangement when intelligent routing is needed first, followed by a very fast hop to the next stage. When dealing with complicated product processes it is best to decouple the architecture; once decoupled, more complex processes can take place without confusion. An architect would view the first group of application and network load balancers as one tier and the next group as another, and there can be up to 10 tiers to manage. This arrangement makes multi-factored, complex routing much easier to handle.
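Here is a small sketch of how the hops group into tiers, with purely illustrative names: the first tier pairs an application load balancer with a network load balancer, and the second tier's application load balancer makes the last intelligent decision before the servers.

```python
# Sketch of Option F grouped into tiers. Names and addresses are illustrative only.
TIERS = {
    "edge-tier": [("edge-alb", 7), ("edge-nlb", 4)],   # intelligent entry, fast hand-off
    "app-tier":  [("app-alb", 7)],                     # final layer-7 decision
}
SERVERS = ["10.0.7.10", "10.0.7.11"]

def trace_request(path: str) -> None:
    """Walk a request tier by tier, showing which layer decides at each hop."""
    for tier, hops in TIERS.items():
        for name, layer in hops:
            action = "matches the URL path" if layer == 7 else "forwards on TCP/UDP"
            print(f"[{tier}] {name} (layer {layer}) {action} for {path}")
    print(f"delivered to one of {SERVERS}")

trace_request("/orders/custom/42")
```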

[Diagram: Option F, ALB -> NLB -> ALB -> Server]

Thank you for your time, and I hope you enjoyed this architecture case study!

Dan, the Architect.

  • May the cloud be with you.