Architecture Case Study: Complex Data-Tier Systems within Multi-Tier Web Architecture


Today, we are going to solve a technology company's data-tier and multi-tier architecture problems. This company is one of the newest and fastest-growing technology companies of the year. It is a US-based company with more than 5,000 employees, 1,500 of whom work remotely and access the company's data centers through a Virtual Private Network (VPN).

The company hosts its main data center in Madison, Wisconsin, USA, privately connected to 200 offices within 100 miles. A second data center in Miami, Florida, USA, is privately connected to 500 offices within 250 miles.

This company runs its supply chain application in Madison on 200 AMD Apex servers (128 cores and 4 TB of RAM each) configured in RAID 5, running at 84% capacity, and it does not want to refactor the software. The supply chain processes orders, so it is a business-critical application. The company also runs its website, apps, and database servers on 1,500 AMD Apex servers of the same specification, running 24 hours a day at 80% capacity, split between the Madison and Miami data centers. The company is interested in upgrading its data tier to handle more traffic: it wants to improve read and write throughput while also gaining the ability to query and analyze its database data. It also wants to automate its data-analysis processes to reduce the manual effort involved.

The company cannot tolerate a breach of its systems. This is a $20 billion business growing 14% year over year, with the potential to reach 21% through an optimized supply chain, improved data analytics and database performance, and new customer-intimacy initiatives. The company wants an architecture that will improve its business performance and data-tier processes; specifically, it is seeking a data-tier architecture focused on decoupling its data layer.

Company Present Architecture

Madison's data center connects to the Miami data center via two 10-gigabit direct connections. The routing protocol between the data centers is Open Shortest Path First (OSPF) using Area 0 (the backbone area), operating within a single autonomous system (AS).
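OSPF is a link-state protocol: each router floods link-state advertisements and then runs Dijkstra's algorithm over the resulting topology database to find the lowest-cost path to every destination. Here is a minimal sketch of that computation; the router names and link costs are purely illustrative, not the company's actual topology.

```python
import heapq

def dijkstra(graph, source):
    """Compute lowest-cost paths from `source`, as an OSPF router would
    over its link-state database. graph: {node: {neighbor: cost}}."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        cost, node = heapq.heappop(heap)
        if cost > dist.get(node, float("inf")):
            continue  # stale heap entry, a shorter path was already found
        for neighbor, weight in graph[node].items():
            new_cost = cost + weight
            if new_cost < dist.get(neighbor, float("inf")):
                dist[neighbor] = new_cost
                heapq.heappush(heap, (new_cost, neighbor))
    return dist

# Hypothetical Area 0 topology: OSPF link costs between routers
area0 = {
    "madison-core": {"madison-edge": 1, "miami-core": 10},
    "madison-edge": {"madison-core": 1, "miami-core": 12},
    "miami-core": {"madison-core": 10, "madison-edge": 12, "miami-edge": 1},
    "miami-edge": {"miami-core": 1},
}
print(dijkstra(area0, "madison-core"))
```

With these costs, Madison's core router reaches Miami's edge router via the direct 10-cost link plus one hop, for a total cost of 11.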


This company leverages the top six internet providers on its internet-facing routers to give the supply chain application better performance and high availability. Border Gateway Protocol (BGP) is an exterior gateway protocol that load-shares traffic across providers; it could be considered the GPS of IP routing because it dynamically learns routes from its peers. Internal BGP (iBGP) runs between routers within the same autonomous system to distribute the routes learned from the internet service providers, while external BGP (eBGP) handles connections between different autonomous systems, such as connecting the data center to its internet providers.
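One of BGP's core tie-breakers when several providers advertise the same prefix is AS-path length: all else being equal, a router prefers the route that traverses the fewest autonomous systems. The sketch below shows just that one step of best-path selection; the provider names and AS numbers are made up for illustration.

```python
def best_route(routes):
    """Pick the advertisement with the shortest AS path -- a simplified
    version of one tie-breaker in BGP best-path selection."""
    return min(routes, key=lambda r: len(r["as_path"]))

# Hypothetical advertisements for the same prefix from three providers
advertisements = [
    {"provider": "ISP-A", "as_path": [64501, 64510, 64520]},
    {"provider": "ISP-B", "as_path": [64502, 64520]},
    {"provider": "ISP-C", "as_path": [64503, 64511, 64512, 64520]},
]
print(best_route(advertisements)["provider"])  # ISP-B: shortest AS path
```

Real BGP applies several criteria before AS-path length (local preference, origin, and so on); this sketch isolates the one most relevant to multi-homing across six providers.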

Behind the routers, firewalls protect the company network. A VPN concentrator sits in a demilitarized zone (DMZ) to terminate the IPsec connections from remote employees and place their traffic behind the firewalls so they can access the company's internal systems.


In terms of security, the company uses a Cisco firewall as its first layer of defense to keep intruders out. Behind it, a Cisco IDS/IPS provides intrusion detection and prevention. The company applies access control lists on the routers, 802.1Q VLAN tagging to segment traffic, and host-based firewalls on the servers. Microsoft Active Directory stores information about users on the network, and encryption is done with AES-256 to ensure data security.


Regarding its 3-tier application architecture, the company uses network load balancers to distribute traffic to its web servers; because the load balancers conduct periodic health checks, the web tier stays highly available. Behind them, a group of application load balancers routes traffic intelligently to the app servers. The MySQL database is hosted on a single AMD Apex 128-core server and crashes regularly. All of the storage is mounted in RAID 5, which provides fast reads because of striping.
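The load balancers' core job, skipping unhealthy back ends while spreading requests across the rest, can be sketched in a few lines. The server names and the health-check predicate below are illustrative stand-ins, not the company's configuration.

```python
import itertools

class LoadBalancer:
    """Round-robin balancer that only hands out servers passing a health check."""

    def __init__(self, servers, is_healthy):
        self.servers = servers
        self.is_healthy = is_healthy          # health-check callback
        self._cycle = itertools.cycle(servers)

    def next_server(self):
        # Try each server at most once per request before giving up.
        for _ in range(len(self.servers)):
            server = next(self._cycle)
            if self.is_healthy(server):
                return server
        raise RuntimeError("no healthy servers available")

# Hypothetical web tier: web-2 is failing its health checks
health = {"web-1": True, "web-2": False, "web-3": True}
lb = LoadBalancer(["web-1", "web-2", "web-3"], lambda s: health[s])
print([lb.next_server() for _ in range(4)])  # ['web-1', 'web-3', 'web-1', 'web-3']
```

This is why the web tier stays available when a server crashes: traffic simply flows around it until the health check passes again.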


The supply chain software runs in the Miami data center. It is fronted by network load balancers that can support millions of requests and backed by a MySQL database. However, the supply chain application often requests the same products on behalf of customers, and the database it pulls from loses orders during peak hours.


Let's now implement the new architecture to improve the ABC Technology network.

Company New Architecture

After evaluating the company's systems, I've found that the best way to solve its architecture problems is to implement a multi-cloud solution, which means utilizing multiple public or private cloud providers in conjunction with a data center. I will use AWS as the primary cloud to leverage the quality of its infrastructure. I will create copies of the data that needs to be in the cloud while keeping the physical data centers' data intact, so that if both clouds were to fail, the company could run from its physical data centers until the outage clears up. AWS will define the layout for what the solution should look like on any cloud. I will use Azure as the backup cloud with an active/passive disaster recovery model, which gives us a recovery time objective (RTO) of less than 15 minutes. I will repurpose the servers freed up during the migration to store latency-sensitive data or maintain backups for later recovery.
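An active/passive model amounts to a watchdog: keep the Azure standby warm, and promote it only after the AWS primary fails enough consecutive health checks to rule out a transient blip. The sketch below shows that decision; the check interval and failure threshold are assumptions for illustration, chosen so failover completes well inside the 15-minute RTO.

```python
def choose_active(failed_checks, failover_after=3):
    """Active/passive failover decision: promote the Azure standby after
    `failover_after` consecutive failed health checks on the AWS primary.
    With e.g. 30-second checks, this keeps recovery well inside a
    15-minute recovery time objective (RTO)."""
    return "azure" if failed_checks >= failover_after else "aws"

print(choose_active(0))  # aws: primary healthy
print(choose_active(2))  # aws: possible blip, don't flap
print(choose_active(3))  # azure: sustained failure, promote the standby
```

Requiring several consecutive failures before failing over is the usual guard against "flapping," where a brief network hiccup would otherwise bounce traffic between clouds.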

The internet connectivity in the present architecture leaves little headroom. Because of the supply chain's dependency on the internet, I will provision four more internet connections for each data center.


I understand the company's concern about depending on a single cloud provider. I will keep the data-center connectivity from the present architecture and connect both data centers to the cloud. To avoid that dependency, I will connect them to both AWS and Azure and synchronize backup copies of their critical applications to both clouds. If anything goes wrong with AWS, Microsoft Azure will become the primary cloud within 15 minutes.


I will move the supply chain to the cloud because it is the most straightforward solution. I will place the application in an auto-scaling group with both cloud providers (AWS and Azure) without refactoring the software. That will support the company's 14% growth for at least three years: the supply chain will not become unavailable to new customers, and the company will still benefit from all the infrastructure already in place at no additional charge. Migrating the supply chain to the cloud will allow it to scale as needed and eliminate server capacity issues. I will also implement a queuing system in front of the database to address the loss of orders during peak hours: in AWS, I will use SQS to buffer orders and smooth the write load, and add Amazon ElastiCache so the supply chain application can serve the most commonly purchased items without going to the database every time. This will drastically improve speed and performance. Finally, I will provision Amazon Aurora as the new data source for the supply chain application, for several reasons: Aurora is MySQL-compatible, so the migration will be clean; it can scale up to 15 read replicas very quickly; it performs continuous backups and restores with a very low recovery point objective (RPO); and it automatically fails over to a secondary database if there is an issue.
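The two data-path changes here, buffering writes through a queue so peak-hour orders are never dropped, and serving hot reads through a cache-aside layer, can be sketched together. The code below uses in-memory stand-ins for SQS, ElastiCache, and Aurora; it illustrates the pattern, not those services' APIs.

```python
from collections import deque

orders = deque()   # stands in for the SQS queue
database = {}      # stands in for the Aurora database
cache = {}         # stands in for ElastiCache

def submit_order(order_id, item):
    """Writes land in the queue first, so a database under peak load
    never drops an order -- it just processes it slightly later."""
    orders.append((order_id, item))

def drain_orders():
    """A worker persists queued orders at the database's own pace."""
    while orders:
        order_id, item = orders.popleft()
        database[order_id] = item

def get_product(item, load_from_db):
    """Cache-aside read: serve popular items from the cache,
    fall back to the database only on a miss."""
    if item not in cache:
        cache[item] = load_from_db(item)
    return cache[item]

submit_order(1, "widget")
submit_order(2, "gadget")
drain_orders()
print(database)  # {1: 'widget', 2: 'gadget'}
```

The key property is decoupling: the rate at which orders arrive is no longer bound by the rate at which the database can commit them, which is exactly the failure mode the current architecture hits at peak.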


The security architecture in the cloud will be as follows: AWS Shield Advanced for DDoS protection, plus a high-performance firewall and IDS/IPS from the AWS Marketplace. I will keep network access control lists as a layer of security for the subnets and add anti-virus/anti-malware on all servers. I will disable any unnecessary services and use AWS security groups to individually protect the virtual machines (EC2 instances). AWS Key Management Service (KMS) will manage data encryption, and IAM will handle authentication, authorization, and auditing of user activity. AWS Managed Microsoft AD will manage users both in the cloud and on-premises, with Multi-Factor Authentication (MFA) adding an extra layer of protection on top of username and password.


For the cloud 3-tier application architecture, I will use AWS elastic load balancers to handle millions of requests. I will provision Redshift as the data warehouse for the whole data tier; Redshift will be responsible for data analytics and querying, and its processes can be automated. I will also provision a content delivery network so that the insights used to create new customer initiatives can be rolled out and delivered to users quickly. Finally, I will replace the RAID 5 configuration with RAID 1+0, which will give the company the best performance and the highest availability.
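The RAID trade-off can be quantified: RAID 5 keeps all but one disk's worth of capacity usable but pays a parity penalty on writes and survives only a single disk failure, while RAID 1+0 halves usable capacity in exchange for mirrored performance and better rebuild behavior. A quick comparison, with a hypothetical disk count and size:

```python
def raid_usable_tb(level, disks, disk_tb):
    """Usable capacity for two common RAID levels."""
    if level == "raid5":
        return (disks - 1) * disk_tb   # one disk's worth lost to parity
    if level == "raid10":
        return (disks // 2) * disk_tb  # every disk is mirrored
    raise ValueError(f"unsupported level: {level}")

# Hypothetical array: eight 4 TB disks
print(raid_usable_tb("raid5", 8, 4))   # 28 TB usable, tolerates 1 failure
print(raid_usable_tb("raid10", 8, 4))  # 16 TB usable, tolerates 1 per mirror pair
```

The company trades 12 TB of usable space per eight-disk array for faster writes (no parity calculation) and the ability to lose one disk in every mirror pair without data loss.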


The new architecture will allow the company to sustain its 14% growth and reach its forecast 21% year-over-year growth.

NB: This architecture is a high-level representation; the full design would be much more detailed and complex. The intended audience is the general public.

Thank you for your time, and I hope you enjoyed this architecture case study.

Dan, the Architect.

  • May the cloud be with you.