Microservices are an architectural and organisational approach to software development designed to speed up deployment cycles, foster innovation and ownership, and improve the maintainability and scalability of software applications. In a microservices architecture, the application consists of small independent services that communicate over well-defined APIs. These services are owned by small self-contained teams.
In this post, I will briefly explain the challenges you may encounter during microservices architecture adoption and the AWS services that address them.
This article aims to help you get started with AWS’ Amazon Elastic Container Service (Amazon ECS). I will share my experience in building fully scalable, well-designed workloads.
The Challenges of Microservices
There are a lot of concepts behind microservices architecture but the most fundamental are:
- Decentralisation – microservices architectures are distributed systems with decentralised data management.
- Independence – a microservice can be individually changed, upgraded or replaced and this should not affect other microservices.
- Do one thing well – each microservice focuses on a specific domain; if that domain becomes too complex, the service should be divided into smaller ones.
- Polyglot – each microservice can use whichever operating system, programming language, or data store suits it best; services do not need to be homogeneous.
- Black Box – a microservice hides its complexity from other microservices.
- You build it, you run it – following DevOps principles, the team is responsible for building, operating and maintaining individual services.
All of the characteristics above make it possible to overcome the limitations of a monolith with agility and scalability. However, microservices adoption can be challenging as well. Distributed systems introduce concerns that newcomers often overlook: latency, bandwidth, reliability, cascading failures, service discovery, and versioning. Since a microservices architecture consists of many services, how can you be sure that Service A interacts with the latest version of Service B?
There are many other challenges you may face during the microservices architecture adoption journey. In the following sections of this article, I will discuss how AWS addresses these challenges by offering the Amazon ECS managed service.
What is the Amazon Elastic Container Service?
Amazon ECS is a fully managed container orchestration service. It is secure, scalable, reliable and cost-optimised, and it integrates with many other AWS services such as Amazon Route 53, AWS Identity and Access Management, and AWS Secrets Manager. Amazon ECS makes it easy to deploy, manage, and scale Docker containers running applications, services, and batch processes. Additionally, your ECS services can be hosted on serverless infrastructure (Fargate launch type) or on virtual machines (EC2 launch type).
How does the Amazon Elastic Container Service work?
To deploy applications on Amazon ECS, application components must be architected and implemented to run in containers. A Docker container is a standardised unit of software that packages an application together with its dependencies; see the official Docker documentation for more details. Container images are stored in a registry, from which they can be downloaded and run on Amazon ECS. To fully understand Amazon ECS’ structure, it’s important to first learn a few definitions.
- Task Definition – the most important part of Amazon ECS; a text file in JSON format that describes your application. A task definition contains one or more container definitions (up to ten). In a microservices architecture, a task definition usually holds a single container definition; however, it can also be treated as a unit of deployment and include several containers that together make up one microservice.
- Tasks and Scheduling – a task is an instantiation of a task definition. If you know Object-Oriented Programming, the relationship is similar to that between a class and its instances. The Amazon ECS scheduler is responsible for placing tasks on an ECS cluster (I cover the ECS Cluster in greater detail below).
- Service – Amazon ECS allows you to simultaneously run and maintain a specified number of tasks (task definition instances) on the cluster. This is the Amazon ECS service. If a task fails, the service scheduler will launch a new instance of your task definition and replace the failed one. The service maintains a desired number of tasks based on the chosen scheduling strategy.
- Cluster – the logical grouping of resources that tasks run on. If you choose the Fargate launch type, all resources are managed by Amazon ECS. However, with the EC2 launch type, the cluster consists of a group of container instances that you manage.
- Container Agent – in the EC2 launch type, it runs on each infrastructure resource within the ECS Cluster. The container agent is a kind of proxy between tasks that reside on the EC2 resource and Amazon ECS. It starts and stops tasks whenever it receives a request from Amazon ECS.
- Elastic Container Registry – AWS’ managed registry for Docker container images.
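To make the task definition concrete, here is a minimal sketch of one for the Fargate launch type, expressed as the Python dict you would pass to boto3’s `ecs.register_task_definition(**task_definition)`. The family name, image URI, and account ID are hypothetical placeholders, not values from this article.

```python
# Minimal Fargate task definition sketch; all names and ARN-like values
# are hypothetical placeholders.
task_definition = {
    "family": "shop-api",                    # hypothetical service name
    "networkMode": "awsvpc",                 # required for Fargate
    "requiresCompatibilities": ["FARGATE"],
    "cpu": "256",                            # 0.25 vCPU
    "memory": "512",                         # 512 MiB
    "containerDefinitions": [
        {
            "name": "shop-api",
            "image": "123456789012.dkr.ecr.eu-west-1.amazonaws.com/shop-api:latest",
            "portMappings": [{"containerPort": 8080, "protocol": "tcp"}],
            "essential": True,
        }
    ],
}

# A task definition may hold between one and ten container definitions.
assert 1 <= len(task_definition["containerDefinitions"]) <= 10
```

In the single-container-per-task-definition style described above, each microservice gets its own task definition and is versioned independently.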
Additionally, we may want to run the ECS service behind an Application Load Balancer, which distributes incoming traffic across the tasks associated with the service. The diagram below shows how Amazon ECS orchestrates components between the Docker registry, the programmer, and the end user.
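The service is the glue between the load balancer and the task definition. As a sketch, these are the parameters you would pass to boto3’s `ecs.create_service(**service_params)`; the cluster name, service name, and target group ARN are hypothetical.

```python
# Sketch of create_service parameters linking an ECS service to an ALB
# target group; all names and the ARN are hypothetical placeholders.
service_params = {
    "cluster": "shop-cluster",
    "serviceName": "shop-api",
    "taskDefinition": "shop-api:1",   # family:revision
    "desiredCount": 3,                # ECS keeps three tasks running
    "launchType": "FARGATE",
    "loadBalancers": [
        {
            # The ALB forwards traffic from this target group
            # to the named container on the given port.
            "targetGroupArn": (
                "arn:aws:elasticloadbalancing:eu-west-1:"
                "123456789012:targetgroup/shop-api/abc123"
            ),
            "containerName": "shop-api",
            "containerPort": 8080,
        }
    ],
}
```

The `desiredCount` is the number of tasks the service scheduler maintains: if a task fails its health check, ECS replaces it to get back to three.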
Simple Microservices Architecture on AWS
Let’s assume that you are facing a serious microservices architecture design challenge for one of your workloads. You’re asked to plan and design a workload for an online shop that needs to be reliable, secure, and scalable depending on the data load.
The first aspect you should figure out is what the application’s current architecture looks like. Assuming you managed to successfully decouple your application into domain-driven microservices, this is where Amazon ECS comes in. We have a few Docker containers which make up the application. Some of them are more latency-sensitive, so let’s focus on those for a while. As this is a distributed system, we need to be sure that whenever there’s a bandwidth peak or one of our microservices goes down, Amazon ECS will automatically react, without delay, to maintain application health and reliability.
- We should put the Application Load Balancer in front. This part of our architecture works out of the box to distribute traffic evenly. The only thing we need to think about is routing. The most common type is path-based routing, which forwards traffic to the proper microservice based on the request’s API path. You can find more about ALB listeners (which are responsible for routing) in the AWS documentation.
- The Application Load Balancer routes traffic to target groups – in our case, an Auto Scaling group (a group of EC2 resources on which container agents reside). The Auto Scaling group addresses our bandwidth peaks by scaling out and in as needed.
- At this point, the infrastructure is built and we’re ready to design the Amazon ECS service. We create a task definition using the URI of our Docker image. The glue between the Application Load Balancer and the task definition is the service: each service is associated with a target group, which keeps the traffic flow simple.
- Our architecture addresses the cascading failures challenge in two ways. Firstly, Amazon ECS checks whether tasks are up and running; if not, it replaces a failed task with a new instance to meet the desired number of tasks within the service. Secondly, at the infrastructure level, target group health checks are performed against the configured microservice path.
- Whenever a new revision of the task definition is registered, running tasks are stopped and new tasks are started from the latest revision. This is a rolling update deployment. Another deployment type that Amazon ECS offers is blue/green.
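The path-based routing mentioned in the steps above can be sketched in a few lines of Python. This is a toy model of what an ALB listener does, not AWS code: each rule maps a path pattern to a target (here, a hypothetical service name), and unmatched paths fall through to the listener’s default action.

```python
# Toy model of ALB path-based routing: first matching rule wins,
# otherwise the default action applies. Patterns and service names
# are hypothetical.
from fnmatch import fnmatch

RULES = [
    ("/cart*", "cart-service"),
    ("/orders*", "order-service"),
    ("/products*", "product-service"),
]
DEFAULT = "frontend-service"  # the listener's default action

def route(path: str) -> str:
    """Return the service that should receive a request for `path`."""
    for pattern, service in RULES:
        if fnmatch(path, pattern):
            return service
    return DEFAULT

print(route("/cart/42"))     # -> cart-service
print(route("/index.html"))  # -> frontend-service
```

In a real ALB, each rule’s target is a target group ARN rather than a name, and rules are evaluated in priority order, which this first-match loop approximates.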
To summarise, Amazon ECS enables you to quickly run, maintain, and scale a microservices architecture. Most of the challenges of microservices adoption are solved by ECS orchestration. With Amazon ECS, you can easily build your microservices workloads at a low cost (in terms of both CAPEX and OPEX). Amazon ECS also provides flexibility when it comes to infrastructure: depending on the type of workload, you can choose between the serverless Fargate launch type and the EC2 launch type.