
Practical Lessons Learned From Building a Microservices Platform

Technology

Apr 6, 2021 - 5-minute read


At Objectivity, we work with multiple clients in many different domains. For one of our clients from the real estate industry, we built a greenfield microservices platform. Initially, we were supposed to implement eight services, but that number later grew to seventeen. The technologies we used include:

  • Frontend: Angular, TypeScript, leaflet.js (for maps)
  • Backend: .NET Core, EF Core, Kubernetes cluster, Managed PostgreSQL, Azure Service Bus/RabbitMQ

Let’s talk about the most interesting lessons we learned over the course of that project.

Design and Architecture

An event storming session was carried out at the beginning of the project. Despite our limited business knowledge and the incomplete requirements, it resulted in a sufficient division of the domain into microservices.

Most of the services were rooted in the business domain and only two (notifications and data hub) were technically focused. This was a result of an unclear vision, e.g. regarding how to process the data collected by the IoT sensors.

Because of GDPR and user anonymisation requirements, we stored user-related data in a single service only. Other microservices refer to a user through an artificial key (UserId).
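
A minimal sketch of that split, with hypothetical class and property names rather than the project’s actual model: the user service is the only owner of personal data, while every other service persists just the artificial key.

```csharp
using System;

// User service: the only place where personal (GDPR-relevant) data lives.
public class User
{
    public Guid Id { get; set; }               // the artificial key shared with other services
    public string DisplayName { get; set; } = "";
    public string Email { get; set; } = "";
}

// Any other microservice: it stores the key only, never the personal attributes.
public class Reservation
{
    public Guid Id { get; set; }
    public Guid UserId { get; set; }           // resolved via the user service when needed
    public DateTime From { get; set; }
    public DateTime To { get; set; }
}
```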

During this project, we reaffirmed our belief that creating the API gateway early is a worthwhile investment. It simplifies the development of client applications (in our case, a web UI and a mobile app), because they don’t have to know the exact split of functionality between microservices or the data interdependencies.

Versioning and Contracts

We didn’t version services or the events exchanged between them separately. Our team was relatively small (fewer than 10 developers), and everyone worked on almost the entire platform, so it was easier for us to make all changes at once and roll them out to the users. This approach was fast and a good fit for the project.

During the development phase, we experienced a lot of service contract changes. If more teams had been working on the platform, we would have spent a lot of time on communication and on dividing the overall backlog into the teams’ backlogs.

Backend

It’s important to always consider read operations in a microservices architecture. Very often, a single microservice contains just an ID instead of the full set of attributes of an object. Generally, there are two ways to solve this issue:

  • Copy the object’s attributes between services. This can be the full set of attributes or just the subset required by the given microservice. The main benefit of this approach is the scalability and resiliency of the microservices, as they all work independently. The drawback is that you have to update the copy on every source data change, which generates more work and introduces a synchronisation delay (see the sketch after this list).

  • Use an API gateway, i.e. a dedicated microservice that calls the other services and combines their data for the API consumer. The advantages include no data duplication and less code overall, but this comes at the cost of decreased scalability and availability.
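
A minimal sketch of the first option, assuming a hypothetical UserRenamed integration event and a local snapshot store; the names are illustrative, not the project’s actual contracts. The handler keeps the local copy in sync whenever the source service publishes a change.

```csharp
using System;
using System.Threading.Tasks;

// Hypothetical integration event published by the user service.
public record UserRenamed(Guid UserId, string NewDisplayName);

// Local, denormalised copy of the attributes this service needs for reads.
public interface IUserSnapshotStore
{
    Task UpdateDisplayNameAsync(Guid userId, string displayName);
}

// Runs whenever the event arrives, so the copy lags behind the source
// by the queue’s delivery delay.
public class UserRenamedHandler
{
    private readonly IUserSnapshotStore _store;

    public UserRenamedHandler(IUserSnapshotStore store) => _store = store;

    public Task HandleAsync(UserRenamed @event) =>
        _store.UpdateDisplayNameAsync(@event.UserId, @event.NewDisplayName);
}
```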

A third alternative could be to combine data from multiple services directly in the UI layer. However, this wouldn’t have worked for our project: we had multiple UIs (web and mobile), so this path would have duplicated most of the join-related logic.
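
For the API gateway option, here is a minimal composition sketch (the endpoints, DTOs and HttpClient setup are assumptions, not the project’s actual code): the gateway fetches data from two downstream services and joins it on UserId before returning a single response to the client.

```csharp
using System;
using System.Linq;
using System.Net.Http;
using System.Net.Http.Json;
using System.Threading.Tasks;

public record ReservationDto(Guid Id, Guid UserId, DateTime From, DateTime To);
public record UserDto(Guid Id, string DisplayName);
public record ReservationView(Guid Id, string UserDisplayName, DateTime From, DateTime To);

public class ReservationComposer
{
    private readonly HttpClient _reservations; // BaseAddress assumed to point at the reservations service
    private readonly HttpClient _users;        // BaseAddress assumed to point at the user service

    public ReservationComposer(HttpClient reservations, HttpClient users) =>
        (_reservations, _users) = (reservations, users);

    public async Task<ReservationView[]> GetReservationViewsAsync()
    {
        // Two downstream calls: the gateway pays for both round trips,
        // so its availability is bounded by the weakest dependency.
        var reservations = await _reservations.GetFromJsonAsync<ReservationDto[]>("/api/reservations")
                           ?? Array.Empty<ReservationDto>();
        var users = await _users.GetFromJsonAsync<UserDto[]>("/api/users")
                    ?? Array.Empty<UserDto>();

        var usersById = users.ToDictionary(u => u.Id);
        return reservations
            .Select(r => new ReservationView(
                r.Id,
                usersById.TryGetValue(r.UserId, out var user) ? user.DisplayName : "unknown",
                r.From,
                r.To))
            .ToArray();
    }
}
```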

We used CQRS Light within a single API, which means that we had separate read and write pipelines but only a single database. It’s quite a flexible approach that helps to avoid inflated domain classes, as long as you stick to the separation rules. Furthermore, this method lets you use a lightweight ORM (like Dapper) for queries and a full ORM (Entity Framework) for commands.
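
A minimal sketch of what the two pipelines over one database can look like, assuming PostgreSQL via Npgsql and illustrative table, column and class names: Dapper fills a flat read model straight from SQL, while commands go through the EF Core model.

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Dapper;
using Microsoft.EntityFrameworkCore;
using Npgsql;

// Read pipeline: a flat DTO populated directly from SQL, bypassing the domain model.
public class ReservationListItem
{
    public Guid Id { get; set; }
    public DateTime From { get; set; }
    public DateTime To { get; set; }
}

public class ReservationQueries
{
    private readonly string _connectionString;
    public ReservationQueries(string connectionString) => _connectionString = connectionString;

    public async Task<IEnumerable<ReservationListItem>> GetUpcomingAsync(Guid userId)
    {
        await using var connection = new NpgsqlConnection(_connectionString);
        return await connection.QueryAsync<ReservationListItem>(
            @"SELECT id, ""from"", ""to"" FROM reservations WHERE user_id = @userId AND ""from"" > now()",
            new { userId });
    }
}

// Write pipeline: commands go through the EF Core model and its change tracking.
public class Reservation
{
    public Guid Id { get; set; }
    public Guid UserId { get; set; }
    public DateTime From { get; set; }
    public DateTime To { get; set; }
}

public class ReservationsDbContext : DbContext
{
    public ReservationsDbContext(DbContextOptions<ReservationsDbContext> options) : base(options) { }
    public DbSet<Reservation> Reservations => Set<Reservation>();
}

public class CreateReservationHandler
{
    private readonly ReservationsDbContext _db;
    public CreateReservationHandler(ReservationsDbContext db) => _db = db;

    public async Task HandleAsync(Guid userId, DateTime from, DateTime to)
    {
        _db.Reservations.Add(new Reservation { Id = Guid.NewGuid(), UserId = userId, From = from, To = to });
        await _db.SaveChangesAsync();
    }
}
```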

Infrastructure

Our microservices platform was supposed to be cloud agnostic. As a result, we had to deal with different queue implementations (Azure Service Bus and RabbitMQ). In such situations, it’s better to model cross-service communication through multiple queues from day one; this makes it easier to adapt to another queuing implementation. Unfortunately, we made the mistake of starting with one common queue on Azure Service Bus.

The separation of domain logic from infrastructure and integration code is a good and well-known practice, but it has to be done properly. If you don’t stay vigilant, you might accept too many shortcuts under the pressure of delivery. We recommend defining implementation-agnostic interfaces and inverting the dependencies, so that infrastructure code does not leak into the domain code.
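
A minimal sketch of such an implementation-agnostic interface (the names are illustrative): the domain code depends only on a small publishing abstraction, while the Azure Service Bus and RabbitMQ specifics live in separate adapters wired up at composition time.

```csharp
using System.Threading.Tasks;

// Defined next to the domain code; it says nothing about queues, topics or brokers.
public interface IEventPublisher
{
    Task PublishAsync<TEvent>(TEvent @event) where TEvent : class;
}

// Infrastructure adapters implement the abstraction; only this layer
// references the Azure Service Bus or RabbitMQ client libraries.
public class AzureServiceBusEventPublisher : IEventPublisher
{
    public Task PublishAsync<TEvent>(TEvent @event) where TEvent : class
    {
        // serialise the event and send it via the Service Bus client here
        return Task.CompletedTask;
    }
}

public class RabbitMqEventPublisher : IEventPublisher
{
    public Task PublishAsync<TEvent>(TEvent @event) where TEvent : class
    {
        // serialise the event and publish it to an exchange here
        return Task.CompletedTask;
    }
}
```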

Logging and monitoring are a foundation of distributed systems. In a monolith, you can introduce them over time, but with microservices, the first bugs will show you how necessary they are.
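
One hedged example of the kind of groundwork this implies (the header name and endpoint are assumptions): every request gets a correlation id that is reused downstream and attached to each log entry, so a single user action can be traced across services.

```csharp
using System;
using System.Collections.Generic;
using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Logging;

var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

app.Use(async (context, next) =>
{
    // Reuse the caller's correlation id or start a new one, so one user action
    // can be followed across all the services it touches.
    var correlationId = context.Request.Headers.TryGetValue("X-Correlation-Id", out var incoming)
        ? incoming.ToString()
        : Guid.NewGuid().ToString();

    context.Response.Headers["X-Correlation-Id"] = correlationId;

    var logger = context.RequestServices.GetRequiredService<ILogger<Program>>();
    using (logger.BeginScope(new Dictionary<string, object> { ["CorrelationId"] = correlationId }))
    {
        await next();
    }
});

app.MapGet("/health", () => "ok");
app.Run();
```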

The IoT integration went smoothly and quite fast. In fact, it took us more time to physically get to the devices than to build the PoC integration.

We recommend one central container registry instead of multiple registries on different clouds (we had Azure and IBM Cloud available). This way, you avoid duplicating Docker images and generating unnecessary storage costs.

Migration to Service Mesh

At some point, a strategic decision was made that we needed to migrate towards a service mesh, and Istio was selected as the implementation.

We found out that a distributed system built on asynchronous communication and queues between services can be adapted quite easily to the request/reply-style communication needed for a service mesh. This change obviously impacted the availability and performance characteristics discussed earlier. In a situation like this, it’s good to confirm that such a drastic change in quality attributes is acceptable and reasonable.
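
A hedged illustration of why the adaptation can be mostly mechanical, reusing the hypothetical IEventPublisher abstraction sketched earlier: the queue-backed implementation is swapped for one that calls the consuming service synchronously over HTTP, with the mesh handling the routing between services. The endpoint convention below is an assumption, not the project’s actual contract.

```csharp
using System.Net.Http;
using System.Net.Http.Json;
using System.Threading.Tasks;

public interface IEventPublisher
{
    Task PublishAsync<TEvent>(TEvent @event) where TEvent : class;
}

// Request/reply variant: the "publish" becomes a direct HTTP call to the consumer.
// Availability now depends on the callee being up at request time, which is part
// of the quality-attribute trade-off mentioned above.
public class HttpEventPublisher : IEventPublisher
{
    private readonly HttpClient _client; // BaseAddress assumed to point at the consuming service

    public HttpEventPublisher(HttpClient client) => _client = client;

    public async Task PublishAsync<TEvent>(TEvent @event) where TEvent : class
    {
        // Hypothetical convention: the consumer exposes POST /events/{EventName}.
        var response = await _client.PostAsJsonAsync($"/events/{typeof(TEvent).Name}", @event);
        response.EnsureSuccessStatusCode();
    }
}
```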

Summary

It should be obvious that it’s a good idea to avoid distributed systems whenever possible. No matter how difficult it is to discuss this with a client, we recommend trying to dissuade them from building one unnecessarily. Microservices, as an example of such systems, are quite likely to become a source of problems you don’t want to be solving when you’re trying to launch a product to the market.

Even if you’re thinking about splitting the system in the future, you should consider a modular monolith first. This way, you avoid the overhead and unnecessary issues. It’s especially important at the beginning, when you’re still exploring the domain and the risk of mistakes is high. This is perhaps the greatest lesson we learned from this microservices project.
