Bring your Monolith to Microservices through Service-Based Architecture

Gayan Fonseka
6 min read · Feb 9, 2020

Recently I had to answer a few questions about migrating from a slow, hard-to-maintain monolith to a microservice-based architecture. That discussion could have been more productive had I seen the application and its codebase/database beforehand, but I thought I could share some general guidelines for anybody planning to make that move.

Understanding the Outcome

To give a brief overview of service-based architecture: the diagram looks very similar to a microservices-based architecture, except that client requests come into the user-interface layer instead of the API layer. Other than that, we have separately deployable, coarse-grained services and a database shared by all services.

Service-Based Architecture

Why would this be a good approach for your journey from a monolith to a microservices-based setup, and how can we do it?

At a high level, this is a good and practical approach because we'll take a hybrid of microservices by varying three things. We'll vary the service granularity (yes, half a dozen to a dozen services instead of hundreds), we'll vary the database scope, and, last but not least, we'll keep the deployment pipeline at a level that is manageable without complete automation. If you take this approach, there is a good chance you'll sort out most of your concerns even before you go full-blown microservices. I have followed this practice, it worked out well, and we stopped short of full-blown microservices. As an aside, you may even try this for a new business application; it has many advantages, such as not having to deal with sagas.

Talking about service granularity: microservices means hundreds or more fine-grained, single-purpose services that each do one thing really well. What is the easier and more practical approach: breaking your application into several hundred fine-grained services, or breaking it into several larger portions? I think you'll agree that finding seams (see Working Effectively with Legacy Code by Michael Feathers) along the major sections of your application and splitting those into separately deployed units is the practical route. Case studies show that even when a team is familiar with the business domain, it is not easy to get all the bounded contexts right in one go, and on top of that you have the challenge of technology adoption for that team. Even if the team is very capable and hands-on with technologies that change every week, the domain will still be a challenge, and it can be addressed well with this macroservice approach, also known as mesoservices ("meso" is Greek for middle, between micro and monolith). I won't go into full detail on benefits and trade-offs, but some highlights: overall performance improves because there is only one latency hop to reach a service, and business-function changes become easier since you only have to deal with a service or two instead of hundreds. On the trade-off side, deploying a service takes longer to coordinate, since it is not one isolated bounded context but a service with many functionalities, and for the same reason a lot of varied testing needs to happen.
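To make the "finding seams" idea concrete, here is a minimal sketch (the BillingService name, methods, and return strings are hypothetical) of putting one major section of a monolith behind an interface, so it can later be served by a separately deployed coarse-grained service without changing its callers:

```python
from abc import ABC, abstractmethod

# A seam: billing logic is put behind an interface so callers no longer
# depend on the concrete implementation.
class BillingService(ABC):
    @abstractmethod
    def charge(self, customer_id: str, amount_cents: int) -> str: ...

# The existing in-process code, now one implementation of the seam.
class InProcessBilling(BillingService):
    def charge(self, customer_id, amount_cents):
        return f"charged {customer_id} {amount_cents} in-process"

# Later, the same seam can be satisfied by a remote coarse-grained
# service (shown here as a stub rather than a real HTTP client).
class RemoteBilling(BillingService):
    def __init__(self, base_url: str):
        self.base_url = base_url

    def charge(self, customer_id, amount_cents):
        return f"POST {self.base_url}/charges for {customer_id}"

def checkout(billing: BillingService, customer_id: str) -> str:
    # Callers depend only on the seam, so swapping implementations
    # requires no change here.
    return billing.charge(customer_id, 1999)
```

Once the seam exists, extracting the section into its own deployable unit is a matter of swapping the implementation behind it.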

Now let's look at the database scope. We can't keep a single database underneath microservices, because one breaking change can affect a few hundred services and take the entire system down. You might wonder whether the same thing happens with macroservices; yes, it can, but it is much easier to coordinate and test changes across a dozen services than across a few hundred or thousand. With the same setup, you can start identifying bounded contexts, create a new database instance or a new schema, do a simple refactoring such as moving a table, and gradually carve out bounded contexts within these macroservices so that each service ends up with its own data. Again, performance and feasibility are the keys to this approach on the microservices journey. Performance is clear, but have you thought about the feasibility this approach brings? How practical would it be to immediately decouple the database and tear it apart into hundreds or thousands of schemas? I hope that benefit is clear. By now we may have an architecture as follows,

Revised Service-Based architecture
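The database scoping described above can be sketched in a few lines. This is a rough illustration, assuming a PostgreSQL-style database where `search_path` confines a connection to a schema; the service names, URL, and schema names are all hypothetical:

```python
# The single shared database instance, still used by every service.
SHARED_DB = "postgres://db-host:5432/appdb"

# Each coarse-grained service is pinned to its own schema inside that
# shared database, preparing the ground for a later physical split.
SERVICE_SCHEMAS = {
    "orders": "orders_schema",
    "billing": "billing_schema",
    "catalog": "catalog_schema",
}

def connection_settings(service: str) -> dict:
    """Return connection settings that confine a service to its schema."""
    schema = SERVICE_SCHEMAS[service]
    return {
        "url": SHARED_DB,
        # With search_path set, a table moved into this schema is
        # effectively owned by this service alone.
        "options": f"-c search_path={schema}",
    }
```

Moving one table at a time into a service's schema is the "simple refactoring" mentioned above: small, reversible steps instead of a big-bang database split.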

Looking at the third aspect, the deployment pipeline, we see major benefits, especially if this is your first project on the road to microservices. Microservices require operational automation to manage hundreds to thousands of services simultaneously, which means full-blown DevOps. That is an organizational change for which you might not be ready. But with mesoservices, you can start with the basic automation you already practice (using the same tools), which is a great advantage. If you aren't automating yet, this is a good starting point to try Ansible or something similar.

Over time you can bring your system to something similar to the one shown below. You can see that we've identified an area of the application to be fully converted into a microservices setup (on the left-hand side). We have also separated the single UI layer into multiple micro-frontends. By this time you will have a decent DevOps process in place to test, deploy, release, and monitor the system.

Revised Services Based Architecture

How to Achieve It

When I said "converting your monolith into microservices", I did not mention earlier that you would be doing this while the monolith was still up and running, in a phased approach; but that was the whole intention, not a from-scratch rewrite as mesoservices. So how can you achieve this?

Strangler Fig Application

Martin Fowler described this approach for rewriting an application, and we can use the same idea. In software terms, the pattern says that the new system grows around the existing one: old and new systems coexist, giving the new system time to grow and potentially replace the old system entirely. The benefit of this pattern is that it supports incremental migration and gives us the ability to pause or even stop, while still taking advantage of what the new system already delivers.

As you can see, we have extracted a selected set of functionalities as a service while the monolith keeps running. We then redirect the relevant calls to the new mesoservice instead of the monolith, and the monolith continues to handle the remaining functionality. The redirection can be done with a configurable HTTP proxy; NGINX, for example, supports a multitude of redirection mechanisms and performs well. You can also separate deployment from release with canary deployments and make features available as required.
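The routing decision that a proxy like NGINX makes can be sketched in code. This is a minimal sketch, not NGINX configuration; the path prefixes, upstream URLs, and canary fraction are hypothetical:

```python
import random

# Path prefixes already claimed by the new mesoservice.
MESOSERVICE_PREFIXES = ("/orders", "/billing")

MONOLITH = "http://monolith:8080"
MESOSERVICE = "http://orders-service:8080"

# Fraction of matching traffic sent to the new service: a canary
# release, separating deployment from release.
CANARY_FRACTION = 0.10

def route(path: str, rng=random.random) -> str:
    """Pick the upstream for a request, strangler-fig style."""
    if path.startswith(MESOSERVICE_PREFIXES) and rng() < CANARY_FRACTION:
        return MESOSERVICE
    # Everything else, including most canary-eligible traffic for now,
    # still goes to the monolith.
    return MONOLITH
```

Raising the canary fraction to 1.0 completes the cutover for those paths; dropping it to 0.0 is the "pause or stop" escape hatch the pattern promises.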

All this time we've been focusing on the server side and not much on migrating the frontend. UI composition is a pattern that can be applied here. I won't elaborate on the techniques, but micro frontends, page composition, and widget composition are some of those that can be used; Sam Newman explains them in detail in his latest book.

There are many other patterns and practices that can be used to successfully complete the migration journey. I'll try to write about those techniques as time permits. I hope this helped you understand the journey, and I wish you success in it.
