Using Continuous Integration and Continuous Deployment with Microservices

Faster, independent deployments are one of the key benefits microservices claim to offer, but how do you make them deliver on their promise?

What is Continuous Integration?

Microservices architectures offer a number of benefits over a traditional monolithic design. Constructing a system out of loosely coupled services enables teams to work independently and means the individual services can be deployed and scaled independently. As a result, the system is more resilient and robust, hardware is used more efficiently, and changes are delivered more quickly. So far so good, but how do you actually make that a reality? The answer lies – at least in part – in your process.

Before we get into the details, let’s step back and look at the bigger picture. The goal of any software development process is not (or at least shouldn’t be) the perfect adoption and execution of the process itself. The aim is usually to deliver value to users by meeting a need or solving a problem, ideally in a way that delights them so that they’ll choose your product over the competition’s. Many organizations, from small start-ups to huge multinationals, have taken an agile, iterative approach to developing software precisely because it helps them deliver value to users more quickly. Adopting a microservices architecture contributes to this goal by making it possible to deliver changes more quickly, which in turn means you can collect feedback, see what works or doesn’t, and tweak, adjust or even pivot in response. By using these short iterations effectively, you can deliver a valuable product to your users.

Continuous integration (CI), delivery (CD) and deployment (also CD) have been advocated for years as a method and mindset for speeding up the release of software. The approach isn’t specific to microservices – it’s also used to deliver monolithic systems – but it is an essential component in developing a microservices-based application. Without an automated CI/CD process, building and releasing services slows down: teams struggle to ensure the application works as a whole, and manual processes delay deployments, undermining one of the main benefits of a microservices architecture. Automated CI/CD needs to be combined with a testing strategy that includes both automated tests and monitoring in production, and it works best when organizations adopt a DevOps culture.

The Importance of DevOps

A DevOps culture means that rather than handing off work from the development team to operations, each team takes responsibility for the whole lifecycle of its service. Breaking down the silos between development and operations means that developers get a better understanding of the infrastructure and process involved in releasing their service, while operations better understand the functionality of the whole system. By approaching the release process as an engineering problem, infrastructure is managed as code and the process is optimized and automated. In adopting a DevOps culture, it’s important to avoid falling into the trap of creating a dedicated DevOps team to take care of managing deployment infrastructure. Doing that just creates another silo, and your organization will miss out on the full benefit.

Continuous Integration in Microservices

Continuous integration has its origins in XP (eXtreme Programming) and aims to minimize merge conflicts and speed up integration of code changes with a “little and often” approach. In a team practicing CI, all members commit their changes to trunk or the master branch on a dedicated repository on a regular basis (at least once a day). Each commit to master automatically triggers a build and a set of tests to be run.

Although moving to a CI process requires an up-front investment to set it up, that effort is well spent. By committing, building and testing often, your team will soon iron out the issues in the process, so that building and testing are no longer daunting tasks left to the end of a project, when they add an unpredictable amount of time before the product can be released.

Source control

Using some form of version control system is essential for a CI flow. If you’re using Git, you’ll need to nominate a central repository that developers will push their changes to for CI purposes. The changes are usually held in a dedicated staging area which triggers builds, with the actual commit to master only taking place once everything has passed successfully, to avoid others on the team pulling changes that turn out to break the build.

Master must always be shippable

A key element of continuous integration is adopting the mindset that trunk or the “central” master branch should always be ready to ship. In most cases, that doesn’t mean it will be deployed to production without further testing but that the builds should succeed (where applicable) and tests pass so that the code can be pushed through the deployment pipeline towards release at any time.

If the build breaks or the tests fail, the team’s number one priority should be to fix it. Being disciplined about this makes it much easier for problems to be addressed before they take hold. Just like bugs in code, it’s easier to swat one or two when you see them than to leave it a while and deal with an infestation later.

Testing the build

There are various CI tools available that take care of triggering builds, running tests and providing feedback via alerts and dashboards. When configuring your CI system, you need to strike a balance between checking that everything still works after each commit and providing timely feedback to developers. If running your automated tests takes over an hour, then by the time the build breaks and the dashboard lights up with failing tests, the developer who made the changes will already have moved on to something else. To really feel the benefit, the system needs to deliver feedback in the time it takes to get a cup of coffee. That way, if something goes wrong you can jump in to fix it without delay. A fast turnaround time also helps incentivize the team to commit often. A common approach is to run the full test suite in a nightly job and apply a more limited (and ideally targeted) set of tests after each commit.
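The trade-off between coverage and feedback time can be implemented as a simple test selector at the commit stage. The sketch below is a minimal, hypothetical example – the file-to-test mapping and module names are invented – with the nightly job still responsible for running everything:

```python
# Hypothetical commit-stage test selector: run only the tests related to
# the files changed in a commit; defer the full suite to a nightly job.
CHANGED_TO_TESTS = {
    "billing/invoice.py": ["tests/test_invoice.py"],
    "billing/tax.py": ["tests/test_tax.py", "tests/test_invoice.py"],
}

def select_tests(changed_files):
    """Return the de-duplicated set of test modules to run for a commit."""
    selected = set()
    for path in changed_files:
        selected.update(CHANGED_TO_TESTS.get(path, []))
    # If a change touches a file we cannot map to tests, play it safe
    # and fall back to the full suite.
    if any(p not in CHANGED_TO_TESTS for p in changed_files):
        return {"tests/"}
    return selected
```

The fallback matters: a selector that silently skips unmapped changes would let untested code reach master.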

When designing your automated test coverage, keep in mind the test pyramid as a way of distributing your tests. The further down the pyramid you can push your tests, the earlier you can get feedback, which makes it easier to address any bugs. You can use test doubles and contract tests to test functionality that depends on other parts of the system. As you move up the pyramid, you’ll need to run tests that involve multiple services. This is where a deployment pipeline becomes invaluable and we’ll discuss this in more detail below.
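As a sketch of pushing a test down the pyramid, here is a unit test that uses a test double (via Python’s unittest.mock) in place of a dependency on another service; the service and function names are illustrative, not from a real codebase:

```python
# A test double stands in for another service, so this unit test runs
# fast and in isolation, with no network call to the real pricing service.
from unittest.mock import Mock

def checkout_total(cart, pricing_service):
    """Total the cart using prices fetched from a pricing service."""
    return sum(pricing_service.price_of(item) for item in cart)

def test_checkout_total():
    # The double returns canned prices instead of calling the real service.
    pricing = Mock()
    pricing.price_of.side_effect = lambda item: {"apple": 30, "pear": 50}[item]
    assert checkout_total(["apple", "pear"], pricing) == 80
```

A contract test would then verify, separately, that the real pricing service actually honors the interface the double assumes.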

Feature flags

Although the focus of CI is on committing to trunk or pushing to a “central” master, that doesn’t mean developers should not work on branches. The emphasis is on avoiding long-lived branches, where code remains potentially unseen for days or weeks and which can result in “merge hell” as developers try to unpick dependencies. However, this does raise a potential problem: do you really want to be merging code to trunk for a feature that’s not ready for release? Probably not. The solution is to use feature flags to control visibility of incomplete functionality or manage delivery of features with a time-sensitive release date. Feature flags (aka feature toggles) should be managed via a configuration file and removed once the feature is released to production to avoid a buildup of technical debt.
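A minimal sketch of such a flag mechanism, assuming flags live in a JSON configuration file and default to off for unknown names:

```python
# Minimal feature-flag lookup backed by a configuration file (JSON here),
# so incomplete work merged to trunk stays invisible until the flag flips.
import json

def load_flags(text):
    """Parse the flag configuration (here, a JSON document)."""
    return json.loads(text)

def is_enabled(flags, name):
    """Unknown flags default to off, so forgetting one fails safe."""
    return bool(flags.get(name, False))

# Example configuration, as it might appear in a config file.
config = '{"new_checkout": false, "dark_mode": true}'
flags = load_flags(config)

if is_enabled(flags, "new_checkout"):
    pass  # route to the new, in-progress checkout flow
```

Defaulting to off is the key design choice: merging unfinished work can never accidentally expose it.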

One repo to rule them all?

Although continuous integration has been a recognized best practice for some time, in a microservices context, it raises an interesting question. Should each microservice have its own dedicated repository, or should all microservices within an application be contained in a single, monolithic repository? There isn’t a clear winner on this. While separate repos help to enforce a loosely coupled system and make ownership of individual services clear, they can also make it harder to share code and enforce standards, while making it much more difficult for any one person to understand how the whole system fits together. On the other hand, a single repo makes standardization, reuse and discoverability easier but risks introducing more complexity and greater potential for merge conflicts.

Where the balance lies will depend on your organization. It may be that you start with separate repos when first adopting microservices in order to enforce a decoupled model and move to a single repo later if the need for standardization and reuse demands it.

Continuous Delivery in Microservices

While CI focuses on regular commits that trigger builds, continuous delivery is about automatically moving those builds through the deployment pipeline so that they are tested and ready for release to production. The number of steps in your pipeline depends on the level of testing you want to perform before changes are released to production. You may have multiple steps to cover integration tests, component tests, end-to-end tests, UAT, load testing and staging. In this case, the latter stages of the pipeline involve creating environments with the latest stable or released versions of the other microservices that make up the application. This can carry a considerable overhead. Planning your environments and drawing up release policies to determine whether and when to use such stages will help you design the right pipeline for your organization’s needs.

Moving up the pyramid

In a CD pipeline, the build only moves on to the next step if all the tests on the current stage have passed successfully. If a test fails, progress is stopped and the teams are notified. Once a fix is committed, the whole process starts again. The tests run at each step in the process should roughly align with your test pyramid, starting with the lowest level tests first and progressively moving up to the more time-consuming and expensive tests as confidence in the build increases. That said, it can be useful to include a couple of functional tests early on in the pipeline to act as a sanity check before too much time is invested in thorough testing.
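The stage-by-stage progression can be sketched as a small function that halts at the first failing check. The stage names and checks below are illustrative, not a real CD tool’s API:

```python
# Sketch of a deployment pipeline: a build advances stage by stage,
# stopping at the first failure so the team can be notified.
def run_pipeline(build, stages):
    """Run stages in order; return (passed_stages, failed_stage_or_None)."""
    passed = []
    for name, check in stages:
        if not check(build):
            return passed, name  # halt here and alert the team
        passed.append(name)
    return passed, None

# Cheap tests first, expensive tests last, mirroring the test pyramid.
stages = [
    ("unit", lambda b: b["unit_ok"]),
    ("integration", lambda b: b["integration_ok"]),
    ("end-to-end", lambda b: b["e2e_ok"]),
]
```

Ordering cheap checks first means a broken build fails fast, before expensive environments are spun up.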

Regardless of the length of the pipeline, it’s essential that the same build should move all the way through the pipeline. Using the same build artifact means you can rely on the tests that have been performed in earlier stages of the pipeline; creating a new build for each environment introduces variability and the potential for bugs that would have been caught earlier if the build had gone through those steps.

Configuration and Automation

In order to use the same build for every environment, any environment-specific variables need to be moved out of your codebase into configuration files. This includes configuration of feature flags, which you may want to enable in some test environments in order to verify the behavior of work in progress, but disable in staging so that the build behaves exactly as it will in production.
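One common pattern, sketched here with invented setting names, is to read values from environment variables with safe defaults, so the same build artifact picks up per-environment configuration, including feature flags, at runtime:

```python
# The same build artifact in every environment: nothing environment-specific
# is compiled in; settings come from environment variables at runtime.
import os

# Safe defaults for local development; real environments override them.
DEFAULTS = {"db_url": "sqlite:///local.db", "flag_new_checkout": "false"}

def setting(name, env=os.environ):
    """An environment variable wins; otherwise fall back to the default."""
    return env.get(name.upper(), DEFAULTS[name])
```

A staging deployment would then set, say, `DB_URL` in its environment, while the artifact itself is byte-for-byte identical to the one tested earlier in the pipeline.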

Using configuration files is a necessary ingredient to automating the progression of your builds through the pipeline. By removing any need for manual intervention, you also remove the possibility of mistakes creeping in, giving you a repeatable, reliable process that you can trust.

Scaling pipelines

One of the key benefits of microservices is the fact they can be deployed independently. That means the release of a new feature for one microservice won’t be delayed by a bug fix or update to another service, and you never have to queue up behind a lengthy release train as you would with a monolith. A monolith’s single pipeline per application is at least relatively straightforward to manage; when you move to a microservices architecture, the number of pipelines multiplies with the number of services. Given that it’s not unreasonable to have dozens or even hundreds of microservices for a single application, that’s a lot of pipelines!

In an organization with a DevOps culture where each team is responsible for both the development and the deployment of their microservice, the design and maintenance of the CD pipeline can be left to each team to manage. The downside of this approach is that it can result in considerable duplication of effort across teams and make it difficult to ensure all services go through the same quality checks. An alternative is to create and manage a standard pipeline that is used by all microservices in parallel, keeping the benefits of independent deployability while ensuring governance of the release process in a scalable way. The latter approach relies on containers.


In addition to independent scalability and deployability, microservices also give teams more independence, including the option to choose the language and framework best suited to the particular problems they’re trying to solve. However, when it comes to creating a single pipeline model for all services in a system, dealing with multiple services in different languages only adds to the complexity. Containers provide an elegant solution; they package up software and abstract away the complexity of their contents, much like containers on a cargo ship. By building a CD pipeline that handles containers you avoid the need to engage with the details of what is inside each one. Every container looks the same, so you can apply the same pipeline to each of them and ensure a consistent level of testing is applied based on your organization’s release policies.
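The idea of one uniform pipeline over containers can be sketched as a function that produces the same ordered steps for any service image, regardless of the language inside; the image names and steps below are hypothetical:

```python
# One standard pipeline applied uniformly to container images: because
# every service is packaged the same way, the pipeline never needs to
# know whether the code inside is Java, Go or Python.
def standard_pipeline(image):
    """Return the identical, ordered steps applied to any service image."""
    return [
        f"scan {image}",
        f"deploy {image} to test",
        f"run smoke tests against {image}",
        f"promote {image}",
    ]

# Teams ship whatever language they like; the pipeline stays the same.
services = ["orders:1.4.2", "payments:2.0.1", "search:0.9.7"]
plans = {svc: standard_pipeline(svc) for svc in services}
```

Because the steps are generated rather than hand-written per service, release-policy changes (say, adding a security scan) apply to every service at once.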

Continuous Deployment in Microservices

With continuous delivery, the builds move automatically through the pipeline up to the penultimate stage, but the final push to production remains manual. Continuous deployment takes this one stage further and pushes code to production on the strength of it having passed all the previous stages, with no manual intervention at all. Not all organizations aim to reach this final stage; they may have good reasons for wanting to keep the decision on whether to push code live a manual one. On the other hand, if you’ve created a reliable, fully automated pipeline that your organization has confidence in, continuous deployment may be the logical next step.

Continuous Improvement in Microservices

Even with sophisticated CD pipelines and many layers of testing, it’s unlikely that nothing will ever go wrong in production. For the complex systems that microservices are well suited to, it’s impossible to test all possible combinations of circumstances that could arise in a live system. By having a reliable and automated process for delivering changes to production, you can react quickly when something breaks.

In order to address issues in production quickly, you need to proactively monitor your system for issues, including metrics that signal an imminent failure. Implementing distributed tracing facilitates tracking down the specific cause of a problem so you can get a fix out more quickly. If you’re running tests in production, such as canary releases or chaos engineering, some form of monitoring is also essential for observing the experiment and reacting to the results.

With a monolithic architecture, if something goes wrong in production, the usual response is to roll the changes back. While this is also an option with microservices, having a fast, dependable deployment process means it’s also possible to get a fix out quickly and roll forward. This option is particularly attractive if the last release involved database schema changes, making roll-back a more complex process.

Delivering Value

The benefits of a fast, automated continuous integration and deployment process for microservices are not limited to reacting to failures and fixing bugs quickly. Getting working software in front of users makes it much easier to get their feedback and observe how they behave in the real world. By being able to deploy services independently, teams can deliver value to users by making regular, incremental improvements based on real usage.

The premise behind continuous integration and regular releases is known as “shift left:” moving difficult, painful, time-consuming tasks to an earlier stage in your workflow makes you confront them sooner and more often, which means you’ll get better at them. By adopting a DevOps culture where teams are responsible for delivery as well as development, you can bring the benefits of agile development practices to integration, testing and deployment, and optimize your process in order to deliver a better product.