December 17, 2019

Canary Deployment in 5 Minutes with Service Mesh

Welcome to our second hands-on Kuma guide! The first one walked you through securing your application with mTLS using Kuma. This guide will walk you through Kuma’s new L4 traffic routing rules, which allow you to easily implement blue/green deployments and canary releases. In short, Kuma now alleviates the stress of deploying new versions and features into your service mesh. Let’s take a look at how to achieve this in our sample application:

Start Kubernetes and Marketplace Application

To start, you need a Kubernetes cluster with at least 4GB of memory. We’ve tested Kuma on Kubernetes v1.13.0 – v1.16.x, so use anything older than v1.13.0 with caution. In this tutorial, we’ll be using v1.15.4 on minikube, but feel free to run this in a cluster of your choice.

When running on Kubernetes, Kuma stores all of its state and configuration in the underlying Kubernetes API server, and therefore requires no external dependency to store the data.

With your Kubernetes cluster up and running, we can spin up a demo application built for Kuma. Deploy the marketplace application by running:
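A command along these lines does it (the manifest URL below is illustrative — use the one published in the official Kuma demo repository):

```shell
# Deploy the marketplace demo into the kuma-demo namespace.
# (Manifest URL is an assumption -- copy the real one from the Kuma demo repo.)
kubectl apply -f https://raw.githubusercontent.com/kumahq/kuma-demo/master/kubernetes/kuma-demo-aio.yaml
```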

This will deploy our demo marketplace application split across four pods. The first pod is an Elasticsearch service that stores all the items in our marketplace. The second pod is the Vue front-end application that will give us a visual page to interact with. The third pod is our Node API server, which is in charge of interacting with the two databases. Lastly, we have the Redis service that stores reviews for each item. Let’s check that the pods are up and running by checking the kuma-demo namespace:
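For example:

```shell
# All four pods (Elasticsearch, frontend, backend API, Redis) should be Running.
kubectl get pods -n kuma-demo
```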

With the application running, port-forward the sample application to access the front-end UI at http://localhost:8080:
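A port-forward along these lines does it (the service name is an assumption — confirm it with `kubectl get svc -n kuma-demo`):

```shell
# Forward local port 8080 to the frontend service.
kubectl port-forward service/frontend -n kuma-demo 8080:8080
```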

Now that you can visualize the application, play around with it! This is what you just created:

The only difference is that this diagram includes the v1 and v2 deployments of our back-end API. If you inspect the pods in the kuma-demo namespace again, you will only find a lonely v0, but don’t worry, I included the deployments for v1 and v2 for you. Before we scale those deployments, let’s add Kuma.

Download Kuma

To start, we need to download the latest version of Kuma. You can find installation procedures for different platforms in our official documentation. This guide was created on macOS, so it uses the Darwin binary:
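Something along these lines (the version number and download URL are illustrative — copy the exact command for the current release from the installation page):

```shell
# Download the Kuma distribution for macOS (Darwin).
# Version and URL shown here are assumptions -- use the documented ones.
curl -L -O https://kong.bintray.com/kuma/kuma-0.3.1-darwin-amd64.tar.gz
```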

Next, let’s unbundle the files to get the following components:
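For example (archive name depends on the version you downloaded):

```shell
# Extract the archive; the Kuma binaries land under ./bin.
tar xvzf kuma-0.3.1-darwin-amd64.tar.gz
```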

Lastly, go into the ./bin directory where the Kuma components will be:
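The extracted directory name follows the version, so something like:

```shell
# The bin directory contains kumactl, kuma-cp, kuma-dp, envoy, and friends.
cd kuma-0.3.1/bin && ls
```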

Install Kuma

With Kuma downloaded, let’s utilize kumactl to install Kuma on our cluster. The kumactl executable is a very important component in your journey with Kuma, so be sure to read more about it here. Run the following command to install Kuma onto our Kubernetes cluster:
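The one-liner pipes the manifests rendered by kumactl straight into kubectl:

```shell
# Render the control-plane resources and apply them to the cluster.
kumactl install control-plane | kubectl apply -f -
```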

When deploying on Kubernetes, you change Kuma’s state by leveraging Kuma’s CRDs. Therefore, we will use kubectl for the remainder of the demo. To start, let’s check that the pods are up and running within the kuma-system namespace:
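```shell
# The control plane (and the injector) run in the kuma-system namespace.
kubectl get pods -n kuma-system
```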

When running on Kubernetes, no external dependencies are required, since Kuma leverages the underlying Kubernetes API server to store its configuration. However, as you can see above, a kuma-injector service also starts in order to automatically inject sidecar data plane proxies without human intervention. Data plane proxies are injected into namespaces that include the following label:
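For example, a namespace labeled like this gets automatic injection (a minimal sketch, using the label Kuma’s injector watches for):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: kuma-demo
  labels:
    kuma.io/sidecar-injection: enabled
```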

Now that our control plane and injector are running, let’s delete the existing kuma-demo pods so they restart. This gives the injector a chance to inject a sidecar proxy into each pod.
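```shell
# Delete every pod in kuma-demo; their Deployments immediately recreate them,
# and the injector adds an Envoy sidecar to each new pod.
kubectl delete pods --all -n kuma-demo
```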

Check that the pods are up and running again with an additional container. The additional container is the Envoy sidecar proxy that Kuma is injecting into each pod.
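```shell
# READY should now show 2/2 for each pod: the app container plus the sidecar.
kubectl get pods -n kuma-demo
```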

Now if we port-forward our marketplace application again, I challenge you to spot the difference.

A-ha! Couldn’t find a thing, right? Well, that is because Kuma doesn’t require a change to your application’s code in order to be used. The only change is that Envoy now handles all the traffic between the services. Kuma implements a pragmatic approach that is very different from first-generation control planes:

  • It runs with low operational overhead across the entire organization
  • It supports every platform
  • It’s easy to use while relying on a solid networking foundation delivered by Envoy – and we see it in action right here!

Canary Deployment

With the mesh up and running, let’s start expanding our application with brand new features. Our current marketplace application has no sales. With the holiday season upon us, the engineering team worked hard to develop v1 and v2 versions of the Kuma marketplace to support flash sales. The backend-v1 service will always have one item on sale, and the backend-v2 service will always have two items on sale. So to start, scale up the deployments of v1 and v2 like so:
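(The deployment name is an assumption — confirm it with `kubectl get deployments -n kuma-demo`.)

```shell
kubectl scale deployment kuma-demo-backend-v1 -n kuma-demo --replicas=1
```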

and
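For v2 (deployment name again an assumption):

```shell
kubectl scale deployment kuma-demo-backend-v2 -n kuma-demo --replicas=1
```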

Now if we check our pods again, you will see three backend services:
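```shell
# The v0, v1, and v2 backend pods should all be 2/2 Running.
kubectl get pods -n kuma-demo
```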

With the new versions up and running, we can use the new TrafficRoute policy to slowly roll out the flash-sale capability to users. This is also known as a canary deployment: a pattern for rolling out new releases to a subset of users or servers. By deploying the change to a small subset of users, we can test its stability and make sure we don’t go broke by introducing too many sales at once.

First, define the following alias:
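A sketch of such an alias — the pod label, backend service name and port, and the specialOffer field are all assumptions about the demo app, and it assumes curl and jq are available in the frontend image:

```shell
# Send 100 requests from the frontend pod to the backend API, count the
# special offers in each response, then tally how often each count occurred.
alias benchmark='kubectl exec -n kuma-demo \
  "$(kubectl get pods -n kuma-demo -l app=kuma-demo-frontend \
     -o jsonpath={.items[0].metadata.name})" -- sh -c \
  "for i in \$(seq 1 100); do \
     curl -s http://backend:3001/items | \
       jq \"[.[] | select(.specialOffer == true)] | length\"; \
   done | sort | uniq -c"'
```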

This alias will help send 100 requests from frontend-app to backend-api and count the number of special offers in each response. It will then group the requests by the number of special offers. Here is an example of the output before we configure any traffic routing:
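Illustrative output (the exact counts will vary from run to run); each line is a tally followed by the number of special offers seen in those responses:

```
  34 0
  33 1
  33 2
```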

The traffic is equally distributed because we have not set any traffic-routing policy. Let’s change that! Here is what we need to achieve:

We can achieve that with the following policy:
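A sketch of such a TrafficRoute — the service names, tags, and exact schema are assumptions for your Kuma version, and the weights shown send roughly 80% of traffic to v0, 20% to v1, and none yet to v2:

```yaml
apiVersion: kuma.io/v1alpha1
kind: TrafficRoute
mesh: default
metadata:
  name: frontend-to-backend
  namespace: kuma-demo
spec:
  sources:
    - match:
        service: frontend.kuma-demo.svc:8080
  destinations:
    - match:
        service: backend.kuma-demo.svc:3001
  conf:
    # Weighted split across the backend versions.
    - weight: 80
      destination:
        service: backend.kuma-demo.svc:3001
        version: v0
    - weight: 20
      destination:
        service: backend.kuma-demo.svc:3001
        version: v1
    - weight: 0
      destination:
        service: backend.kuma-demo.svc:3001
        version: v2
```

Apply it with `kubectl apply -f` as you would any other Kubernetes resource.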

That is all that is necessary! With one simple policy and the weight you apply to each matching service, you can slowly roll out the v1 and v2 version of your application. Let’s run the benchmark alias one more time to see the TrafficRoute policy in action:
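Illustrative output (counts will vary): with v2 weighted at 0, no response contains two special offers:

```
  78 0
  22 1
```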

We do not see any results for two special offers because it is configured with a weight of 0. Once we’re comfortable with not going bankrupt with our rollout of v1, we can slowly apply weight to v2. You can also see the action live on the webpage. One last time, port-forward the application frontend like so:
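(As before, the service name is an assumption — confirm it with `kubectl get svc -n kuma-demo`.)

```shell
kubectl port-forward service/frontend -n kuma-demo 8080:8080
```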

Two out of roughly 10 requests to our webpage will have the sale feature enabled:

That’s all! This was a really quick run-through, so make sure you check out Kuma’s official webpage or repository to find out about more features. You can also join our Slack channel to chat with us live and meet the community! Lastly, sign up for the Kuma newsletter below to stay up-to-date as we push out more features that will make this the best service mesh solution for you.