Engineering
December 17, 2019
5 min read

Kubernetes Canary Deployment in 5 Minutes

Kevin Chen

Welcome to our second hands-on Kuma guide! The first one walked you through securing your application with mTLS using Kuma. Today, this guide will walk you through Kuma's new L4 traffic routing rules. These rules allow you to easily implement blue/green and Kubernetes canary deployments. In short, Kuma now takes the stress out of deploying new versions and features into your service mesh. Let's take a look at how to achieve this in our sample application:

[Diagram: Kubernetes canary deployment in the sample application]

Start Kubernetes and Marketplace Application

To start, you need a Kubernetes cluster with at least 4GB of memory. We’ve tested Kuma on Kubernetes v1.13.0 – v1.16.x, so use anything older than v1.13.0 with caution. In this tutorial, we’ll be using v1.15.4 on minikube, but feel free to run this in a cluster of your choice.

$ minikube start --cpus 2 --memory 4096 --kubernetes-version v1.15.4
😄  minikube v1.4.0 on Darwin 10.14.6
🔥  Creating virtualbox VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
🐳  Preparing Kubernetes v1.15.4 on Docker 18.09.9 ...
🚜  Pulling images ...
🚀  Launching Kubernetes ...
⌛  Waiting for: apiserver proxy etcd scheduler controller dns
🏄  Done! kubectl is now configured to use "minikube"

When running on Kubernetes, Kuma stores all of its state and configuration on the underlying Kubernetes API server, and therefore requires no external dependencies to store its data.

With your Kubernetes cluster up and running, we can throw up a demo application built for Kuma. Deploy the marketplace application by running:

$ kubectl apply -f http://bit.ly/kuma101
namespace/kuma-demo created
serviceaccount/elasticsearch created
service/elasticsearch created
replicationcontroller/es created
deployment.apps/redis-master created
service/redis created
service/backend created
deployment.apps/kuma-demo-backend-v0 created
deployment.apps/kuma-demo-backend-v1 created
deployment.apps/kuma-demo-backend-v2 created
configmap/demo-app-config created
service/frontend created
deployment.apps/kuma-demo-app created

This will deploy our demo marketplace application split across four pods. The first pod is an Elasticsearch service that stores all the items in our marketplace. The second pod is the Vue front-end application that will give us a visual page to interact with. The third pod is our Node API server, which is in charge of interacting with the two data stores. Lastly, we have the Redis service that stores reviews for each item. Let's check that the pods are up and running by inspecting the `kuma-demo` namespace:

$ kubectl get pods -n kuma-demo
NAME                                       READY    STATUS      RESTARTS      AGE
es-87mgm                                   1/1      Running        0          91s
kuma-demo-app-7f799bbfdf-7bk2x             2/2      Running        0          91s
kuma-demo-backend-v0-6548b88bf8-46z6n      1/1      Running        0          91s
redis-master-6d4cf995c5-d4kc6              1/1      Running        0          91s

With the application running, port-forward the sample application to access the front-end UI at http://localhost:8080:

$ kubectl port-forward ${KUMA_DEMO_APP_POD_NAME} -n kuma-demo 8080:80
Forwarding from 127.0.0.1:8080 -> 80
Forwarding from [::1]:8080 -> 80
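
The `${KUMA_DEMO_APP_POD_NAME}` variable should hold the name of the frontend pod. One way to set it, assuming the frontend pod carries the `app=kuma-demo-frontend` label (the same selector the benchmark alias uses later in this guide):

$ export KUMA_DEMO_APP_POD_NAME=$(kubectl get pods -n kuma-demo \
    -l app=kuma-demo-frontend -o=jsonpath='{.items[0].metadata.name}')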

Now that you can visualize the application, play around with it! This is what you just created:

[Diagram: the marketplace application, including the v1 and v2 deployments of the back-end API]

The only difference is that this diagram includes the v1 and v2 deployments of our back-end API. If you inspect the pods in the `kuma-demo` namespace again, you will only find a lonely v0, but don't worry, I included the deployments for v1 and v2 for you. Before we scale those deployments, let's add Kuma.

Download Kuma

To start, we need to download the latest version of Kuma. You can find installation procedures for different platforms in our official documentation. This guide was written on macOS, so it uses the Darwin build:

$ wget https://kong.bintray.com/kuma/kuma-0.3.0-darwin-amd64.tar.gz
--2019-12-09 11:25:49--  https://kong.bintray.com/kuma/kuma-0.3.0-darwin-amd64.tar.gz
Resolving kong.bintray.com (kong.bintray.com)... 54.149.67.138, 34.215.12.119
Connecting to kong.bintray.com (kong.bintray.com)|54.149.67.138|:443... connected.
HTTP request sent, awaiting response... 302
Location: https://akamai.bintray.com/3a/3afc187b8e3daa912648fcbe16f0aa9c2eb90b4b0df4f0a5d47d74ae426371b1?__gda__=exp=1575920269~hmac=0d7c9af597660ab1036b3d50bef98fc68dfa0b832e2005d25e1628ae92c6621e&response-content-disposition=attachment%3Bfilename%3D%22kuma-0.3.0-darwin-amd64.tar.gz%22&response-content-type=application%2Fgzip&requestInfo=U2FsdGVkX1-CBTtYNUxxbm2yT4muZ0ig1ICnD2XOqJI7BobZ4DB_RouzRRsn3NBrSFjF_IqjN9wzbGk28ZcFS_mD79NCyZ0V0XxawLL8UvY5D8h-QQdfKTeRUpLUqOKI&response-X-Checksum-Sha1=6df196169311c66a544eccfdd73931b6f3b83593&response-X-Checksum-Sha2=3afc187b8e3daa912648fcbe16f0aa9c2eb90b4b0df4f0a5d47d74ae426371b1 [following]
--2019-12-09 11:25:49--  https://akamai.bintray.com/3a/3afc187b8e3daa912648fcbe16f0aa9c2eb90b4b0df4f0a5d47d74ae426371b1?__gda__=exp=1575920269~hmac=0d7c9af597660ab1036b3d50bef98fc68dfa0b832e2005d25e1628ae92c6621e&response-content-disposition=attachment%3Bfilename%3D%22kuma-0.3.0-darwin-amd64.tar.gz%22&response-content-type=application%2Fgzip&requestInfo=U2FsdGVkX1-CBTtYNUxxbm2yT4muZ0ig1ICnD2XOqJI7BobZ4DB_RouzRRsn3NBrSFjF_IqjN9wzbGk28ZcFS_mD79NCyZ0V0XxawLL8UvY5D8h-QQdfKTeRUpLUqOKI&response-X-Checksum-Sha1=6df196169311c66a544eccfdd73931b6f3b83593&response-X-Checksum-Sha2=3afc187b8e3daa912648fcbe16f0aa9c2eb90b4b0df4f0a5d47d74ae426371b1
Resolving akamai.bintray.com (akamai.bintray.com)... 184.27.29.177
Connecting to akamai.bintray.com (akamai.bintray.com)|184.27.29.177|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 38017379 (36M) [application/gzip]
Saving to: ‘kuma-0.3.0-darwin-amd64.tar.gz’

kuma-0.3.0-darwin-amd64.tar.gz      100%[================================================================>]  36.26M  4.38MB/s    in 8.8s

2019-12-09 11:25:59 (4.13 MB/s) - ‘kuma-0.3.0-darwin-amd64.tar.gz’ saved [38017379/38017379]

Next, let's unbundle the files to get the following components:

$ tar xvzf kuma-0.3.0-darwin-amd64.tar.gz
x ./
x ./conf/
x ./conf/kuma-cp.conf
x ./bin/
x ./bin/kuma-tcp-echo
x ./bin/kuma-dp
x ./bin/kumactl
x ./bin/kuma-cp
x ./bin/envoy
x ./NOTICE
x ./README
x ./LICENSE

Lastly, go into the ./bin directory where the Kuma components are located:

$ cd bin && ls
envoy   kuma-cp   kuma-dp   kuma-tcp-echo kumactl
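
If you prefer to call the binaries without the `./` prefix, you could add this directory to your PATH (purely optional; the rest of this guide uses `./kumactl` directly):

$ export PATH=$(pwd):$PATH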

Install Kuma

With Kuma downloaded, let's utilize `kumactl` to install Kuma on our cluster. The `kumactl` executable is a very important component in your journey with Kuma, so be sure to read more about it in the official documentation. Run the following command to install Kuma onto our Kubernetes cluster:

$ ./kumactl install control-plane | kubectl apply -f -
namespace/kuma-system created
secret/kuma-admission-server-tls-cert created
secret/kuma-injector-tls-cert created
secret/kuma-sds-tls-cert created
configmap/kuma-control-plane-config created
configmap/kuma-injector-config created
serviceaccount/kuma-control-plane created
customresourcedefinition.apiextensions.k8s.io/dataplaneinsights.kuma.io created
customresourcedefinition.apiextensions.k8s.io/dataplanes.kuma.io created
customresourcedefinition.apiextensions.k8s.io/meshes.kuma.io created
customresourcedefinition.apiextensions.k8s.io/proxytemplates.kuma.io created
customresourcedefinition.apiextensions.k8s.io/trafficlogs.kuma.io created
customresourcedefinition.apiextensions.k8s.io/trafficpermissions.kuma.io created
customresourcedefinition.apiextensions.k8s.io/trafficroutes.kuma.io created
clusterrole.rbac.authorization.k8s.io/kuma:control-plane created
clusterrolebinding.rbac.authorization.k8s.io/kuma:control-plane created
role.rbac.authorization.k8s.io/kuma:control-plane created
rolebinding.rbac.authorization.k8s.io/kuma:control-plane created
service/kuma-injector created
service/kuma-control-plane created
deployment.apps/kuma-control-plane created
deployment.apps/kuma-injector created
mutatingwebhookconfiguration.admissionregistration.k8s.io/kuma-admission-mutating-webhook-configuration created
mutatingwebhookconfiguration.admissionregistration.k8s.io/kuma-injector-webhook-configuration created
validatingwebhookconfiguration.admissionregistration.k8s.io/kuma-validating-webhook-configuration created

When deploying on Kubernetes, you change Kuma's state by leveraging its CRDs. Therefore, we will use `kubectl` for the remainder of this demo. To start, let's check that the pods are up and running within the `kuma-system` namespace:

$ kubectl get pods -n kuma-system
NAME                                  READY   STATUS    RESTARTS   AGE
kuma-control-plane-7bcc56c869-lzw9t   1/1     Running   0          70s
kuma-injector-9c96cddc8-745r7         1/1     Running   0          70s

When running on Kubernetes, Kuma requires no external dependencies, since it leverages the underlying Kubernetes API server to store its configuration. However, as you can see above, a `kuma-injector` service also starts in order to automatically inject sidecar data plane proxies without human intervention. Data plane proxies are injected into namespaces that include the following label:

kuma.io/sidecar-injection: enabled
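
The demo namespace created earlier already carries this label, which is why the pod restart below picks up sidecars. If you were enabling injection on a namespace of your own, it would look something like this (`my-namespace` is a hypothetical example):

$ kubectl label namespace my-namespace kuma.io/sidecar-injection=enabled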

Now that our control plane and injector are running, let's delete the existing kuma-demo pods so they restart. This gives the injector a chance to deploy a sidecar proxy alongside each pod.

$ kubectl delete pods --all -n kuma-demo
pod "es-87mgm" deleted
pod "kuma-demo-app-7f799bbfdf-7bk2x" deleted
pod "kuma-demo-backend-v0-6548b88bf8-46z6n" deleted
pod "redis-master-6d4cf995c5-d4kc6" deleted

Check that the pods are up and running again with an additional container. The additional container is the Envoy sidecar proxy that Kuma is injecting into each pod.

$ kubectl get pods -n kuma-demo
NAME                                    READY    STATUS     RESTARTS    AGE
es-jxzfp                                2/2      Running    0           43s
kuma-demo-app-7f799bbfdf-p5gjq          3/3      Running    0           43s
kuma-demo-backend-v0-6548b88bf8-8sbzn   2/2      Running    0           43s
redis-master-6d4cf995c5-42hlc           2/2      Running    0           42s
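
To see exactly which container was added, you can list the container names in any of these pods (using the Elasticsearch pod from the output above as an example; your pod names will differ):

$ kubectl get pod es-jxzfp -n kuma-demo -o jsonpath='{.spec.containers[*].name}'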

Now if we port-forward our marketplace application again, I challenge you to spot the difference.

$ kubectl port-forward ${KUMA_DEMO_APP_POD_NAME} -n kuma-demo 8080:80
Forwarding from 127.0.0.1:8080 -> 80
Forwarding from [::1]:8080 -> 80

A-ha! Couldn't find a thing, right? Well, that is because Kuma doesn't require any changes to your application's code. The only difference is that Envoy now handles all the traffic between the services. Kuma implements a pragmatic approach that is very different from first-generation control planes:

  • It runs with low operational overhead across the entire organization
  • It supports every platform
  • It’s easy to use while relying on a solid networking foundation delivered by Envoy - and we see it in action right here!

Kubernetes Canary Deployment

With the mesh up and running, let's start expanding our application with brand-new features. Our current marketplace application has no sales. With the holiday season upon us, the engineering team worked hard to develop the v1 and v2 versions of the Kuma marketplace to support flash sales. The backend-v1 service will always have one item on sale, and the backend-v2 service will always have two items on sale. To start, scale up the v1 and v2 deployments like so:

$ kubectl scale deployment kuma-demo-backend-v1 -n kuma-demo --replicas=1
deployment.extensions/kuma-demo-backend-v1 scaled

and

$ kubectl scale deployment kuma-demo-backend-v2 -n kuma-demo --replicas=1
deployment.extensions/kuma-demo-backend-v2 scaled

Now if we check our pods again, we will see three backend services:

$ kubectl get pods -n kuma-demo
NAME                                       READY   STATUS      RESTARTS    AGE
es-jxzfp                                   2/2     Running      0          9m16s
kuma-demo-app-7f799bbfdf-p5gjq             3/3     Running      0          9m16s
kuma-demo-backend-v0-6548b88bf8-8sbzn      2/2     Running      0          9m16s
kuma-demo-backend-v1-894bcd4bc-p7xz8       2/2     Running      0          20s
kuma-demo-backend-v2-dffb4bffd-48z67       2/2     Running      0          11s
redis-master-6d4cf995c5-42hlc              2/2     Running      0          9m15s

With the new versions up and running, use the new `TrafficRoute` policy to slowly roll users over to our flash-sale capability. This is known as a canary deployment: a pattern for rolling out new releases to a subset of users or servers. By deploying the change to a small subset of users first, we can test its stability and make sure we don't go broke by introducing too many sales at once.

First, define the following alias:

$ alias benchmark='echo "NUM_REQ NUM_SPECIAL_OFFERS"; kubectl -n kuma-demo exec $( kubectl -n kuma-demo get pods -l app=kuma-demo-frontend -o=jsonpath="{.items[0].metadata.name}" ) -c kuma-fe -- sh -c '"'"'for i in `seq 1 100`; do curl -s http://backend:3001/items?q | jq -c ".[] | select(._source.specialOffer == true)" | wc -l ; done | sort | uniq -c | sort -k2n'"'"''
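
Unrolled with comments, the same pipeline reads as follows (a functionally equivalent sketch of the alias above):

$ FRONTEND=$(kubectl -n kuma-demo get pods -l app=kuma-demo-frontend \
    -o=jsonpath='{.items[0].metadata.name}')
$ kubectl -n kuma-demo exec $FRONTEND -c kuma-fe -- sh -c '
    # hit the backend 100 times from inside the frontend container
    for i in `seq 1 100`; do
      # count the items marked as special offers in each response
      curl -s http://backend:3001/items?q \
        | jq -c ".[] | select(._source.specialOffer == true)" | wc -l
    done | sort | uniq -c | sort -k2n'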

This alias sends 100 requests from `frontend-app` to `backend-api` and counts the number of special offers in each response, then groups the requests by the number of special offers. Here is an example of the output before we configure any traffic routing:

$ benchmark
NUM_REQ    NUM_SPECIAL_OFFERS
34                     0
33                     1
33                     2

The traffic is equally distributed because we have not set any traffic-routing policies. Let's change that! Here is what we need to achieve:

[Diagram: target traffic split - 80% of requests to v0, 20% to v1, 0% to v2]

We can achieve that with the following policy:

$ cat <<EOF | kubectl apply -f -
apiVersion: kuma.io/v1alpha1
kind: TrafficRoute
metadata:
  name: frontend-to-backend
  namespace: kuma-demo
mesh: default
spec:
  sources:
  - match:
      service: frontend.kuma-demo.svc:80
  destinations:
  - match:
      service: backend.kuma-demo.svc:3001
  conf:
  # it is NOT a percentage. just a positive weight
  - weight: 80
    destination:
      service: backend.kuma-demo.svc:3001
      version: v0
  # we're NOT checking if total of all weights is 100
  - weight: 20
    destination:
      service: backend.kuma-demo.svc:3001
      version: v1
  # 0 means no traffic will be sent there
  - weight: 0
    destination:
      service: backend.kuma-demo.svc:3001
      version: v2
EOF

trafficroute.kuma.io/frontend-to-backend created
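
Because `TrafficRoute` is registered as a CRD (see the `trafficroutes.kuma.io` entry in the install output earlier), you can confirm the policy was stored with plain kubectl:

$ kubectl get trafficroutes -n kuma-demo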

That is all that is necessary! With one simple policy and the weights you apply to each matching service, you can slowly roll out the v1 and v2 versions of your application. Let's run the benchmark alias one more time to see the `TrafficRoute` policy in action:

$ benchmark
NUM_REQ    NUM_SPECIAL_OFFERS
83                     0
17                     1

We do not see any results with two special offers because v2 is configured with a weight of 0. Once we're comfortable that the v1 rollout won't bankrupt us, we can slowly shift weight to v2. You can also see the change live on the webpage. One last time, port-forward the application frontend like so:

$ kubectl port-forward ${KUMA_DEMO_APP_POD_NAME} -n kuma-demo 8080:80
Forwarding from 127.0.0.1:8080 -> 80
Forwarding from [::1]:8080 -> 80

Two out of roughly 10 requests to our webpage will have the sale feature enabled.
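
When you're ready to promote v2, it's just a matter of re-applying the same policy with adjusted weights. A sketch (the 60/20/20 split is illustrative, not a recommendation):

$ cat <<EOF | kubectl apply -f -
apiVersion: kuma.io/v1alpha1
kind: TrafficRoute
metadata:
  name: frontend-to-backend
  namespace: kuma-demo
mesh: default
spec:
  sources:
  - match:
      service: frontend.kuma-demo.svc:80
  destinations:
  - match:
      service: backend.kuma-demo.svc:3001
  conf:
  # shrink v0's share as confidence in the new versions grows
  - weight: 60
    destination:
      service: backend.kuma-demo.svc:3001
      version: v0
  - weight: 20
    destination:
      service: backend.kuma-demo.svc:3001
      version: v1
  # v2 now receives live traffic for the first time
  - weight: 20
    destination:
      service: backend.kuma-demo.svc:3001
      version: v2
EOF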

That's all! This was a really quick run-through, so make sure you check out Kuma's official website or repository to learn about more features. You can also join our Slack channel to chat with us live and meet the community! Lastly, sign up for the Kuma newsletter below to stay up to date as we push out more features that will make this the best service mesh solution for you.
