Engineering
November 20, 2020
3 min read

Implement a Canary Release with Kong for Kubernetes and Consul

Kong

From the Kong API Gateway perspective, using Consul as its service discovery infrastructure is one of the most common integration use cases. With this powerful combination, more flexible and advanced routing policies can be implemented to address Canary Releases, A/B testing, Blue-Green deployments, etc., fully abstracted from the Gateway's standpoint and without having to deal with lookup procedures.

This article focuses on integrating Kong for Kubernetes (K4K8S), the Kong Ingress Controller based on the Kong API Gateway, with Consul Service Discovery running on a Kubernetes EKS cluster. Kong for Kubernetes can implement all sorts of policies to protect the Ingresses that expose Kubernetes services to external consumers, including Rate Limiting, API Keys, OAuth/OIDC grants, etc.

The following diagram describes the Kong for Kubernetes Ingress Controller and Consul Service Discovery implementing a Canary Release:

Consul and Kong for Kubernetes Installation Process

This section assumes you have a Kubernetes Cluster with both Consul and Kong for Kubernetes installed. This HashiCorp link can help you spin up a Consul Kubernetes deployment. Similarly, Kong provides the following link to install Kong for Kubernetes.

Consul Configuration Process

With Consul and Kong for Kubernetes deployed on your Kubernetes cluster, we're ready to start the 5-step configuration process:

  1. Configure Kubernetes DNS Service
  2. Deploy both Current and Canary application releases
  3. Register a Consul Service based on both application releases
  4. Create an External Kubernetes Service based on the Consul Service
  5. Register a Kong for Kubernetes Ingress for the External Service

Configure Kubernetes DNS

First of all, let's configure Kubernetes to resolve Consul service names through Consul's DNS interface. The configuration depends on the DNS provider used by your Kubernetes engine. Please refer to this link to see how to configure KubeDNS or CoreDNS.

Once configured, DNS requests in the form <consul-service-name>.service.consul will resolve to Consul Services. As an example, here are the configuration steps for CoreDNS:

Get the Consul DNS service's Cluster IP:

kubectl get service consul-consul-dns -n hashicorp -o jsonpath='{.spec.clusterIP}'
10.105.175.26

Edit the CoreDNS ConfigMap to include a forward definition that points to the Consul DNS's Kubernetes Service:

kubectl edit configmap coredns -n kube-system
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
data:
  Corefile: |
    .:53 {
        errors
        health {
           lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
           pods insecure
           fallthrough in-addr.arpa ip6.arpa
           ttl 30
        }
        prometheus :9153
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
    consul {
        errors
        cache 30
        forward . 10.105.175.26
    }
kind: ConfigMap
metadata:
  creationTimestamp: "2020-06-19T13:42:16Z"
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:data:
        .: {}
        f:Corefile: {}
    manager: kubeadm
    operation: Update
    time: "2020-06-19T13:42:16Z"
  name: coredns
  namespace: kube-system
  resourceVersion: "178"
  selfLink: /api/v1/namespaces/kube-system/configmaps/coredns
  uid: 698c5d0c-998e-4aa4-9857-67958eeee25a

Deploy both Current and Canary application releases

For the purposes of this article, we're going to create our Kubernetes Deployments using basic Docker images for both the Current and Canary releases, available on Docker Hub. Both images return the current datetime and differ only in the text they print. After the deployment, you should see two Kubernetes Services: benigno-v1 and benigno-v2.

The Current application release can be deployed using the following declaration:

cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: benigno-v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: benigno
      version: v1
  template:
    metadata:
      labels:
        app: benigno
        version: v1
    spec:
      containers:
      - name: benigno
        image: claudioacquaviva/benigno
        ports:
        - containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
  name: benigno-v1
  labels:
    app: benigno-v1
spec:
  type: ClusterIP
  ports:
  - port: 5000
    name: http
  selector:
    app: benigno
    version: v1
EOF

The Canary Release is deployed using the command below:

cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: benigno-v2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: benigno
      version: v2
  template:
    metadata:
      labels:
        app: benigno
        version: v2
    spec:
      containers:
      - name: benigno
        image: claudioacquaviva/benigno_rc
        ports:
        - containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
  name: benigno-v2
  labels:
    app: benigno-v2
spec:
  type: ClusterIP
  ports:
  - port: 5000
    name: http
  selector:
    app: benigno
    version: v2
EOF

Register a Consul Service based on both application releases

Now, we have to register a Consul Service based on the two Kubernetes Services we have deployed. The benigno1 Consul Service will have both Kubernetes Services' Cluster IPs configured with different weights, so any DNS request to it will return one of the IPs according to the weights defined.

In order to get the Kubernetes Services' Cluster IPs, run:

kubectl get service --all-namespaces
NAMESPACE   NAME           TYPE        CLUSTER-IP      EXTERNAL-IP  PORT(S)     AGE
default     benigno-v1     ClusterIP   10.100.225.125               5000/TCP    116s
default     benigno-v2     ClusterIP   10.100.148.236               5000/TCP    12s
…

Then create the two files described below using those Cluster IPs. Notice the weights: Consul DNS will return the Canary Release IP address for only 20% of the requests:

ben0.json:

{
  "ID": "ben0",
  "Name": "benigno1",
  "Tags": ["primary"],
  "Address": "10.100.225.125",
  "Port": 5000,
  "weights": {
    "passing": 80,
    "warning": 1
  },
  "proxy": {
    "local_service_port": 5000
  }
}

ben1.json:

{
  "ID": "ben1",
  "Name": "benigno1",
  "Tags": ["secondary"],
  "Address": "10.100.148.236",
  "Port": 5000,
  "weights": {
    "passing": 20,
    "warning": 1
  },
  "proxy": {
    "local_service_port": 5000
  }
}
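To make the weighting concrete, here is a minimal Python sketch (an illustration, not Consul's actual implementation) of how a weighted answer maps the passing weights 80 and 20 onto an 80/20 traffic split:

```python
import random

# Instances as (address, passing_weight), mirroring ben0.json and ben1.json.
INSTANCES = [
    ("10.100.225.125", 80),  # ben0: Current release
    ("10.100.148.236", 20),  # ben1: Canary release
]

def weighted_pick(instances, r=None):
    """Return one address, chosen in proportion to its passing weight."""
    total = sum(weight for _, weight in instances)
    if r is None:
        r = random.uniform(0, total)  # random point on the weight line
    upto = 0.0
    for address, weight in instances:
        upto += weight
        if r < upto:
            return address
    return instances[-1][0]
```

With these weights, roughly one in five lookups resolves to the Canary IP.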

Expose Consul using port-forward so we can send requests to it and get the Consul Service registered. On one local terminal, run:

kubectl port-forward service/consul-connect-consul-server -n hashicorp 8500:8500

Open another local terminal and send the requests using the files created before. We're using HTTPie to send the requests; feel free to use any other tool.

http put :8500/v1/agent/service/register < ben0.json
http put :8500/v1/agent/service/register < ben1.json
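The same registration can also be scripted. The Python sketch below builds the PUT request that Consul's agent service registration endpoint expects, assuming the localhost:8500 base URL from the port-forward above; the commented-out urlopen call is what would actually send it to a live agent:

```python
import json
import urllib.request

def build_register_request(payload, base="http://localhost:8500"):
    """Build the PUT request for Consul's /v1/agent/service/register endpoint."""
    return urllib.request.Request(
        f"{base}/v1/agent/service/register",
        data=json.dumps(payload).encode(),
        method="PUT",
        headers={"Content-Type": "application/json"},
    )

# Same payload as ben0.json above.
ben0 = {
    "ID": "ben0",
    "Name": "benigno1",
    "Tags": ["primary"],
    "Address": "10.100.225.125",
    "Port": 5000,
    "weights": {"passing": 80, "warning": 1},
    "proxy": {"local_service_port": 5000},
}
req = build_register_request(ben0)
# urllib.request.urlopen(req)  # sends the registration to the live agent
```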

Create an External Kubernetes Service based on the Consul Service

After registering the Consul Service, any DNS request to benigno1.service.consul will return one of the IPs according to the weight policy described. Now, we create an ExternalName Service to give the Consul Service a specific Kubernetes reference.

cat <<EOF | kubectl apply -f -
kind: Service
apiVersion: v1
metadata:
  name: benigno1
spec:
  ports:
  - protocol: TCP
    port: 5000
  type: ExternalName
  externalName: benigno1.service.consul
EOF

Register a Kong for Kubernetes Ingress for the External Service

Finally, we're going to expose the Canary Release through an Ingress managed by Kong for Kubernetes. Using the External Service created before, we abstract both application releases behind the Consul Service name benigno1.service.consul.

cat <<EOF | kubectl apply -f -
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: benignoroute
  annotations:
    kubernetes.io/ingress.class: kong
    konghq.com/strip-path: "true"
spec:
  rules:
  - http:
      paths:
        - path: /benignoroute
          backend:
            serviceName: benigno1
            servicePort: 5000
EOF

You can test the Ingress by sending a request like this:

$ http <K4K8S-EXTERNALIP>:<K4K8S-PORT>/benignoroute
HTTP/1.1 200 OK
Connection: keep-alive
Content-Length: 36
Content-Type: text/html; charset=utf-8
Date: Wed, 16 Sep 2020 20:37:22 GMT
Server: Werkzeug/1.0.1 Python/3.8.3
Via: kong/2.1.3
X-Kong-Proxy-Latency: 4
X-Kong-Upstream-Latency: 2

Hello World, Benigno, Canary Release

Start a loop to see the Canary Release in action:

while [ 1 ]; do curl http://<K4K8S-EXTERNALIP>:<K4K8S-PORT>/benignoroute; sleep 1; echo; done
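To check that the split is in fact roughly 80/20, you can tally which release answers each request. In the sketch below the fetch function is injected, so the counting logic runs without a live cluster; the commented lines show what fetch would look like against the real Ingress (the <K4K8S-EXTERNALIP> placeholder is your own Kong proxy address):

```python
from collections import Counter

def tally_releases(fetch, n=100):
    """Send n requests via `fetch` and count Current vs. Canary responses.

    The Canary image's response body contains the text 'Canary Release',
    which is what distinguishes the two releases.
    """
    counts = Counter()
    for _ in range(n):
        body = fetch()
        counts["canary" if "Canary" in body else "current"] += 1
    return counts

# Against a live cluster, fetch would be something like:
# import urllib.request
# fetch = lambda: urllib.request.urlopen(
#     "http://<K4K8S-EXTERNALIP>:<K4K8S-PORT>/benignoroute").read().decode()
```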

Kong for Kubernetes provides CRDs not just to define Ingresses but also to apply typical policies at the Ingress Controller layer. Feel free to experiment with further policies such as caching, log processing, OIDC-based authentication, GraphQL integration, and more, using the extensive list of plugins provided by Kong.
