Engineering
April 29, 2021
5 min read

Deploying With Confidence Using Kong Gateway and Spinnaker

Ashwin Sadeep

Change is the primary cause of service reliability issues for agile engineering teams. In this post, I’ll cover how you can use canary deployments with Kong Gateway and Spinnaker to limit the impact of a buggy change that makes it past your quality gates.

"What is the primary cause of service reliability issues that we see in Azure, other than small but common hardware failures? Change."

~Mark Russinovich, CTO of Azure, on Advancing safe deployment practices.

Canary Deployment

Canary deployment is a technique by which a new version of your software is visible only to a small subset of users—a canary cohort. The new version is deployed alongside the current stable version, and traffic is split across the two deployments. During a deployment, the key performance indicators—a combination of business and engineering metrics—are monitored on both the canary and stable deployments.

If the metrics line up and the deployment looks safe, proceed with the rollout. If you see any anomalies between canary and stable, roll back the canary traffic so that the service can recover.

Kong Gateway at Razorpay

At Razorpay, we use Kong as our API gateway. We leverage its upstream construct to split traffic across the canary and stable deployments of upstream services. While you could compare metrics across just the canary and stable deployments, it's better to run a third deployment, baseline, with the current stable code deployed.

When we deploy the v2 version of an upstream service, the pipeline deploys v2 to canary and v1 to baseline. The short-lived baseline deployment ensures that your metrics are free of any effects caused by a long-running process, for example a memory leak that skews memory usage metrics.

We configure each service in the system with the corresponding service and upstream entities in Kong Gateway. Each upstream entity will, in turn, have three targets—canary, baseline and stable. That’s where traffic will ultimately get routed. Since we deploy our applications on Kubernetes, the targets are usually the corresponding service endpoints responsible for load balancing within the deployment.
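
As a concrete illustration, here's roughly what that setup looks like against the Kong Admin API. This is a minimal sketch using Python and requests; the upstream name, Kubernetes service endpoints, and weights are hypothetical, and the Admin API is assumed to listen on localhost:8001.

import requests

ADMIN_API = "http://localhost:8001"  # assumed Admin API address

# Create (or update) the upstream entity for the service.
requests.put(f"{ADMIN_API}/upstreams/payments-upstream").raise_for_status()

# Register the three targets. Each target's traffic share is its weight
# divided by the sum of all weights: here stable gets 95%, canary and
# baseline 2.5% each.
for endpoint, weight in [
    ("payments-stable.default.svc:80", 950),
    ("payments-canary.default.svc:80", 25),
    ("payments-baseline.default.svc:80", 25),
]:
    requests.post(
        f"{ADMIN_API}/upstreams/payments-upstream/targets",
        json={"target": endpoint, "weight": weight},
    ).raise_for_status()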

Overview of the deployment and API gateway architecture

Automated Canary Analysis With Spinnaker

We use Netflix’s Spinnaker—an open source continuous delivery platform—along with Kayenta, its automated canary analysis component, to manage our deployment pipelines. The Spinnaker pipeline, together with the Kayenta configuration, defines how the application is deployed and what to check during canary analysis.

For example, the pipeline can deploy the canary stage, and the Kayenta configuration can define error-rate and latency metrics to analyse. If the analysis fails, we can configure the pipeline to terminate traffic to the canary targets and raise an alert.

With the Kong Admin API, terminating traffic to canary is as simple as a DELETE request to remove the corresponding target. Once you DELETE a target, traffic will stop getting routed there. Instead, traffic will be distributed among the other available targets—baseline and stable in our case.
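
In pipeline code, that teardown might look like the following (same hypothetical names as in the earlier sketch):

import requests

ADMIN_API = "http://localhost:8001"

# Deleting the canary target stops Kong from routing any new traffic to
# it; subsequent requests are distributed across the remaining targets,
# baseline and stable.
requests.delete(
    f"{ADMIN_API}/upstreams/payments-upstream/targets/"
    "payments-canary.default.svc:80"
).raise_for_status()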

Kayenta dashboard of a passing canary stage

Progressive Deployments Using Kong Gateway

In practice, deployment pipelines tend to be a bit more nuanced. For instance, we use Kong’s Admin API extensively to do progressive deployments: traffic to canary starts at 2.5% and gradually ramps up to 10% over time, with canary analysis running concurrently at each step. This lets us catch both functional issues and performance regressions before a full rollout.
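
A sketch of such a ramp, assuming a Kong version (2.2 or later) whose Admin API supports PATCH on targets; on older versions you would delete and re-add the target with the new weight. The step sizes and soak period are illustrative:

import time

import requests

ADMIN_API = "http://localhost:8001"
UPSTREAM = "payments-upstream"

# Ramp canary from 2.5% to 10% while shrinking stable so the total
# weight stays at 1000 (baseline stays fixed at 25, i.e. 2.5%).
steps = [(25, 950), (50, 925), (100, 875)]  # (canary, stable) weights
for canary_weight, stable_weight in steps:
    for target, weight in [
        ("payments-canary.default.svc:80", canary_weight),
        ("payments-stable.default.svc:80", stable_weight),
    ]:
        requests.patch(
            f"{ADMIN_API}/upstreams/{UPSTREAM}/targets/{target}",
            json={"weight": weight},
        ).raise_for_status()
    # In the real pipeline, Spinnaker runs a Kayenta canary analysis
    # here and aborts the rollout if metrics diverge.
    time.sleep(600)  # placeholder soak period between steps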

Canary release plugin

Kong Konnect also comes with a canary release plugin that can help you do progressive rollouts without depending on a separate CD platform. This plugin supports two operation modes:

  • Static: You specify the canary split as a percentage via conf.percentage, and Kong routes traffic based on that percentage.
  • Progressive: You specify a duration via conf.duration, and Kong gradually ramps up traffic to the new deployment over that window.

The plugin can also schedule a canary release at a specific time and control the traffic split based on ACL groups. For example, you might want to ensure that a particular customer’s traffic never goes to the canary deployment. You can do this by configuring conf.groups and setting conf.hash to deny.
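
For instance, enabling the plugin on a service might look like the sketch below. This is hedged: the service name, upstream host, and ACL group are hypothetical, and the exact interaction of config fields may vary by plugin version.

import requests

ADMIN_API = "http://localhost:8001"

# Progressive mode: ramp traffic to the canary upstream over one hour.
requests.post(
    f"{ADMIN_API}/services/payments/plugins",
    json={
        "name": "canary",
        "config": {
            "upstream_host": "payments-canary.default.svc",
            "duration": 3600,  # ramp-up window in seconds
        },
    },
).raise_for_status()

# Alternatively, deny by ACL group: consumers in this group are never
# routed to the canary deployment.
requests.post(
    f"{ADMIN_API}/services/payments/plugins",
    json={
        "name": "canary",
        "config": {
            "upstream_host": "payments-canary.default.svc",
            "hash": "deny",
            "groups": ["enterprise-customers"],
        },
    },
).raise_for_status()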

Granular Control Over Traffic Split

Our choice of Kong as the API gateway has paid off here. As mentioned previously, Kong Gateway allows us to specify the traffic split to a high degree of precision. More importantly, it also handles the reload of the underlying OpenResty workers behind the scenes. Re-routing traffic between stable and canary deployments is just a REST API call from our Spinnaker pipeline.

Intelligent Load Balancing

Kong Gateway also gives us flexibility with routing based on the authenticated user, since we handle authentication at the API gateway itself. This allows us to specify routing rules such that traffic from a particular user always goes to a deterministic target. Kong uses consistent hashing to distribute traffic across an upstream's targets based on the hashing field supplied in the config. This comes with the usual caveats: make sure the hash key has sufficient cardinality to avoid hotspots.

We can configure the upstream to hash_on different attributes of the request: headers, cookie, IP, or consumer. In the context of Kong Gateway, a consumer has a one-to-one mapping to users. As long as we have an authentication plugin configured, Kong Gateway will resolve the consumer for every request based on the Authorization header and route traffic based on the authenticated user.
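
Turning that on is a small change to the upstream entity (a sketch; the upstream name is hypothetical as before, and the fallback choice is an assumption):

import requests

ADMIN_API = "http://localhost:8001"

# Consistent-hash on the authenticated consumer so a given user always
# lands on the same target; fall back to the client IP for
# unauthenticated requests.
requests.patch(
    f"{ADMIN_API}/upstreams/payments-upstream",
    json={"hash_on": "consumer", "hash_fallback": "ip"},
).raise_for_status()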

Kong Gateway's Prometheus Plugin for Additional Instrumentation

Kong Gateway comes bundled with a robust Prometheus plugin implementation which exposes metrics on status codes and latency histograms at a service and route level. This allows us to configure our canary analysis stage to look at P99 latencies and the rate of HTTP 5XX at the gateway level.

Since the plugin exposes metrics labeled by service and route, we can have more granular canary stages to evaluate metrics for specific routes over and above the analysis at a service level.
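
Enabling the plugin globally is a single Admin API call. A minimal sketch: a plugin created without a service or route scope applies to all traffic.

import requests

ADMIN_API = "http://localhost:8001"

# Creating the plugin with no service/route scope enables it globally;
# Kong then serves Prometheus metrics on the Admin API's /metrics
# endpoint.
requests.post(
    f"{ADMIN_API}/plugins", json={"name": "prometheus"}
).raise_for_status()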

This request configures the Prometheus plugin globally, exposing latency and throughput metrics for both Kong and the proxied upstreams. The metrics are labeled with service and route names, and you can use the official Grafana dashboard to visualise them.

A sample of latency metrics available at a per route granularity

If you have any additional questions, post them on Kong Nation.

To stay in touch, join the Kong Community.

Once you've successfully set up Kong Gateway and Spinnaker, you may find these other tutorials helpful:

  • How to Use the Kong Gateway JWT Plugin for Service Authentication
  • 4 Steps to Authorizing Services With the Kong Gateway OAuth2 Plugin
  • Getting Started With Kuma Service Mesh
