Engineering
April 26, 2023
4 min read

Scaling Kubernetes Deployments of Kong

Ahmed Koshok
Senior Staff Solutions Engineer, Kong

In my previous post on scaling Kong deployments with and without a database, we covered those two deployment modes, as well as using decK and running distributed and hybrid deployments. In this article, we take a tour of some of the possible Kubernetes deployments of Kong.

Kubernetes (K8s) has won the container orchestration war. While there are still deployments using other engines, we see K8s far more often. K8s developers and operators know that exposing services from a K8s cluster can get complex, and potentially expensive when using managed offerings such as GKE, EKS, and AKS.

Kong runs on Kubernetes. It's a straightforward deployment that takes advantage of the usual K8s benefits, and it can proxy workloads running on K8s as well as outside of it.
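For example, one common way to get Kong onto a cluster is the official Helm chart. The commands below are only a minimal sketch; the release name and namespace are arbitrary, and chart defaults change between versions.

    helm repo add kong https://charts.konghq.com
    helm repo update
    # Install Kong Gateway into its own namespace (DB-less by default in recent chart versions)
    helm install kong kong/kong --namespace kong --create-namespace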

Kong can also play the role of an Ingress Controller, simplifying the job of exposing API workloads on Kubernetes to distributed clients. Our own Viktor Gamov puts out awesome content and has a good intro to the Kong Ingress Controller (KIC).

Let's examine some of these deployments.

Kong on Kubernetes with database

Any of the configurations in my previous article may be translated to run on Kubernetes. The deployment simply lets Kong run and scale on Kubernetes. We may use a database or not. The database may be in the same cluster or remote. Kong can proxy traffic to Kubernetes Services, as well as services outside of a Kubernetes cluster.

The options for configuring Kong remain the same as before: we may use the Admin API, Kong Manager, or decK. We may also run a distributed deployment on a cluster.
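As a quick illustration of the decK route, the sketch below assumes the Admin API has been port-forwarded to localhost:8001; the service name, upstream URL, and Admin Service name are hypothetical, and decK flags vary by version.

    # kong.yaml - a minimal decK state file (names are illustrative)
    _format_version: "3.0"
    services:
      - name: example-service
        url: http://example.default.svc.cluster.local:80
        routes:
          - name: example-route
            paths:
              - /example

    # Reach the Admin API locally, then sync the declared state
    kubectl port-forward -n kong service/kong-kong-admin 8001:8001
    deck sync -s kong.yaml --kong-addr http://localhost:8001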

Naturally, when running on K8s, it's beneficial to configure Kong the same way we configure other K8s entities. This is where the Kong Ingress Controller is useful. Let's go over how it works.

Kong Ingress Controller — single Kong instance — DB-less

The Kong Ingress Controller allows the Kong Gateway to be configured using Kubernetes resources.

In this deployment, Kong runs in DB-less mode. The configuration is expressed as Kubernetes resources, which the controller translates into Kong entities in the proxy. This simplifies configuration on Kubernetes because developers don't need to maintain two repositories: without an ingress controller, they would keep one repository for Kong configuration (managed with decK, for example) and another for their K8s applications. Exposing K8s services other than Kong itself is straightforward since it's done with standard K8s objects such as Ingress, and Kong adds some custom resources for applying policies and the like.
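For instance, exposing a hypothetical echo Service through Kong takes nothing more than a standard Ingress plus an optional Kong annotation; the service name, path, and ingress class below are assumptions for illustration.

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: echo
      annotations:
        konghq.com/strip-path: "true"   # Kong-specific behavior applied via annotation
    spec:
      ingressClassName: kong
      rules:
        - http:
            paths:
              - path: /echo
                pathType: Prefix
                backend:
                  service:
                    name: echo           # assumed existing Service
                    port:
                      number: 80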

This, therefore, is a K8s-native way to run Kong. As with the deployments we covered in the first article, we can increase scalability and resilience at will.

Kong Ingress Controller — multiple Kong instances — DB-less

So how can we scale a Kong deployment on K8s? One way is to run multiple instances of Kong, since Kubernetes supports horizontal scaling. Each instance is configured by the controller running in its pod. These instances are exposed as a K8s Service, typically of type LoadBalancer when running on common K8s cloud flavors.

All instances have identical configurations and are updated with new configurations as they arrive via K8s resources.
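Scaling out is ordinary Kubernetes work. As a rough sketch (the deployment and release names depend on how Kong was installed):

    # Scale the Kong deployment directly...
    kubectl scale deployment kong-kong -n kong --replicas=3

    # ...or declaratively through the Helm chart
    helm upgrade kong kong/kong -n kong --reuse-values --set replicaCount=3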

Kong Ingress Controller — with a database

As in the previous article, if we want to take advantage of features that require a database, we can introduce control plane and data plane separation. The data plane instance(s) pick up configuration changes from the database and can be scaled horizontally at will. In this diagram we have a single data plane instance; however, we can scale them as needed.

The control plane — also scalable horizontally — is composed of the ingress controller and a Kong control plane instance. When multiple control planes are running, a leader election process takes place to ensure that only one of the controllers is updating the database.
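One way to express this split with the Helm chart is to install two releases: one for the control plane (ingress controller, Admin API, database access) and one for a proxy-only data plane. The values below are a rough sketch; the keys shown exist in the kong/kong chart, but defaults and the Postgres host are assumptions that vary by version and environment.

    # control-plane-values.yaml (sketch)
    ingressController:
      enabled: true
    postgresql:
      enabled: true          # bundled Postgres, for demonstration only
    env:
      database: postgres
    admin:
      enabled: true
    proxy:
      enabled: false         # the control plane does not serve proxy traffic

    # data-plane-values.yaml (sketch)
    ingressController:
      enabled: false
    env:
      database: postgres
      pg_host: kong-cp-postgresql.kong.svc.cluster.local   # assumed database host
    admin:
      enabled: false
    proxy:
      enabled: true          # the data plane only proxies traffic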

You may notice that we're largely replicating the deployments we saw in the previous article. The key difference is that we let K8s do some of the work for us.

Multiple Kong Ingress Controllers — without a database

So far, our Kong deployments have consumed all of the K8s resources in the cluster that are intended for Kong. If it's necessary to isolate a controller on a per-namespace basis (such that different teams are segregated within a Kubernetes cluster), then running multiple controllers is a possible approach.

In this deployment, each ingress controller watches a namespace for objects and, once it detects them, translates them into configuration for its Kong instance. Here we have two ingress controllers and two gateways.
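With the Helm chart, a sketch of this setup might install one release per team namespace, each with its own ingress class and a restricted watch scope; the team names and class names are made up, and the exact values keys vary by KIC and chart version.

    # team-a-values.yaml (sketch; key names vary by KIC/chart version)
    ingressController:
      enabled: true
      ingressClass: kong-team-a
      watchNamespaces:
        - team-a

    helm install kong-team-a kong/kong -n team-a -f team-a-values.yaml
    helm install kong-team-b kong/kong -n team-b -f team-b-values.yaml   # analogous values for team B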

Multiple Kong Ingress Controllers — with a database

Taking the previous example further again, we introduce control plane and data plane separation. Multiple ingress controllers each monitor a K8s namespace. These controllers translate the K8s resources into Kong entities that are persisted in the database by the control plane. Each ingress controller may target a specific workspace on the control plane, such that there is a one-to-one correspondence between K8s namespaces and Kong workspaces. The data plane services synchronize the entities from the database, where configuration is likewise segregated by Kong workspace.

In this diagram, the control plane has a single instance, which naturally may be scaled horizontally for increased reliability.
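To get the one-to-one namespace-to-workspace mapping described above, each controller can be pointed at its own Kong workspace. A hedged sketch of the relevant chart values (the environment variable name and key layout may differ across KIC versions):

    # team-a-values.yaml (sketch)
    ingressController:
      enabled: true
      ingressClass: kong-team-a
      watchNamespaces:
        - team-a
      env:
        kong_workspace: team-a   # assumed to surface as CONTROLLER_KONG_WORKSPACE for the controller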

Conclusion

The main ingredients we work with are few: the database, Kong, and the controller. Yet, depending on what we want to do, we can arrange them in a way that fits our needs, be that scaling to handle more load, isolating projects, or gaining more reliability.

Speaking of reliability, we're now ready to look into how we can put together an architecture that has a good bit of resilience. I'll cover this in my next article on a highly scalable distributed deployment that may span multiple regions.

Continued Learning & Related Content

  • Guide to Understanding Kubernetes Deployments
  • 4 Ways to Deploy Kong Gateway
  • Scaling Kong Deployments with and without Databases
  • Reducing Deployment Risk: Canary Releases and Blue/Green Deployments with Kong