Engineering
April 26, 2023
4 min read

Scaling Kubernetes Deployments of Kong

Ahmed Koshok
Senior Staff Solutions Engineer, Kong
Topics: Kubernetes, Deployment


In my previous post on scaling Kong deployments with and without a database, we covered deploying Kong with and without a database, using decK, and running distributed and hybrid deployments. In this article, we take a tour of some of the possible Kubernetes deployments of Kong.

Kubernetes (K8s) has won the container orchestration war. While deployments on other engines still exist, we see K8s far more often. K8s developers and operators know that exposing services from a K8s cluster can get complex, and potentially expensive when using managed offerings such as GKE, EKS, and AKS.

Kong runs on Kubernetes. It's a straightforward deployment that takes advantage of the usual K8s benefits, and it can proxy workloads running both on and off K8s.

Kong may also play the role of an Ingress Controller, simplifying the exposure of API workloads on Kubernetes to distributed clients. Our own Viktor Gamov puts out awesome content and has a good intro to the Kong Ingress Controller (KIC).

Let's examine some of these deployments.

Kong on Kubernetes with database

Any of the configurations in my previous article may be translated to run on Kubernetes. The deployment simply lets Kong run and scale on Kubernetes. We may use a database or not. The database may be in the same cluster or remote. Kong can proxy traffic to Kubernetes Services, as well as services outside of a Kubernetes cluster.
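As a sketch, a database-backed install with the official `kong/kong` Helm chart might look like the values file below. The specific keys (`env.database`, `postgresql.enabled`, `admin`) reflect my understanding of the chart and should be checked against the chart documentation for your version:

```yaml
# values.yaml: hypothetical sketch of a Postgres-backed Kong install
env:
  database: "postgres"    # run Kong with a database instead of DB-less
postgresql:
  enabled: true           # provision an in-cluster Postgres via the chart's subchart
admin:
  enabled: true           # expose the Admin API so decK or Kong Manager can configure Kong
  http:
    enabled: true
```

With a file like this in hand, the install is a single command along the lines of `helm install kong kong/kong -f values.yaml -n kong --create-namespace`.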

The options for configuring Kong remain available as before: the Admin API, Kong Manager, and decK. We may also run a distributed deployment on a cluster.

Naturally, when running on K8s, it's beneficial to configure Kong the same way we configure other K8s entities. This is where the Kong Ingress Controller is useful. Let's go over how it works.

Kong Ingress Controller — single Kong instance — DB-less

The Kong Ingress Controller allows the Kong Gateway to be configured using Kubernetes resources.

In this deployment, Kong runs in DB-less mode. The configurations are represented as Kubernetes resources which, when translated, create Kong entities in the proxy. This simplifies configuration on Kubernetes because developers don't need to maintain two repositories: without an ingress controller, they would keep one repository for Kong configuration (managed with decK, for example) and another for their K8s applications. Exposing K8s services through Kong is straightforward, since this is done with standard K8s objects such as Ingress. Kong also introduces custom resources for applying policies and the like.
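For example, a team might expose a Service with a standard Ingress that names Kong as its ingress class, and the controller translates it into Kong routes and services. The service name, path, and port here are illustrative:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: echo-ingress            # hypothetical example
spec:
  ingressClassName: kong        # hand this Ingress to the Kong Ingress Controller
  rules:
    - http:
        paths:
          - path: /echo
            pathType: Prefix
            backend:
              service:
                name: echo      # assumed Service in the same namespace
                port:
                  number: 80
```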

This, therefore, is a K8s-native way to run Kong. As with the deployments we covered in the first article, we can increase scalability and resilience at will.

Kong Ingress Controller — multiple Kong instances — DB-less

So how can we scale a Kong deployment on K8s? One way is to run multiple instances of Kong, since Kubernetes supports horizontal scaling. Each instance is configured by the controller in its pod. These instances are exposed as a K8s Service, typically of type LoadBalancer when running on common managed K8s flavors.
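Scaling the proxy horizontally is ordinary K8s work. A sketch using a HorizontalPodAutoscaler (the Deployment name `kong-proxy` is assumed; it depends on how Kong was installed):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: kong-proxy-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: kong-proxy            # assumed name of the Kong proxy Deployment
  minReplicas: 2                # keep at least two proxies for resilience
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add proxies when average CPU passes 70%
```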

All instances have identical configurations and are updated with new configurations as they arrive via K8s resources.

Kong Ingress Controller — with a database

As in the previous article, if we want to take advantage of features that require a database, we can introduce control plane and data plane separation. The data plane instance(s) pick up configuration changes from the database and can be scaled horizontally at will. In this diagram we have a single instance; however, we can scale as needed.

The control plane — also scalable horizontally — is composed of the ingress controller and a Kong control plane instance. When multiple control planes are running, a leader election process takes place to ensure that only one of the controllers is updating the database.

You may notice that we're largely replicating the deployments we saw in the previous article. The key difference is that we let K8s do some of the work for us.

Multiple Kong Ingress Controllers — without a database

So far, our Kong deployments consumed all K8s resources in the cluster that are intended for Kong. If it's necessary to isolate controllers on a per-namespace basis (so that different teams are segregated within a Kubernetes cluster), then running multiple controllers is a possible approach.

In this deployment, each ingress controller watches a namespace for objects and, once they're detected, translates them into configuration for a Kong instance. Here we have two ingress controllers and two gateways.
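As a hedged sketch, restricting a controller to one namespace is typically done with a flag or environment variable on the controller container. The exact knob varies by KIC version; `CONTROLLER_WATCH_NAMESPACE` is the form I'd expect, and the image tag and names below are illustrative:

```yaml
# fragment of a KIC Deployment spec (illustrative)
containers:
  - name: ingress-controller
    image: kong/kubernetes-ingress-controller:2.9
    env:
      - name: CONTROLLER_WATCH_NAMESPACE
        value: team-a           # this controller only reconciles resources in team-a
```

A second controller deployment with `value: team-b` would then serve the other team's namespace, each feeding its own gateway.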

Multiple Kong Ingress Controllers — with a database

Taking the previous example further, we again introduce control plane and data plane separation. Multiple ingress controllers each monitor a K8s namespace. These controllers translate the K8s resources into Kong entities that are persisted in the database by the control plane. Each ingress controller may target a specific workspace on the control plane so that there is a one-to-one correspondence between K8s namespaces and Kong workspaces. The data plane services synchronize the entities from the database as configuration, which is likewise segregated by Kong workspace.
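Sketching the namespace-to-workspace pairing: in my understanding, KIC against Kong Enterprise accepts a workspace setting alongside the watch-namespace and Admin API settings. All names, the workspace variable, and the control plane address below are assumptions to verify against the KIC documentation for your version:

```yaml
# fragment: pairing a K8s namespace with a Kong workspace (illustrative)
env:
  - name: CONTROLLER_WATCH_NAMESPACE
    value: team-a
  - name: CONTROLLER_KONG_WORKSPACE
    value: team-a               # entities from this namespace land in the "team-a" workspace
  - name: CONTROLLER_KONG_ADMIN_URL
    value: https://kong-cp.kong-system.svc:8444   # assumed control plane Admin API address
```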

In this diagram, the control plane has a single instance, which naturally may be scaled horizontally for increased reliability.

Conclusion

The main ingredients we work with are few: the database, Kong, and the controller. Yet, depending on what we want to do, we can arrange them in a way that fits our needs, whether that's scaling to handle more load, isolating projects, or increasing reliability.

Speaking of reliability, we're now ready to look into how we can put together an architecture that has a good bit of resilience. I'll cover this in my next article on a highly scalable distributed deployment that may span multiple regions.

Continued Learning & Related Content

  • Guide to Understanding Kubernetes Deployments
  • 4 Ways to Deploy Kong Gateway
  • Scaling Kong Deployments with and without Databases
  • Reducing Deployment Risk: Canary Releases and Blue/Green Deployments with Kong