Engineering
April 10, 2023
5 min read

Scaling Kong Deployments with and without Databases

Ahmed Koshok
Senior Staff Solutions Engineer, Kong

As the world's most popular API Gateway, Kong Gateway is flexible and can be adapted to various environments and deployment configurations. This flexibility means some time should be taken to make good architectural decisions with your use cases in mind.

For example, the best configuration for a developer's workstation isn't the ideal one for a high-SLA deployment. Fitness for purpose is therefore the objective.

Considering requirements for your Kong configuration

When determining which deployment topology to use, it's prudent to consider the requirements that need to be met. This may cover things such as:

  • Configuration management
  • Reliability
  • Security
  • Latency
  • Scalability
  • Complexity

Generally speaking, the best deployment architecture is the one that meets its intended purpose with the minimum complexity possible. In this article, we explore some deployment examples and cover their suitability for various use cases.

Single instance — DB-less (aka declarative configuration)

A single-instance, DB-less (aka declarative configuration) deployment is arguably the simplest possible deployment of Kong.

DB-less mode is good for local testing, as Kong can be quickly configured from a single declarative configuration file. The entities in the file (routes, services, plugins, consumers, etc.) are loaded by Kong into memory, and no database is required.

The source of truth is the declarative configuration file, making it, in effect, the 'control plane'. Kong accepts YAML or JSON declarative files when running in DB-less mode.
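As a minimal sketch, a declarative file and a DB-less startup might look like the following. The service name and upstream URL are hypothetical, and the _format_version of "3.0" assumes Kong Gateway 3.x (older releases use "1.1"):

    # Write a minimal declarative configuration file
    cat <<'EOF' > kong.yml
    _format_version: "3.0"
    services:
      - name: example-service        # hypothetical service name
        url: https://httpbin.org     # demo upstream
        routes:
          - name: example-route
            paths:
              - /example
    EOF

    # Start Kong with no database, loading all state from the file
    KONG_DATABASE=off \
    KONG_DECLARATIVE_CONFIG=$PWD/kong.yml \
    kong start

Requests under /example on the proxy port (8000 by default) are then forwarded to the upstream.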

There are some limitations in this configuration. Plugins that require a database are naturally not compatible. Additionally, the Admin API becomes largely read-only; the /config endpoint is the exception, and may be used to load a configuration file with a new state.
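For example, a new state can be pushed to a running DB-less instance with something like the following; the Admin API address and the multipart config field shown here assume defaults:

    # Replace the in-memory configuration of a running DB-less node
    curl -i -X POST http://localhost:8001/config \
      -F config=@kong.yml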

This configuration can work for testing purposes, perhaps in a pipeline. It may also work in a development environment where the configuration is driven declaratively.

It can also be used for production purposes; we'll forgo reliability and scalability details for now.

Single instance with database — traditional deployment

In a single instance with database — traditional deployment, Kong uses a database to maintain state. The database stores the gateway's entities, along with the teams and permissions used for role-based access control. It also makes it possible to use the Kong Manager UI, Vitals, and the Dev Portal when using Kong Enterprise.

Plugins that require a database are also available for use. The single Kong instance is both the control and data plane. The Kong Admin API can be used to alter Kong's state by creating entities (services, routes, plugins, etc.) as necessary. The Kong Manager UI may also be used for this purpose.
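As an illustrative sketch (the service name and upstream address are hypothetical), creating a service and a route imperatively through the Admin API looks like this:

    # Create a service pointing at an upstream API
    curl -i -X POST http://localhost:8001/services \
      --data name=orders \
      --data url=http://orders.internal:8080

    # Attach a route so that /orders on the proxy port (8000) reaches it
    curl -i -X POST http://localhost:8001/services/orders/routes \
      --data name=orders-route \
      --data 'paths[]=/orders'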

This is a classic, or embedded, deployment. Readers can try it at will on various operating systems.

This is the deployment we usually run for demonstrations and development, and it can be adapted for other scenarios. For example, we can expose only the Dev Portal if we wish to have the deployment serve API documentation, only the Manager UI for visibility, or only the proxy port for traffic processing, and so on.

Single instance with database and declarative configuration

Next, let's talk about a single instance with database and declarative configuration.

It may be argued that enabling the Kong Admin API (as in the previous deployment) makes this an imperative configuration. While this is true, Kong may still be configured declaratively using decK.

decK, or declarative Kong, is a tool that allows configuring Kong in a declarative manner. We still have access to all the features we get when using the database, as well as access to the Admin API.
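A typical decK loop is sketched below; the exact command layout can vary across decK versions, and kong.yaml is decK's default state file:

    # Confirm decK can reach the Admin API
    deck ping --kong-addr http://localhost:8001

    # Export the current gateway configuration to kong.yaml
    deck dump

    # Edit kong.yaml, then preview the changes decK would apply
    deck diff

    # Apply the state file to the gateway
    deck sync

Running deck sync from a pipeline keeps the state file authoritative.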

Given that decK is the "source of truth" of the gateway configuration, making changes via the Manager UI or Admin API is discouraged for production purposes. This is because mixing decK with ad hoc Admin API changes can lead to states that are unpredictable or not easily replicable. The "control plane" in this deployment is therefore decK.

Kong Manager still provides visibility into configurations and metrics. It may be used to alter administrative permissions, but it shouldn't alter the state of the proxy configuration unless there's a good reason to do so (for example, an emergency update to a plugin, or a configuration change while the pipeline is broken).

This is the configuration I run most of the time on my laptop. I have access to the Kong Manager UI, all plugins and features are enabled, and I can configure Kong declaratively as well as imperatively if I must.

Multiple instances with a database — distributed deployment

The previous deployments showed a single instance of Kong. These are useful for development, testing, and demonstration purposes, and on rare occasions in a production environment. Most production deployments, however, carry requirements for fault tolerance, scalability on demand, rolling upgrades, and so on. As the demands for SLAs increase, redundancy must increase as well. And since failures are inevitable, some failure planning helps mitigate risk.

We now look at deployment models that are suitable for multiple instances of Kong, in increasing order of resilience and ability to handle load.

In this topology, there are three Kong instances. One instance enables and exposes the Kong Admin API and/or UI, and the other two do not. In this way, one instance is the "control plane" and the other two are "data planes." The control plane stores configuration in the database, and the data planes pick up changes as events at a configurable interval.
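A sketch of how the two roles might be split with environment variables; the Postgres hostname is hypothetical, and db_update_frequency (the polling interval for change events) defaults to 5 seconds:

    # "Control plane" instance: exposes the Admin API and writes to the database
    KONG_DATABASE=postgres \
    KONG_PG_HOST=pg.internal \
    KONG_ADMIN_LISTEN=0.0.0.0:8001 \
    kong start

    # "Data plane" instances: same database, Admin API disabled
    KONG_DATABASE=postgres \
    KONG_PG_HOST=pg.internal \
    KONG_ADMIN_LISTEN=off \
    KONG_DB_UPDATE_FREQUENCY=5 \
    kong start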


Much like in our previous deployment, using decK is possible when a declarative configuration approach is needed. We may introduce new data plane instances to handle increased load and offer higher resilience.

This distributed deployment inherently adds resilience through horizontal scalability. It may be further hardened through redundancy at every level (database, network, control plane) and, naturally, extended to multiple regions if necessary.

Multiple instances with a database — hybrid deployment

In situations where data planes may not reach a database — perhaps due to security policy — the Kong control plane can propagate configuration to the data planes over a persistent connection. It also collects telemetry data, tracking traffic and health metrics of the data planes.
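A minimal sketch of the settings involved; certificate paths and hostnames are hypothetical, and the default cluster and telemetry ports on the control plane are 8005 and 8006:

    # Control plane: owns the database and accepts data plane connections
    KONG_ROLE=control_plane \
    KONG_CLUSTER_CERT=/certs/cluster.crt \
    KONG_CLUSTER_CERT_KEY=/certs/cluster.key \
    KONG_DATABASE=postgres \
    KONG_PG_HOST=pg.internal \
    kong start

    # Data plane: no database; connects out to the control plane
    KONG_ROLE=data_plane \
    KONG_DATABASE=off \
    KONG_CLUSTER_CONTROL_PLANE=cp.internal:8005 \
    KONG_CLUSTER_TELEMETRY_ENDPOINT=cp.internal:8006 \
    KONG_CLUSTER_CERT=/certs/cluster.crt \
    KONG_CLUSTER_CERT_KEY=/certs/cluster.key \
    kong start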

Should the control plane become unavailable, or the connection to it be interrupted, the data planes continue to operate with the configurations cached in memory. Once the connection is reestablished, they receive any new configuration. Naturally, this mode introduces a few limitations, but it is otherwise an alternative to the classic distributed deployment.

Summary

We had a look at five deployment architectures in this article. We started with the simplest declarative deployment, then covered adding a database, using decK, running distributed, and finally hybrid deployments. This illustrates how flexible Kong is in meeting various deployment scenarios, from a developer workstation to production and everything in between.

These deployments are not exhaustive; Kong is flexible well beyond this. The purpose is to illustrate possibilities and concepts.

In my next post, we'll look at how we can run Kong on Kubernetes — with and without an Ingress Controller. Some interesting deployment architectures will follow. Until then, wishing you fun with deployments.

Continued Learning & Related Content

  • Guide to Understanding Kubernetes Deployments
  • 4 Ways to Deploy Kong Gateway
  • Scaling Kubernetes Deployments of Kong
  • Reducing Deployment Risk: Canary Releases and Blue/Green Deployments with Kong