Engineering
June 16, 2021
7 min read

Using Kong Kubernetes Ingress Controller as an API Gateway

Viktor Gamov

In this first section, I'll provide a quick overview of the business case and the tools you can use to create a Kubernetes ingress API gateway. If you're already familiar, you can skip ahead to the tutorial section or watch the video at the bottom of this article.

Kubernetes Microservice Architecture

Digital transformation has led to a high velocity of data moving through APIs to applications and devices. Companies with legacy infrastructures are experiencing inconsistencies, failures, increased costs and, most importantly, dissatisfied customers.

All this has led to significant restructuring and modernization of API technologies, especially within IT. A primary strategy is to embrace Kubernetes and decouple monolithic systems. On top of that, IT leadership is tasking DevOps teams to find systems, like an API gateway or Kubernetes ingress controller, to support API traffic growth while minimizing costs.

API gateways are crucial components of microservice architectures. The API gateway acts as a single entry point into a distributed system, providing a unified interface for clients who don’t need to care (or know) that the system aggregates their API call response from multiple microservices.

Some everyday use cases for API gateways include:

  • Routing inbound requests to the appropriate microservice
  • Presenting a unified interface to a distributed architecture by aggregating responses from multiple backend services
  • Transforming microservice responses into the format required by the caller
  • Implementing non-functional/policy concerns such as authentication, logging, monitoring and observability, API rate limiting, IP filtering, and attack mitigation
  • Facilitating deployment strategies such as blue/green or canary releases

API gateways can simplify the development and maintenance of a Kubernetes architecture, freeing development teams to focus on the business logic of individual components.

Many companies select a Kubernetes API gateway at the beginning of, or partway through, their transition to multi-cloud. In that case, it's important to choose a solution that works with both on-prem services and the cloud.

How an API Gateway works

What is Kubernetes?

Kubernetes is becoming the hosting platform of choice for distributed architectures. It offers auto-scaling, fault tolerance and zero-downtime deployments out of the box.

By providing a widely accepted, standard approach with a carefully designed API gateway, Kubernetes has spawned a thriving ecosystem of products and tools that make it much easier to deploy and maintain complex systems.

Kong Kubernetes Ingress Controller

As a native Kubernetes application, Kong is installed and managed precisely as any other Kubernetes resource. It integrates well with other CNCF projects and automatically updates itself with zero downtime in response to cluster events like pod deployments. There’s also a great plugin ecosystem and native gRPC support.

This tutorial will walk through how easy it is to set up the open source Kong Ingress Controller as a Kubernetes API gateway on a cluster.


Use Case: Routing API Calls to Backend Services

To keep this article to a manageable size, I will only cover a single, straightforward use case.

Routing with Kong Gateway

Kong foo/bar routing

I'll create a Kubernetes cluster, deploy two dummy microservices ("foo" and "bar"), then install and configure Kong to route inbound calls to /foo to the foo microservice and calls to /bar to the bar microservice.

The information in this post barely scratches the surface of what you can do with Kong, but it’s a good starting point.

Prerequisites

You'll need a few things to work through this article.

In this tutorial, I'm going to create a "real" Kubernetes cluster on DigitalOcean because it’s quick and easy, and I like to keep things as close to real-world scenarios as possible. If you want to work locally, you can use minikube or KinD. You will need to fake a load-balancer, though, either using the minikube tunnel or setting up a port forward to the API gateway.

For DigitalOcean, you will need:

  • A DigitalOcean account
  • A DigitalOcean API token with read and write scopes
  • The doctl command-line tool

To build and push Docker images representing our microservices, you will need:

  • Docker
  • An account on Docker Hub

Note: These are optional because you can deploy the images I’ve already created.

You will also need kubectl to access the Kubernetes cluster.

Setting Up doctl

After installing doctl, you’ll need to authenticate using the DigitalOcean API token:

$ doctl auth init
...
Enter your access token:  <-- paste your API token, when prompted
Validating token... OK

Create Kubernetes Cluster

Now that you have authenticated doctl, you can create your Kubernetes cluster with this command:

$ doctl kubernetes cluster create mycluster --size s-1vcpu-2gb --count 1

Note: The command spins up a Kubernetes cluster on DigitalOcean. Doing so will incur charges (approximately $0.01/hour, at the time of writing) as long as it is running. Please remember to destroy any resources you create when you finish with them.

The command creates a cluster with a single worker node of the smallest viable size in the New York data center. It's the smallest, simplest and cheapest cluster to run. You can explore other options by running doctl kubernetes --help.

The command will take several minutes to complete, and you should see an output like this:

$ doctl kubernetes cluster create mycluster --size s-1vcpu-2gb --count 1
Notice: Cluster is provisioning, waiting for cluster to be running
....................................................
Notice: Cluster created, fetching credentials
Notice: Adding cluster credentials to kubeconfig file found in "/Users/david/.kube/config"
Notice: Setting current-context to do-nyc1-mycluster
ID                                      Name         Region    Version        Auto Upgrade    Status     Node Pools
4cf2159a-01c1-423c-907d-51f19c3f9a01    mycluster    nyc1      1.20.2-do.0    false           running    mycluster-default-pool

As you can see, the command automatically adds cluster credentials and a context to the ~/.kube/config file, so you should be able to access your cluster using kubectl:

$ kubectl get namespace
NAME              STATUS   AGE
default           Active   24m
kube-node-lease   Active   24m
kube-public       Active   24m
kube-system       Active   24m

Create Dummy Microservices

To represent backend microservices, I’m going to use a trivial Python Flask application that returns a JSON string:

foo.py

from flask import Flask
app = Flask(__name__)

@app.route('/foo')
def hello():
    return '{"msg":"Hello from the foo microservice"}'

if __name__ == "__main__":
    app.run(debug=True, host='0.0.0.0')

This Dockerfile builds a Docker image you can deploy:

Dockerfile

FROM python:3-alpine

WORKDIR /app

RUN echo "Flask==1.1.1" > requirements.txt
RUN pip install -r requirements.txt
COPY foo.py .

EXPOSE 5000

CMD ["python", "foo.py"]

The files for our "foo" and "bar" services are almost identical, so I’m only going to show the "foo" files here.

This gist contains files and a script to build the foo and bar microservice Docker images and push them to Docker Hub as:

  • digitalronin/foo-microservice:0.1
  • digitalronin/bar-microservice:0.1

Note: You don’t have to build and push these images. You can just use the ones I’ve already created.
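If you'd rather build and push the images yourself, the commands look roughly like this. This is a sketch, not the gist's exact script: "yourname" is a placeholder for your Docker Hub username, and keeping each service's Dockerfile in its own directory is an assumption.

```shell
# Build and push the foo image from the directory containing foo.py and its Dockerfile.
docker build -t yourname/foo-microservice:0.1 .
docker login                                   # authenticate against Docker Hub when prompted
docker push yourname/foo-microservice:0.1

# bar is built the same way from its own directory containing bar.py and its Dockerfile.
```

If you push your own images, remember to update the image: field in the Deployment manifests below to match your tags.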

Deploy Dummy Microservices

You’ll need a manifest that defines a Deployment and a Service for each microservice, both for "foo" and "bar." The manifest for "foo" (again, I’m only showing the "foo" example here, since the "bar" file is nearly identical) would look like this:

foo.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: foo-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: foo
  template:
    metadata:
      labels:
        app: foo
    spec:
      containers:
      - name: api
        image: digitalronin/foo-microservice:0.1
        ports:
        - containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
  name: foo-service
  labels:
    app: foo-service
spec:
  ports:
  - port: 5000
    name: http
    targetPort: 5000
  selector:
    app: foo

This gist has manifests for both microservices, which you can download and deploy to your cluster like this:

$ kubectl apply -f foo.yaml
$ kubectl apply -f bar.yaml

Access the Services

You can check that the microservices are running correctly using a port forward:

$ kubectl port-forward service/foo-service 5000:5000

Then, in a different terminal:

$ curl http://localhost:5000/foo
{"msg":"Hello from the foo microservice"}

Ditto for bar, also using port 5000.
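Spelled out, the same check for bar looks like this (assuming the bar Service is named bar-service, mirroring the foo manifest):

```shell
# Forward the bar Service to localhost in one terminal...
kubectl port-forward service/bar-service 5000:5000

# ...then hit it from another terminal:
curl http://localhost:5000/bar
# {"msg":"Hello from the bar microservice"}
```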

Install Kong for Kubernetes

Now that you have your two microservices running in your Kubernetes cluster, let’s install Kong.

There are several options for this, which you will find in the documentation. I’m going to apply the manifest directly, like this:

$ kubectl create -f https://bit.ly/k4k8s

The last few lines of output should look like this:

...
service/kong-proxy created
service/kong-validation-webhook created
deployment.apps/ingress-kong created

Note: You may receive several API deprecation warnings at this point, which you can ignore. Kong's choice of API versions allows Kong Ingress Controller to support the broadest range of Kubernetes versions possible.

Installing Kong will create a DigitalOcean load balancer. It's the internet-facing endpoint to which you will make API calls to access your microservices.

Note: DigitalOcean load balancers incur charges, so please remember to delete your load balancer along with your cluster when you are finished.

Creating the load balancer will take a minute or two. You can monitor its progress like this:

$ kubectl -n kong get service kong-proxy
NAME                      TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
kong-proxy                LoadBalancer   10.245.14.22    <pending>     80:32073/TCP,443:30537/TCP   71s

Once the system creates the load balancer, the EXTERNAL-IP value will change from <pending> to a real IP address:

$ kubectl -n kong get service kong-proxy
NAME                      TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)                      AGE
kong-proxy                LoadBalancer   10.245.14.22    167.172.7.192   80:32073/TCP,443:30537/TCP   3m45s

For convenience, let’s export that IP number as an environment variable:

$ export PROXY_IP=167.172.7.192 # <--- use your own EXTERNAL-IP number here

Now you can check that Kong is working:

$ curl $PROXY_IP
{"message":"no Route matched with those values"}

Note: This is the correct response because you haven’t yet told Kong what to do with any API calls it receives.

Configure Kong Gateway

You can use Ingress resources like this to configure Kong to route API calls to the microservices:

foo-ingress.yaml

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: foo
  namespace: default

spec:
  ingressClassName: kong
  rules:
  - http:
      paths:
      - path: /foo
        pathType: Prefix
        backend:
          service:
            name: foo-service
            port:
              number: 5000
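
The bar ingress is nearly identical; here's a sketch for completeness (assuming the bar Service from the earlier manifests is named bar-service):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: bar
  namespace: default

spec:
  ingressClassName: kong
  rules:
  - http:
      paths:
      - path: /bar
        pathType: Prefix
        backend:
          service:
            name: bar-service
            port:
              number: 5000
```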

This gist defines ingresses for both microservices. Download and apply them:

$ kubectl apply -f foo-ingress.yaml
$ kubectl apply -f bar-ingress.yaml

Now, Kong will route calls to /foo to the foo microservice and calls to /bar to the bar microservice.

You can check this using curl:

$ curl $PROXY_IP/foo
{"msg":"Hello from the foo microservice"}

$ curl $PROXY_IP/bar
{"msg":"Hello from the bar microservice"}

What Else Can You Do?

In this article, I have:

  • Deployed a Kubernetes cluster on DigitalOcean
  • Created Docker images for two dummy microservices, “foo” and “bar”
  • Deployed the microservices to the Kubernetes cluster
  • Installed the Kong Ingress Controller
  • Configured Kong to route API calls to the appropriate backend microservice

I've demonstrated one simple use of Kong, but it’s only a starting point. Here are several examples of other things you can do with Kong for Kubernetes:

Authentication

By adding an authentication plugin to Kong, you can require your API callers to provide a valid JSON Web Token (JWT) and check each call against an Access Control List (ACL) to ensure callers are entitled to perform the relevant operations.
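As a sketch of what that can look like with Kong Ingress Controller: you declare a KongPlugin resource and attach it to an ingress with the konghq.com/plugins annotation. The plugin name jwt and the annotation are Kong's standard mechanism; the resource name jwt-auth is just illustrative, and you'd also need a KongConsumer with JWT credentials, which isn't shown here.

```yaml
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: jwt-auth
plugin: jwt
---
# Attach the plugin to the foo ingress from earlier via an annotation.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: foo
  namespace: default
  annotations:
    konghq.com/plugins: jwt-auth
spec:
  ingressClassName: kong
  rules:
  - http:
      paths:
      - path: /foo
        pathType: Prefix
        backend:
          service:
            name: foo-service
            port:
              number: 5000
```

With this in place, unauthenticated calls to /foo are rejected by Kong before they ever reach the microservice.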

Certificate management

You can enable integration with cert-manager to provision and auto-renew SSL certificates for your API endpoints so that all your API traffic is encrypted as it travels over the public internet.
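A sketch of the shape this takes, once cert-manager is installed in the cluster: you reference a ClusterIssuer via an annotation and add a tls section to the ingress. The hostname api.example.com and the issuer name letsencrypt-prod are hypothetical; substitute your own domain and issuer.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: foo
  namespace: default
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod  # hypothetical ClusterIssuer name
spec:
  ingressClassName: kong
  tls:
  - hosts:
    - api.example.com
    secretName: foo-example-tls   # cert-manager stores the issued certificate here
  rules:
  - host: api.example.com
    http:
      paths:
      - path: /foo
        pathType: Prefix
        backend:
          service:
            name: foo-service
            port:
              number: 5000
```

cert-manager watches the ingress, obtains a certificate from the issuer, and renews it automatically before expiry.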

gRPC support

Kong natively supports gRPC, so it’s easy to add gRPC support to your API.

You can do a lot more with Kong, and I’d encourage you to look at the documentation and start to explore some of the other features.

The API gateway is a crucial part of a microservices architecture, and the Kong Ingress Controller is well suited for this role in a Kubernetes cluster. You can manage it in the same way as any other Kubernetes resource.

Cleanup

Don’t forget to destroy your Kubernetes cluster when you are finished with it so that you don’t incur unnecessary charges:

$ kubectl delete -f https://bit.ly/k4k8s  # <-- this will destroy the load-balancer

$ doctl kubernetes cluster delete mycluster
Warning: Are you sure you want to delete this Kubernetes cluster? (y/N) ? y
Notice: Cluster deleted, removing credentials
...

Note: If you delete the cluster first, the load balancer will be left behind. You can delete any leftover resources via the DigitalOcean web interface.

Have questions or want to stay in touch with the Kong community? Join us wherever you hang out:

⭐ Star us on GitHub

🐦 Follow us on Twitter

🌎 Join the Kong Community

🍻 Join our Meetups

❓ ️Ask and answer questions on Kong Nation

💯 Apply to become a Kong Champion

Once you’ve finished setting up Kong Ingress Controller, you may find these other Kubernetes tutorials helpful:

  • What is CI/CD?
  • Control Plane vs Data Plane
  • Configuring a Kubernetes Application on Kong Konnect
  • Managing Docker Apps With Kubernetes Ingress Controller
  • Implementing Traffic Policies in Kubernetes