We love abstractions. We want to make things easier for developers, teams, and end users. In doing that, we sometimes build things that are a bit too complex for those who don't already understand the pain points the abstraction layers were built for. Kubernetes is an example of this; it solves a very real, very painful problem, but it is notoriously difficult to wrap your head around. The scale of the work it does for us means it covers so much ground that one person can't possibly have the expertise to babysit every aspect.
Fortunately, an entire ecosystem has formed around Kubernetes and cloud native development. Tools upon tools, open source or commercial, have sprung up to relieve some of the pain points involved in application development today. It doesn’t have to be so difficult anymore, and you may not even need to learn much to take advantage of it. These tools are built for the cloud, first and foremost, solving different problems while using skills your team probably already has.
One such tool is Kong Gateway, which lets you manage the communication between your microservices and your clients at lightning speed. It even includes a Kubernetes Ingress Controller that uses native CRDs, so you can tightly control any aspect of the API gateway you want.
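To give a taste of what those native CRDs look like, here's a hedged sketch declaring a Kong rate-limiting plugin as a custom resource through Pulumi. This is illustrative only: the resource name and namespace are placeholders, not values from this walkthrough, and it assumes the ingress controller (and its CRDs) are already installed in the cluster.

```python
import pulumi_kubernetes as k8s

# Illustrative only: a KongPlugin custom resource enabling rate limiting.
# "rate-limit-example" and the "default" namespace are placeholders.
rate_limit = k8s.apiextensions.CustomResource(
    "rate-limit-example",
    api_version="configuration.konghq.com/v1",
    kind="KongPlugin",
    metadata={"name": "rate-limit-example", "namespace": "default"},
    plugin="rate-limiting",
    config={"minute": 5, "policy": "local"},
)
```

Annotating a Service or Ingress with `konghq.com/plugins: rate-limit-example` then attaches the plugin, all without leaving your infrastructure code.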
We also have ways to make infrastructure deployment faster and more familiar for developers. Pulumi is infrastructure as code. When I say infrastructure as code, I mean really code: you can stand up whatever infrastructure you want on whatever cloud provider you like, and deploy your application in a language you're probably already familiar with, like Python, TypeScript, C#, and of course, because we're cloud native, Go.
Standing up your Kubernetes cluster on whatever cloud provider you like is relatively low impact, requiring only about 50 lines of Python code. This can even live right alongside your application code if you want. It’s the way cloud engineering was meant to be done, with minimal complexity and maximum impact.
Standing up a vanilla Kubernetes cluster isn't all you want to do, though, is it? Surely you want to take advantage of some of that fancy cloud native tooling I mentioned earlier to make managing the cluster less painful and more secure? Perhaps implement an API gateway? Well, it turns out Pulumi and Kong work really well together. Let's look at how to stand up a Kubernetes cluster on DigitalOcean and deploy Kong Ingress Controller with Pulumi, in Python! For this, you'll need a Pulumi account and the Pulumi CLI installed. You'll also need a DigitalOcean account, with Pulumi configured to access DigitalOcean.
Create a new directory called kong-ingress and initialize a new Pulumi project inside it with pulumi new python -y. Now activate the virtual environment the template created with source venv/bin/activate and run pip3 install pulumi_digitalocean pulumi_kubernetes to install your dependencies. We're ready to go; from here on out, we'll be adding code to __main__.py.
First, we get our config values to define what the cluster looks like and ask DigitalOcean for the latest available Kubernetes version.
import pulumi
import pulumi_digitalocean as do
import pulumi_kubernetes as k8s

config = pulumi.Config()
clusterName = config.require('cluster-name')  # "my-cluster"
clusterRegion = config.require('region')  # "nyc3"
nodePoolName = config.require('node-pool-name')  # "my-cluster-pool"
nodeSize = config.require('node-size')  # "s-1vcpu-2gb"
nodeCount = config.require('node-count')  # "4"
nodeTag = config.require('tag')  # "my-cluster"

# Grab the latest version available from DigitalOcean.
ver = do.get_kubernetes_versions()
Here, we’re provisioning the cluster itself, setting up the Kubernetes provider for Pulumi and adding a namespace for better organization.
cluster = do.KubernetesCluster(
    clusterName,
    region=clusterRegion,
    version=ver.latest_version,
    node_pool=do.KubernetesClusterNodePoolArgs(
        name=nodePoolName,
        size=nodeSize,
        node_count=int(nodeCount),
        tags=[nodeTag],
    ),
)

k8s_provider = k8s.Provider(
    'do-k8s-provider',
    kubeconfig=cluster.kube_configs.apply(lambda c: c[0].raw_config),
)

ns = k8s.core.v1.Namespace(
    'platform',
    metadata={'name': 'platform'},
    opts=pulumi.ResourceOptions(provider=k8s_provider),
)
Below is all it takes to deploy Kong using a Helm chart while applying a transformation to it simultaneously. In this case, we’re using a small helper function to look at the custom resource definition and remove the “status” object so that the Helm chart doesn’t return it. It is also possible to deploy via config files without much more code than this.
def remove_status(obj, opts):
    # Strip the "status" object from CRDs so the chart doesn't return it.
    if obj["kind"] == "CustomResourceDefinition":
        obj.pop("status", None)

# Deploying Kong via Helm chart.
kong_ingress = k8s.helm.v3.Chart(
    'kong-ingress',
    k8s.helm.v3.ChartOpts(
        chart='kong',
        namespace='platform',
        fetch_opts=k8s.helm.v3.FetchOpts(repo='https://charts.konghq.com'),
        transformations=[remove_status],
    ),
    opts=pulumi.ResourceOptions(provider=k8s_provider),
)
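If you want to sanity-check the transformation on its own, you can run the helper against a plain dict shaped like a CRD, with no Pulumi engine required. The dict below is a made-up minimal example:

```python
def remove_status(obj, opts):
    # Strip the "status" object from CRD dicts; leave everything else alone.
    if obj["kind"] == "CustomResourceDefinition":
        obj.pop("status", None)

# Hypothetical minimal CRD-shaped dict for demonstration.
crd = {
    "kind": "CustomResourceDefinition",
    "metadata": {"name": "kongplugins.configuration.konghq.com"},
    "status": {"conditions": []},
}
remove_status(crd, None)
print("status" in crd)  # False
```

During `pulumi up`, the same function is called once per resource rendered by the chart, which is why it guards on `kind` before touching anything.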
Finally, we’re getting the Kong proxy Service that we’ve just deployed and exporting its load balancer IP so that we know where to reach it.
svc = kong_ingress.get_resource('v1/Service', 'platform/kong-ingress-kong-proxy')
pulumi.export('url', svc.status.apply(lambda s: s.load_balancer.ingress[0].ip))
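For intuition about what that `apply` lambda navigates, here is a standalone mock of the service-status shape, built with `types.SimpleNamespace`. It's purely illustrative; the real object is resolved asynchronously from the cluster, and the IP below is a documentation example address (RFC 5737), not real output. Note that the load balancer's `ingress` field is a list in the Kubernetes API, hence the `[0]`.

```python
from types import SimpleNamespace

# Hypothetical stand-in for the resolved Service status.
status = SimpleNamespace(
    load_balancer=SimpleNamespace(
        ingress=[SimpleNamespace(ip="203.0.113.10")]  # ingress is a list
    )
)

get_ip = lambda s: s.load_balancer.ingress[0].ip
print(get_ip(status))  # 203.0.113.10
```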
To run this, set your config values from the terminal.
$ pulumi config set cluster-name my-cluster
$ pulumi config set node-count 4
$ pulumi config set node-pool-name my-cluster-pool
$ pulumi config set node-size s-1vcpu-2gb
$ pulumi config set region nyc3
$ pulumi config set tag my-cluster
Then run pulumi up and watch! In a few minutes, you’ll have a fully functional Kubernetes cluster with four nodes deployed to DigitalOcean and running a Kong Ingress Controller, all in around 60 lines of code. When you’re ready to tear this down, run pulumi destroy and watch it disappear! If you want to copy and paste the running code, you can find it in this GitHub repo.
If you want to go a bit further, click here or watch the video below to learn how to use Pulumi to provision an AWS EC2 instance and configure it as an API gateway data plane for Kong Konnect!