Engineering
July 1, 2020
4 min read

How to Use Kong Gateway With K3s For IoT and Edge Computing on Kubernetes

Kevin Chen

Once upon a time, we had these giant structures where thousands of people would congregate to share ideas, pamphlets filled to the margins with buzzwords and cheap, branded t-shirts. Yep, tech conferences - oh what a relic of the past that I miss. It used to be part of my job to attend these.

I loved it all, especially the most recent North American KubeCon hosted in San Diego. But for those who still recall such distant memories, by the last day, most people were rather burnt out or had raspy throats from repeating the same "Hi, my name is Kevin, and I am the ________ at ________" introduction. So, that afternoon on the last day, I wandered off into a random session to just listen and escape the crowd. It was by chance I stumbled into Darren Shepherd's talk on "K3s Under the Hood: Building a Product-grade Lightweight Kubernetes Distro."

His slides pop up on the screen: "Short History of K3s."

And underneath it, the first bullet point: "total fluke."

Well, say no more. I relate to K3s on a spiritual level now. Or, as my parents like to call me… a happy accident.

Jokes aside, check out Darren's talk here on YouTube; it explains why K3s is such an amazing alternative to full-blown Kubernetes. K3s is a lightweight distribution of Kubernetes built by Rancher Labs (and fully open source), geared towards IoT and edge computing. I won't dive too deep into why K3s is important, since his talk highlights why it was created. It is up to you to decide if it fits what you are trying to accomplish. For me, the tiny binary and optimization for ARM make it perfect for my IoT home projects. But it got me thinking: how do I get a Kong gateway running on K3s to expose the services within the K3s server?

To my surprise, K3s ships with an ingress controller by default. While the default proxy/load balancer works, I needed some plugin functionality that just wasn't supported unless I used Kong Gateway. So, let's run through a quick guide on how to start K3s on Ubuntu, configure it to support Kong for Kubernetes, and deploy some services/plugins.

Configure K3s to Deploy Kong Ingress Controller

First, use the installation script from https://get.k3s.io to install K3s as a service on systemd- and openrc-based systems. But we need to pass some additional options to configure the installation. The first is `--no-deploy`, which turns off the existing ingress controller, since we need to deploy Kong to utilize plugins. The second is `--write-kubeconfig-mode`, which sets the file permissions on the generated kubeconfig file (for example, 644 makes it readable without root). This is useful for allowing a K3s cluster to be imported into Rancher.
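A minimal sketch of that install command, assuming the flags as they existed in mid-2020 (newer K3s releases replace `--no-deploy` with `--disable`):

```bash
# Install K3s without the bundled Traefik ingress controller, and write the
# kubeconfig with mode 644 so it is readable by unprivileged users.
$ curl -sfL https://get.k3s.io | sh -s - --no-deploy traefik --write-kubeconfig-mode 644
```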

To check that the node and pods are all up and running, use `k3s kubectl…` to run the same commands you would with `kubectl`:
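```bash
$ k3s kubectl get nodes
$ k3s kubectl get pods --all-namespaces
```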

Install Kong for Kubernetes on K3s

Once K3s is up and running, you can follow the normal steps to install Kong for Kubernetes, such as applying the all-in-one manifest shown below:
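At the time of writing, the Kong docs pointed at a single all-in-one manifest, which is what I am assuming here:

```bash
# Deploy the Kong proxy and ingress controller; the manifest creates its own
# "kong" namespace.
$ k3s kubectl apply -f https://bit.ly/k4k8s
```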

Once the Kong proxy and ingress controller are installed onto the K3s server, check the services, and you should see the `kong-proxy` LoadBalancer with an external IP:
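Assuming the manifest above, everything lands in the `kong` namespace. The external IP is handed out by K3s' built-in service load balancer, so your addresses will differ from this illustrative output:

```bash
$ k3s kubectl get services --namespace kong
NAME                      TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)                      AGE
kong-proxy                LoadBalancer   10.43.90.138   192.168.1.50   80:31568/TCP,443:32697/TCP   1m
kong-validation-webhook   ClusterIP      10.43.44.212   <none>         443/TCP                      1m
```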

To export the IP into a variable, run:
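A sketch using `jsonpath`, assuming the `kong` namespace from the manifest above:

```bash
# Pull the LoadBalancer IP off the kong-proxy service into a variable.
$ export PROXY_IP=$(k3s kubectl get service kong-proxy --namespace kong \
    -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
$ echo $PROXY_IP
```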

And lastly, before we throw up any services behind the proxy, check to see if the proxy is responding:
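The exact Kong version and latency numbers will vary, but the shape of the response should look like this:

```bash
$ curl -i $PROXY_IP
HTTP/1.1 404 Not Found
Content-Type: application/json; charset=utf-8
Server: kong/2.0.4
X-Kong-Response-Latency: 1

{"message":"no Route matched with those values"}
```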

It should return a 404 because we have yet to add any services in K3s. But as you can see in the headers, it is being proxied by the latest version of Kong and shows some additional information like the response latency!

Set Up Your K3s Application to Test Kong Ingress Controller

Now, let's set up an echo-server application in our K3s cluster to demonstrate how to use the Kong Ingress Controller:
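Here is a minimal sketch of such an application; the `echo` name and the echoserver image are illustrative choices, nothing K3s or Kong requires:

```yaml
# echo.yaml: a single-replica echo server plus a Service in front of it.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: echo
  template:
    metadata:
      labels:
        app: echo
    spec:
      containers:
        - name: echo
          image: gcr.io/kubernetes-e2e-test-images/echoserver:2.2
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: echo
spec:
  selector:
    app: echo
  ports:
    - port: 80
      targetPort: 8080
```

Apply it with `k3s kubectl apply -f echo.yaml`.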

Next, create an Ingress rule to proxy the echo-server created previously:
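A sketch of that rule, written against the `networking.k8s.io/v1beta1` API that was current at the time (newer clusters use `networking.k8s.io/v1`, with `ingressClassName: kong` in place of the annotation):

```yaml
# echo-ingress.yaml: route /echo on the Kong proxy to the echo Service.
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: echo-ingress
  annotations:
    kubernetes.io/ingress.class: kong
spec:
  rules:
    - http:
        paths:
          - path: /echo
            backend:
              serviceName: echo
              servicePort: 80
```

Apply it with `k3s kubectl apply -f echo-ingress.yaml`.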

Test the Ingress rule:
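```bash
$ curl -i $PROXY_IP/echo
```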

If everything is deployed correctly, you should see a 200 response with the echo server printing back the details of the request it received. This verifies that Kong can correctly route traffic to an application running inside Kubernetes.

Install a Rate-Limiting Plugin With Kong Ingress

Kong Ingress allows plugins to be executed at the service level, meaning Kong will execute a plugin whenever a request is sent to a specific K3s service, no matter which Ingress path it came from. You can also attach plugins to an Ingress path. But in the following steps, I will be using the rate-limiting plugin to keep any one IP from making too many requests to a particular service:

Create a KongPlugin resource:
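A minimal sketch of one; the `rl-by-ip` name and the five-requests-per-minute limit are just example values:

```yaml
# rate-limit.yaml: limit each client IP to 5 requests per minute,
# with counters kept locally on the Kong node.
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: rl-by-ip
config:
  minute: 5
  limit_by: ip
  policy: local
plugin: rate-limiting
```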

Next, apply the konghq.com/plugins annotation on the K3s Service that needs rate-limiting:
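Assuming the `echo` Service and the `rl-by-ip` plugin from above:

```bash
# Tell the ingress controller to run the rl-by-ip plugin for this Service.
$ k3s kubectl annotate service echo konghq.com/plugins=rl-by-ip
```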

Now, any request sent to this service will be protected by a rate-limit enforced by Kong:
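Repeating the earlier request should now show the plugin's rate-limit headers, and once the limit is exhausted, Kong answers with a 429 instead of proxying (header names may vary slightly across Kong versions):

```bash
$ curl -i $PROXY_IP/echo
HTTP/1.1 200 OK
X-RateLimit-Limit-Minute: 5
X-RateLimit-Remaining-Minute: 4
...
```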

From here, the possibilities are endless, since you can add any plugin to any Ingress path and/or service. To find all the plugins, check out Kong Hub. And if you need a gentle push in a fun direction, check out the AWS Lambda plugin to help invoke an AWS Lambda function from Kong! It is really handy with home automation projects where you run K3s on a Raspberry Pi.

Thanks for following along. If you have any additional questions, feel free to email me at kevin.chen@konghq.com.