Once upon a time, we had these giant structures where thousands of people would congregate to share ideas, pamphlets filled to the margins with buzzwords, and cheap, branded t-shirts. Yep, tech conferences – oh, what a relic of the past that I miss. It used to be part of my job to attend these.
I loved it all, especially the most recent North American KubeCon, hosted in San Diego. But for those who still recall such distant memories: by the last day, most people were rather burnt out or had raspy throats from repeating the same “Hi, my name is Kevin, and I am the ________ at ________” introduction. So, that last afternoon, I wandered off into a random session just to listen and escape the crowd. It was by chance that I stumbled into Darren Shepherd’s talk, “K3s Under the Hood: Building a Product-grade Lightweight Kubernetes Distro.”
His slides pop up on the screen: “Short History of K3s.”
And underneath it, the first bullet point: “total fluke.”
Well, say no more. I relate to K3s on a spiritual level now. Or, as my parents like to call me… a happy accident.
To my surprise, K3s ships with an ingress controller by default. While the default proxy/load balancer works, I needed some plugin functionality that just wasn’t supported unless I used Kong Gateway. So, let’s run through a quick guide on how to start K3s on Ubuntu, configure it to support Kong for Kubernetes, and deploy some services and plugins.
First, use the installation script from https://get.k3s.io to install K3s as a service on systemd- and openrc-based systems. But we need to add some additional flags to configure the installation. The first is `--no-deploy traefik`, which skips deploying the bundled Traefik ingress controller, since we need to deploy Kong to use its plugins. The second is `--write-kubeconfig-mode 644`, which sets the permissions on the generated kubeconfig file so unprivileged users can read it. This is useful for allowing a K3s cluster to be imported into Rancher.
```
$ curl -sfL https://get.k3s.io | sh -s - --no-deploy traefik --write-kubeconfig-mode 644
[INFO] Finding release for channel stable
[INFO] Using v1.18.4+k3s1 as release
[INFO] Downloading hash https://github.com/rancher/k3s/releases/download/v1.18.4+k3s1/sha256sum-amd64.txt
[INFO] Downloading binary https://github.com/rancher/k3s/releases/download/v1.18.4+k3s1/k3s
[INFO] Verifying binary download
[INFO] Installing k3s to /usr/local/bin/k3s
[INFO] Skipping /usr/local/bin/kubectl symlink to k3s, already exists
[INFO] Creating /usr/local/bin/crictl symlink to k3s
[INFO] Skipping /usr/local/bin/ctr symlink to k3s, command exists in PATH at /usr/bin/ctr
[INFO] Creating killall script /usr/local/bin/k3s-killall.sh
[INFO] Creating uninstall script /usr/local/bin/k3s-uninstall.sh
[INFO] env: Creating environment file /etc/systemd/system/k3s.service.env
[INFO] systemd: Creating service file /etc/systemd/system/k3s.service
[INFO] systemd: Enabling k3s unit
Created symlink from /etc/systemd/system/multi-user.target.wants/k3s.service to /etc/systemd/system/k3s.service.
[INFO] systemd: Starting k3s
```
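If the installer finishes but the cluster doesn’t seem to come up, note from the output above that K3s runs as an ordinary systemd unit, so the usual systemd tooling applies:

```
$ systemctl status k3s
$ journalctl -u k3s -f
```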
To check that the node and pods are all up and running, use `k3s kubectl…` to run the same commands you would with `kubectl`:
```
$ k3s kubectl get nodes
NAME            STATUS   ROLES    AGE     VERSION
ubuntu-xenial   Ready    master   4m38s   v1.18.4+k3s1

$ k3s kubectl get pods -A
NAMESPACE     NAME                                     READY   STATUS    RESTARTS   AGE
kube-system   metrics-server-7566d596c8-vqqz7          1/1     Running   0          4m30s
kube-system   local-path-provisioner-6d59f47c7-tcs2l   1/1     Running   0          4m30s
kube-system   coredns-8655855d6-rjzrq                  1/1     Running   0          4m30s
```
Once K3s is up and running, you can follow the normal steps to install Kong for Kubernetes, such as applying the all-in-one manifest shown below:
```
$ k3s kubectl create -f https://bit.ly/k4k8s
namespace/kong created
customresourcedefinition.apiextensions.k8s.io/kongclusterplugins.configuration.konghq.com created
customresourcedefinition.apiextensions.k8s.io/kongconsumers.configuration.konghq.com created
customresourcedefinition.apiextensions.k8s.io/kongcredentials.configuration.konghq.com created
customresourcedefinition.apiextensions.k8s.io/kongingresses.configuration.konghq.com created
customresourcedefinition.apiextensions.k8s.io/kongplugins.configuration.konghq.com created
customresourcedefinition.apiextensions.k8s.io/tcpingresses.configuration.konghq.com created
serviceaccount/kong-serviceaccount created
clusterrole.rbac.authorization.k8s.io/kong-ingress-clusterrole created
clusterrolebinding.rbac.authorization.k8s.io/kong-ingress-clusterrole-nisa-binding created
service/kong-proxy created
service/kong-validation-webhook created
deployment.apps/ingress-kong created
```
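Depending on your machine and network, the proxy image can take a moment to pull and start. If you want to block until it’s ready, the standard `kubectl` rollout tooling works here too:

```
$ k3s kubectl rollout status deployment/ingress-kong --namespace kong
```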
Once the Kong proxy and ingress controller are installed on the K3s server, check the services in the `kong` namespace; you should see the `kong-proxy` LoadBalancer with an external IP:
```
$ k3s kubectl get svc --namespace kong
NAME                      TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
kong-validation-webhook   ClusterIP      10.43.157.178   <none>        443/TCP                      61s
kong-proxy                LoadBalancer   10.43.63.117    10.0.2.15     80:32427/TCP,443:30563/TCP   61s
```
To export the IP into a variable, run:
```
$ PROXY_IP=$(k3s kubectl get services --namespace kong kong-proxy -o jsonpath={.status.loadBalancer.ingress[0].ip})
```
And lastly, before we stand up any services behind the proxy, check that the proxy is responding:
```
$ curl -i $PROXY_IP
HTTP/1.1 404 Not Found
Date: Mon, 29 Jun 2020 20:31:16 GMT
Content-Type: application/json; charset=utf-8
Connection: keep-alive
Content-Length: 48
X-Kong-Response-Latency: 0
Server: kong/2.0.4

{"message":"no Route matched with those values"}
```
It should return a 404 because we have yet to add any services in K3s. But as you can see in the headers, the request is being proxied by Kong (2.0.4, the latest version at the time of writing), which also reports some additional information, like the response latency!
Now, let’s set up an echo-server application in our K3s cluster to demonstrate how to use the Kong Ingress Controller:
```
$ k3s kubectl apply -f https://bit.ly/echo-service
service/echo created
deployment.apps/echo created
```
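If you’re wondering what’s behind that short link: per the output above, it creates a Service named `echo` in front of a Deployment named `echo`. A minimal sketch of an equivalent manifest is below; the container image here is my assumption, and any basic echo-server image listening on port 8080 would do:

```
apiVersion: v1
kind: Service
metadata:
  name: echo
spec:
  selector:
    app: echo
  ports:
  - port: 80          # matches the servicePort used by the Ingress below
    targetPort: 8080  # port the echo container listens on (assumed)
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: echo
  template:
    metadata:
      labels:
        app: echo
    spec:
      containers:
      - name: echo
        image: gcr.io/google-containers/echoserver:1.10  # assumed image
        ports:
        - containerPort: 8080
```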
Next, create an Ingress rule to proxy the echo-server created previously:
$ echo " apiVersion: extensions/v1beta1 kind: Ingress metadata: name: demo spec: rules: - http: paths: - path: /foo backend: serviceName: echo servicePort: 80 " | k3s kubectl apply -f - ingress.extensions/demo created
Test the Ingress rule:
```
$ curl -i $PROXY_IP/foo
HTTP/1.1 200 OK
Content-Type: text/plain; charset=UTF-8
Transfer-Encoding: chunked
Connection: keep-alive
Date: Mon, 29 Jun 2020 20:31:07 GMT
Server: echoserver
X-Kong-Upstream-Latency: 0
X-Kong-Proxy-Latency: 1
Via: kong/2.0.4

Hostname: echo-78b867555-jkhhl

Pod Information:
    node name:      ubuntu-xenial
    pod name:       echo-78b867555-jkhhl
    pod namespace:  default
    pod IP:         10.42.0.7
<-- clipped -->
```
If everything is deployed correctly, you should see the above response. This verifies that Kong can correctly route traffic to an application running inside Kubernetes.
Kong Ingress allows plugins to be executed at the service level, meaning Kong will execute a plugin whenever a request is sent to a specific Kubernetes Service, no matter which Ingress path it came from. You can also attach a plugin to an Ingress path (there’s an example of that below). In the following step, I will use the rate-limiting plugin to restrict any single IP from making too many requests to one service.
Create a KongPlugin resource:
$ echo " apiVersion: configuration.konghq.com/v1 kind: KongPlugin metadata: name: rl-by-ip config: minute: 5 limit_by: ip policy: local plugin: rate-limiting " | k3s kubectl apply -f - kongplugin.configuration.konghq.com/rl-by-ip created
Next, apply the `konghq.com/plugins` annotation on the Kubernetes Service that needs rate limiting:
```
$ k3s kubectl patch svc echo -p '{"metadata":{"annotations":{"konghq.com/plugins": "rl-by-ip\n"}}}'
service/echo patched
```
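Alternatively, if you’d rather scope the plugin to a single Ingress path instead of the whole Service, the same `konghq.com/plugins` annotation can go on the Ingress resource; for example, on the `demo` Ingress created earlier:

```
$ k3s kubectl patch ingress demo -p '{"metadata":{"annotations":{"konghq.com/plugins": "rl-by-ip"}}}'
```

For this walkthrough, though, we’ll stick with the service-level annotation.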
Now, any request sent to this service will be protected by a rate-limit enforced by Kong:
```
$ curl -I $PROXY_IP/foo
HTTP/1.1 200 OK
Content-Type: text/plain; charset=UTF-8
Connection: keep-alive
Date: Mon, 29 Jun 2020 20:35:40 GMT
Server: echoserver
X-RateLimit-Remaining-Minute: 4
X-RateLimit-Limit-Minute: 5
RateLimit-Remaining: 4
RateLimit-Limit: 5
RateLimit-Reset: 20
X-Kong-Upstream-Latency: 5
X-Kong-Proxy-Latency: 2
Via: kong/2.0.4
```
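To actually watch the limiter trip, send a quick burst of requests. The first five within the minute should return 200, and the sixth should come back as HTTP 429 Too Many Requests:

```
$ for i in $(seq 1 6); do curl -s -o /dev/null -w "%{http_code}\n" $PROXY_IP/foo; done
```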
From here, the possibilities are endless, since you can add any plugin to any Ingress path and/or service. To find all the plugins, check out Kong Hub. And if you need a gentle push in a fun direction, check out the AWS Lambda plugin to invoke an AWS Lambda function from Kong! It is really handy for home automation projects where you run K3s on a Raspberry Pi.
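As a taste of that, wiring up the AWS Lambda plugin is just another KongPlugin resource. Here is a minimal sketch, assuming a hypothetical function name and placeholder credentials (the region and function are yours to substitute):

```
$ echo "
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: invoke-lambda
config:
  aws_key: AWS_ACCESS_KEY_PLACEHOLDER
  aws_secret: AWS_SECRET_KEY_PLACEHOLDER
  aws_region: us-east-1
  function_name: my-home-automation-fn   # hypothetical function
plugin: aws-lambda
" | k3s kubectl apply -f -
```

Attach it to a Service or Ingress with the same `konghq.com/plugins` annotation as before.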
Thanks for following along. If you have any additional questions, feel free to email me at kevin.chen@konghq.com.