Dear Load Balancer,
When I started my journey into Kubernetes, you were always there for me when I needed to expose a service externally. We started small with just one exposed service, and you were dependable and easy to set up. But things have changed. My application has grown, and now the cluster has 10 services that need to be exposed externally. This has made communication difficult. I wonder what will happen when we have 50 services? The question of our relationship’s scalability prompted me to write this letter. Please try to understand – it’s not you, it’s me. I need something that scales. I need ingress.
But Real Talk, What Is the Ingress Resource?
The ingress resource is an API object. I repeat: it is an API object and NOT a service like the load balancer. This API object simply defines rules that map external traffic to resources within our cluster. To paint a picture, let’s assume we have a finance API with two services we would like to expose externally: billing and ordering. This is what the ingress resource would look like:
- host: example.com
- path: /bills
- path: /orders
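Spelled out as a full manifest, those rules might look like the sketch below. This assumes the older networking.k8s.io/v1beta1 API, which matches the serviceName/servicePort fields mentioned here; the service names and ports are illustrative, and newer clusters use networking.k8s.io/v1, where the backend is written as service.name and service.port instead.

```yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: finance-api          # illustrative name
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /bills
        backend:
          serviceName: billing    # illustrative service name
          servicePort: 80         # illustrative port
      - path: /orders
        backend:
          serviceName: ordering   # illustrative service name
          servicePort: 80         # illustrative port
```

One ingress resource, one host, and as many paths as you have services to expose.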
For each of the two services, we create a path that points to the serviceName and servicePort. But if we were to go ahead and
kubectl apply -f … this, nothing would happen. That is because Kubernetes does not know how to enforce these rules.
In order for our ingress resource to work, the cluster must have an ingress controller running. The ingress controller is what reads and enforces the ingress resource information. And since ingress controllers are not started automatically with a cluster, we get the luxury of choosing which one to use. One of the many options is the ingress controller created by Kong, called Kong for Kubernetes (K4K8S). K4K8S utilizes our bread-and-butter Kong API gateway as the underlying proxy to interpret and execute ingress rules. To get started, all you have to do is run:
$ kubectl apply -f https://bit.ly/k4k8s
This will deploy two components:
- Kong: the open source gateway
- Controller: a daemon process that integrates with the Kubernetes platform and configures Kong
In addition, you’ll notice a few Kong CRDs. The CRDs help unlock all that the Kong Gateway has to offer when deploying in Kubernetes. KongPlugin allows you to browse our plugin hub and configure plugins for each ingress or service. KongConsumer and KongCredential provide authentication capabilities. And lastly, KongIngress unlocks the gateway’s different routing, load balancing and health check capabilities.
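To make the KongPlugin CRD concrete, here is a hedged sketch of a plugin resource that applies Kong’s rate-limiting plugin; the resource name and the limits are illustrative, and the annotation key used to attach plugins has varied across controller versions:

```yaml
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: rate-limit-5-per-minute   # illustrative name
plugin: rate-limiting             # which Kong plugin to run
config:                           # plugin-specific configuration
  minute: 5                       # illustrative limit
  policy: local
```

You would then attach it to a particular ingress or service with an annotation such as konghq.com/plugins: rate-limit-5-per-minute, and Kong enforces the limit at the gateway without any change to the application.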
So, what’s next? Well, if you deployed the ingress controller and want a step-by-step tutorial on how to set up a service and route traffic via ingress, check out our GitHub repository demo here. It’ll take less than five minutes, I promise! You’ll never have to manage 50 load balancers in the future 😊 So, let me wrap up my Dear John letter…
So Load Balancer, I’ve got a job to do, too. Where I’m going, you can’t follow. What I’ve got to do, you can’t be any part of. LoadBalancer, I’m no good at being noble, but it doesn’t take much to see that the problems of three little services don’t amount to a hill of beans in this crazy cluster. Someday you’ll understand that.