
May 8, 2018

Announcing the Kubernetes Ingress Controller for Kong

Today we are excited to announce the Kubernetes Ingress Controller for Kong.

Container orchestration is rapidly evolving to meet the needs of software infrastructure that demands more reliability, flexibility, and efficiency than ever. At the forefront of these tools is Kubernetes, a container orchestrator that lets operations and application teams deploy and scale workloads to meet these needs while still giving developers self-service and a great developer experience.

Critical to these workloads, however, is a networking stack that can support highly dynamic deployments across a clustered container orchestrator at scale.

Kong is a high-performance, open source API gateway, traffic control, and microservice management layer that supports the demanding networking requirements of these workloads. Kong does not, however, force teams into a one-size-fits-all solution: to serve traffic for a deep ecosystem of software and enterprises, it ships with a rich set of plugins that extend it with authentication, traffic control, and more.

Deploying Kong onto Kubernetes has always been an easy process, but integrating services running on Kubernetes with Kong was a manual one. That’s why we built the Kubernetes Ingress Controller for Kong.

By implementing the Kubernetes Ingress specification, Kong ties directly into the Kubernetes lifecycle. As applications are deployed and new services are created, Kong automatically configures itself, live, to serve traffic to these services.

This follows the Kubernetes philosophy of using declarative resources to define what we want to happen, rather than the historical imperative model of configuring servers with a series of steps. In short, we define the end state, and the ingress controller and Kubernetes advance the cluster to that state, rather than the end state being a side effect of actions we perform on the cluster.
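
To make the contrast concrete, here is a minimal sketch (the Deployment name and image are placeholders): the imperative style mutates the cluster one command at a time, while the declarative style describes the end state and lets Kubernetes reconcile toward it.

# Imperative: each command is a step whose side effects produce the state
kubectl run my-app --image=nginx:1.13
kubectl scale deployment my-app --replicas=3

# Declarative: state the desired end result and apply it; the cluster
# converges to three running replicas of the same application
echo "
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: nginx:1.13
" | kubectl apply -f -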

This automatic configuration can be costly with load balancers that require a restart, a reload, or significant time to update routes. This is the case with the open source nginx ingress controller, which is driven by a configuration file that must be reloaded on every change. In a highly available, dynamic environment, that reload can mean downtime or unavailable routes while nginx is being reconfigured. The open source edition of Kong and the Kong Ingress Controller have a full management layer and API, live configuration of targets and upstreams, and durable, scalable state storage in either Postgres or Cassandra, ensuring every Kong instance stays in sync without delay or downtime.
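
For example, Kong’s Admin API (port 8001 by default) lets you inspect and change routing state at runtime, with no reload in between. A minimal sketch, assuming the Admin API is reachable on localhost and an upstream named my-upstream exists (the name is hypothetical):

# List the upstreams Kong currently knows about
curl http://localhost:8001/upstreams

# Add a new target to an upstream live; traffic shifts without a restart
curl -X POST http://localhost:8001/upstreams/my-upstream/targets \
  --data "target=10.0.0.5:8080" \
  --data "weight=100"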

Setting up the Kong Ingress Controller

Next, we’ll show you how easy it is to set up the Kong Ingress Controller. We have a GitHub example and will walk you through the steps below. You can also follow Kong CTO and Co-Founder Marco Palladino through the setup steps in this demo presentation.


Getting started is just a matter of installing the required Kubernetes manifests: the ingress controller Deployment itself, a fronting Service, and the RBAC components Kong needs to access the relevant parts of the Kubernetes API.

These manifests will work on any Kubernetes cluster. If you are just getting started, we recommend minikube for development. Minikube is an officially provided single-node Kubernetes cluster that runs in a virtual machine on your computer, and it is the easiest way to start working with Kubernetes as an application developer.
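
If you don’t have a cluster yet, starting one with minikube takes a single command (exact flags vary by environment; this is the minimal form):

# Start a local single-node cluster in a virtual machine
minikube start

# Confirm kubectl now points at the running cluster
kubectl cluster-info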

Installation is as simple as running the following command:

curl https://raw.githubusercontent.com/Kong/kubernetes-ingress-controller/master/deploy/single/all-in-one-postgres.yaml \
| kubectl create -f -
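
The all-in-one manifest creates its objects in a kong namespace (the same namespace we reference later when looking up the proxy). To confirm everything came up:

# Watch the ingress controller, proxy, and database pods start
kubectl get pods -n kong --watch

# kong-proxy is the Service that will receive proxied traffic
kubectl get svc -n kong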

After installing the Kong Ingress Controller, we can now begin deploying services and ingress resources so that Kong can begin serving traffic to our cluster resources. Let’s deploy a test service into our cluster that will serve headers and basic information about our pod back to us.

curl https://raw.githubusercontent.com/Kong/kubernetes-ingress-controller/master/deploy/manifests/dummy-application.yaml \
| kubectl create -f -
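
This deploys the application. Before wiring up an Ingress, it is worth confirming that the pods are running and that their Service exists (it is named http-svc, which is the backend we will reference in the Ingress below):

# The dummy application's pods should reach Running state
kubectl get pods

# http-svc is the Service our Ingress will route traffic to
kubectl get svc http-svc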

Our application is now deployed, but we still need an Ingress resource to serve traffic to it. We can create one for our dummy application with the following manifest:

$ echo "
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: foo-bar
spec:
  rules:
  - host: foo.bar
    http:
      paths:
      - path: /
        backend:
          serviceName: http-svc
          servicePort: 80
" | kubectl create -f -

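Once created, the resource should be listed with the foo.bar host rule we declared (output columns vary by Kubernetes version):

# Confirm the Ingress exists and carries our host rule
kubectl get ingress foo-bar
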
With our Kong Ingress Controller and our application deployed, we can now start serving traffic to our application.

$ export PROXY_IP=$(minikube service -n kong kong-proxy --url --format "{{ .IP }}" | head -1)
$ export HTTP_PORT=$(minikube service -n kong kong-proxy --url --format "{{ .Port }}" | head -1)

$ curl -vvvv $PROXY_IP:$HTTP_PORT -H "Host: foo.bar"

Adding Plugins

Plugins in the Kong Ingress Controller are exposed as Custom Resource Definitions (CRDs). CRDs extend the Kubernetes API server with new, operator-defined object types, allowing arbitrary data to drive custom control loops such as the Kong Ingress Controller.
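
You can list the custom types the installation registered. The exact set depends on the controller version, and the CRD name below is an assumption based on the standard plural-dot-group convention for the KongPlugin kind we use next:

# List all custom resource types registered with the API server
kubectl get customresourcedefinitions

# Inspect the KongPlugin type itself (name assumed from the
# configuration.konghq.com/v1 group used below)
kubectl get crd kongplugins.configuration.konghq.com -o yaml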

Let’s add the rate limiting plugin to our Kong example and tie it to our ingress resource. Plugins map one-to-one with ingress resources, giving us fine-grained control over how plugins are applied to our upstreams.

$ echo "
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: add-ratelimiting-to-route
config:
  hour: 100
  limit_by: ip
  second: 10
" | kubectl create -f -

$ kubectl patch ingress foo-bar \
-p '{"metadata":{"annotations":{"rate-limiting.plugin.konghq.com":"add-ratelimiting-to-route"}}}'
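
To confirm the patch landed, read the annotations back from the Ingress (a quick sanity check; output formatting varies by kubectl version):

# The plugin annotation should now appear on the Ingress resource
kubectl get ingress foo-bar -o jsonpath='{.metadata.annotations}'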

Now that this is applied, we can cURL our service endpoint again and get the following response:

$ curl -vvvv $PROXY_IP:$HTTP_PORT -H "Host: foo.bar"

> GET / HTTP/1.1
> Host: foo.bar
> User-Agent: curl/7.54.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Content-Type: text/html; charset=ISO-8859-1
< Transfer-Encoding: chunked
< Connection: keep-alive
< X-RateLimit-Limit-hour: 100
< X-RateLimit-Remaining-hour: 99
< X-RateLimit-Limit-second: 10
< X-RateLimit-Remaining-second: 9
We immediately see that our rate limiting headers are applied and available for this endpoint! If we deploy another service and ingress, the rate limit will not apply to it unless we create a new KongPlugin resource with its own configuration and add the corresponding annotation to the new ingress.
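
Since our configuration allows 10 requests per second, a quick burst past that limit should be rejected. A small sketch of how to see the limiter in action (Kong’s rate limiting plugin answers over-limit requests with HTTP 429):

# Fire 12 requests back to back and print only the status codes;
# requests beyond the per-second limit should come back as 429
for i in $(seq 1 12); do
  curl -s -o /dev/null -w "%{http_code}\n" $PROXY_IP:$HTTP_PORT -H "Host: foo.bar"
done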

Now we have a fully featured API gateway ingress stack, complete with a sample application and rate limiting in front of it. You’re ready to take control of your microservices with Kong!

Keep up with the development of the Kong Ingress Controller, request features, or report issues on our GitHub repository. We look forward to seeing how you use it!