From the Kong API Gateway perspective, using Consul as its Service Discovery infrastructure is one of the most well-known and common integration use cases. With this powerful combination, more flexible and advanced routing policies can be implemented to address Canary Releases, A/B testing, Blue-Green deployments, etc., all abstracted away from the Gateway, which no longer has to deal with lookup procedures itself.
This article focuses on integrating Kong for Kubernetes (K4K8S), the Kong Ingress Controller based on the Kong API Gateway, with Consul Service Discovery running on a Kubernetes EKS Cluster. Kong for Kubernetes can implement all sorts of policies to protect the Ingresses defined to expose Kubernetes services to external Consumers, including Rate Limiting, API Keys, OAuth/OIDC grants, etc.
The following diagram describes the Kong for Kubernetes Ingress Controller and Consul Service Discovery implementing a Canary Release:
This section assumes you have a Kubernetes Cluster with both Consul and Kong for Kubernetes installed. This HashiCorp link can help you spin up a Consul Kubernetes deployment. Similarly, Kong provides the following link to install Kong for Kubernetes.
After getting your Kubernetes Cluster installed with Consul and Kong for Kubernetes deployed, we’re ready to start the 5-step configuration process:
First of all, let's configure Kubernetes to forward DNS queries for Consul Services to Consul's DNS service. The configuration depends on the DNS provider used by your Kubernetes engine. Please refer to this link to check how to configure KubeDNS or CoreDNS.
Once configured, DNS requests in the form <consul-service-name>.service.consul will resolve to the corresponding Consul Services. As an example, here are the configuration steps for CoreDNS:
Get the Consul DNS’ Cluster IP:
kubectl get service consul-consul-dns -n hashicorp -o jsonpath='{.spec.clusterIP}'

10.105.175.26
Edit the CoreDNS ConfigMap to include a forward definition that points to the Consul DNS Kubernetes Service.
kubectl edit configmap coredns -n kube-system

# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
data:
  Corefile: |
    .:53 {
        errors
        health {
           lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
           pods insecure
           fallthrough in-addr.arpa ip6.arpa
           ttl 30
        }
        prometheus :9153
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
    consul {
        errors
        cache 30
        forward . 10.105.175.26
    }
kind: ConfigMap
metadata:
  creationTimestamp: "2020-06-19T13:42:16Z"
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:data:
        .: {}
        f:Corefile: {}
    manager: kubeadm
    operation: Update
    time: "2020-06-19T13:42:16Z"
  name: coredns
  namespace: kube-system
  resourceVersion: "178"
  selfLink: /api/v1/namespaces/kube-system/configmaps/coredns
  uid: 698c5d0c-998e-4aa4-9857-67958eeee25a
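Before moving on, it's worth checking that the forwarding actually works. A quick sanity check (assuming a busybox image can be pulled into the cluster; Consul registers its own servers under the consul service name by default) is to run a throwaway pod and resolve Consul's own service entry:

kubectl run consul-dns-test --rm -it --restart=Never --image=busybox:1.28 -- nslookup consul.service.consul

If CoreDNS is forwarding correctly, the lookup should return the IP addresses of the Consul servers.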
For the purpose of this article, we're going to create our Kubernetes Deployments using basic Docker Images for both the Current and the Canary releases, available on Docker Hub (http://hub.docker.com). Both Images return the current datetime, differing only in the text of their responses. As expected, after the deployment you should see two Kubernetes Services: benigno-v1 and benigno-v2.
The Current application release can be deployed using the following declaration:
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: benigno-v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: benigno
      version: v1
  template:
    metadata:
      labels:
        app: benigno
        version: v1
    spec:
      containers:
      - name: benigno
        image: claudioacquaviva/benigno
        ports:
        - containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
  name: benigno-v1
  labels:
    app: benigno-v1
spec:
  type: ClusterIP
  ports:
  - port: 5000
    name: http
  selector:
    app: benigno
    version: v1
EOF
The Canary Release is deployed using the command below:
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: benigno-v2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: benigno
      version: v2
  template:
    metadata:
      labels:
        app: benigno
        version: v2
    spec:
      containers:
      - name: benigno
        image: claudioacquaviva/benigno_rc
        ports:
        - containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
  name: benigno-v2
  labels:
    app: benigno-v2
spec:
  type: ClusterIP
  ports:
  - port: 5000
    name: http
  selector:
    app: benigno
    version: v2
EOF
Now, we have to register a Consul Service based on the two Kubernetes Services we have deployed. The benigno1 Consul Service will have both Kubernetes Services' Cluster IPs registered as instances with different weights. Any DNS request to it will then return one of the IPs according to the weights defined.
In order to get the Kubernetes Services’ Cluster IPs run:
kubectl get service --all-namespaces

NAMESPACE   NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
default     benigno-v1   ClusterIP   10.100.225.125   <none>        5000/TCP   116s
default     benigno-v2   ClusterIP   10.100.148.236   <none>        5000/TCP   12s
…
Then create the two files described below using those Cluster IPs. Notice the weights: they tell Consul DNS to return the Canary Release IP address for only 20% of the requests:
ben0.json:
{ "ID": "ben0", "Name": "benigno1", "Tags": ["primary"], "Address": "10.100.225.125", "Port": 5000, "weights": { "passing": 80, "warning": 1 } "proxy": { "local_service_port": 5000 } }
ben1.json:
{ "ID": "ben1", "Name": "benigno1", "Tags": ["secondary"], "Address": "10.100.148.236", "Port": 5000, "weights": { "passing": 20, "warning": 1 } "proxy": { "local_service_port": 5000 } }
Expose Consul using port-forward so we can send requests to it and get the Consul Service registered. On one local terminal run:
kubectl port-forward service/consul-connect-consul-server -n hashicorp 8500:8500
Open another local terminal to send the requests using the files created before. We’re using HTTPie to send the requests. Feel free to use any other tool.
http put :8500/v1/agent/service/register < ben0.json
http put :8500/v1/agent/service/register < ben1.json
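If you want to confirm that both instances were registered with the expected weights, you can query Consul's standard catalog API through the same port-forward:

http :8500/v1/catalog/service/benigno1

The response should list both Addresses, 10.100.225.125 and 10.100.148.236, each with its Weights block.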
After registering the Consul Service, any DNS request to benigno1.service.consul will return one of the IPs according to the weight policy described. Next, we create a Kubernetes ExternalName Service to give the Consul Service a stable reference inside the cluster.
cat <<EOF | kubectl apply -f -
kind: Service
apiVersion: v1
metadata:
  name: benigno1
spec:
  ports:
  - protocol: TCP
    port: 5000
  type: ExternalName
  externalName: benigno1.service.consul
EOF
Finally, we're going to expose the Canary Release through an Ingress managed by Kong for Kubernetes. Using the ExternalName Service created before, we abstract both Application releases under the Consul Service name benigno1.service.consul.
cat <<EOF | kubectl apply -f -
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: benignoroute
  annotations:
    kubernetes.io/ingress.class: kong
    konghq.com/strip-path: "true"
spec:
  rules:
  - http:
      paths:
      - path: /benignoroute
        backend:
          serviceName: benigno1
          servicePort: 5000
EOF
You can test the Ingress by sending a request like this:
$ http <K4K8S-EXTERNALIP>:<K4K8S-PORT>/benignoroute
HTTP/1.1 200 OK
Connection: keep-alive
Content-Length: 36
Content-Type: text/html; charset=utf-8
Date: Wed, 16 Sep 2020 20:37:22 GMT
Server: Werkzeug/1.0.1 Python/3.8.3
Via: kong/2.1.3
X-Kong-Proxy-Latency: 4
X-Kong-Upstream-Latency: 2

Hello World, Benigno, Canary Release
Start a loop to see the Canary Release in action:
while [ 1 ]; do curl http://<K4K8S-EXTERNALIP>:<K4K8S-PORT>/benignoroute; sleep 1; echo; done
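Because the traffic split is probabilistic, a single run of the loop may not make the 80/20 ratio obvious. A rough way to check the distribution (using the same placeholder address, and assuming the Canary response contains the word "Canary", as shown in the test above) is to send a batch of requests and count how many hit the new release:

for i in $(seq 1 50); do curl -s http://<K4K8S-EXTERNALIP>:<K4K8S-PORT>/benignoroute; echo; done | grep -c "Canary"

With the weights defined above, roughly 10 of the 50 responses should come from the Canary Release.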
Kong for Kubernetes provides CRDs not just to define Ingresses but also to apply typical policies at the Ingress Controller layer. Feel free to experiment with further policy implementations like caching, log processing, OIDC-based authentication, GraphQL integration and more, using the extensive list of plugins provided by Kong.
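As an illustration, here is a minimal sketch of a rate-limiting policy applied to the Ingress we just created. The KongPlugin resource and the konghq.com/plugins annotation follow the Kong Ingress Controller conventions; the plugin name rl-by-minute and the limit of 5 requests per minute are arbitrary values chosen for this example:

cat <<EOF | kubectl apply -f -
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: rl-by-minute
config:
  minute: 5
  policy: local
plugin: rate-limiting
EOF

kubectl annotate ingress benignoroute konghq.com/plugins=rl-by-minute

Once the annotation is in place, requests to /benignoroute beyond the configured limit should be rejected with an HTTP 429.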