
By Claudio Acquaviva on June 29, 2022

Kong Gateway Enterprise and Amazon EKS Anywhere Bare Metal

Power up application modernization and migration using Kong Gateway Enterprise and Amazon EKS Anywhere Bare Metal

One of the most critical requirements for an application modernization project is supporting workloads running on multiple platforms. In fact, such projects naturally include a workload migration process based on a hybrid model, where on-premises and cloud environments coexist during the transformation.

Another typical technical decision is the adoption of Kubernetes as the main platform both for existing services and for the microservices created during the modernization project.

Kubernetes has become the de facto standard platform for developing modern applications. Among the vast collection of technological resources offered by Kubernetes, perhaps the most important is a standardized environment for the complete life cycle of an application. In other words: regardless of where the application will be installed, on premises or in the cloud, a Kubernetes distribution will be available.

This is exactly the purpose of Amazon EKS Anywhere (EKS-A): to provide the well-known Amazon EKS technology across multiple environments.


Kong Gateway Enterprise Principles

Kong Gateway Enterprise natively provides several capabilities that are fully aligned with the flexibility offered by EKS Anywhere. For example:

  • Architectural freedom: Kong connects services across any platform including Linux-based OSes, Docker and any Kubernetes flavor like Amazon EKS and Amazon EKS Anywhere.
  • Kubernetes native: Kong fully supports Kubernetes including all capabilities provided by the platform such as CRDs, HPA (“Horizontal Pod Autoscaler”), Cert-Manager, Knative, Helm Charts, Operators, etc.
  • Hybrid and multi-cloud: providing a multi-platform engine is not enough; the gateway has to support several platforms at the same time, across any infrastructure and any deployment pattern, including hybrid and multi-cloud configurations.

In fact, a Kong Gateway hybrid deployment relies on the complete separation of the control plane (CP) and the data plane (DP). In this model, the control plane, responsible for administration tasks, and the data plane, used exclusively by API consumers, run in completely separate environments.

In summary, the synergistic combination of both technologies results in a powerful platform for running critical applications.

Amazon EKS Anywhere Bare Metal and Kong Gateway Enterprise

Amazon EKS Anywhere is based on the concept of providers. Each provider is capable of deploying EKS clusters in a specific environment. The first Amazon EKS Anywhere version, released in September 2021, supported Docker for local development clusters and VMware vSphere for production clusters.

With this new release, Amazon EKS Anywhere adds another deployment target: bare metal.

Ultimately, a Kong Gateway Enterprise hybrid deployment on the Amazon EKS and Amazon EKS Anywhere platforms is represented in the diagram below:

  • Kong Control Plane runs on an EKS cluster in AWS Cloud. It is a stable environment used only by admins to create APIs, policies and API documentation.
  • Kong Data Plane #1 runs on an on-prem EKS Anywhere cluster deployed on bare metal. It exposes the services and microservices deployed in all environments we may have, including application servers, legacy systems and other EKS Anywhere clusters.
  • Kong Data Plane #2 runs on another EKS Anywhere cluster, this time based on VMware vSphere. It plays a similar role to Kong DP #1, exposing the local services and applications.
  • Kong Data Plane #3 runs on AWS Cloud along with Kong Control Plane, but on a different AWS region. It supports the microservices and services that have been migrated from the on-prem environment or new microservices developed in cloud environments like ECS, EC2/ASG, etc.
  • All Data Planes leverage AWS services, such as Amazon Cognito for OIDC-based authentication and Amazon OpenSearch Service for log processing, to implement policies that ensure the microservices and services are being safely consumed.
  • The communication between the control plane and the data planes is based on mTLS tunnels. The control plane publishes APIs and policies to all existing data planes using one tunnel. In the other direction, each data plane uses a second tunnel to report metrics about API request processing back to the control plane.

EKS Anywhere is built on three main technologies:

  • EKS Distro: Amazon’s open source distribution of Kubernetes.
  • Cluster API: a Kubernetes project focused on declarative, Kubernetes-style APIs for cluster creation, configuration and management.
  • Tinkerbell: a project for provisioning and managing bare metal servers. The specific Cluster-API-Provider-Tinkerbell (CAPT) provider is embedded in the EKS Anywhere infrastructure.

The following diagram describes an Amazon EKS Anywhere Bare Metal and Kong Gateway Enterprise Architecture Topology:

To walk through the diagram in more detail, let’s describe the two main processes needed to get a hybrid Kong Gateway Enterprise deployment running with the Control Plane on EKS and the Data Plane on EKS Anywhere:

  1. EKS Anywhere Cluster Deployment
  2. Kong Gateway Enterprise Hybrid Deployment

EKS Anywhere Cluster Deployment

An EKS Anywhere cluster deployment process starts with an Admin Server node. This server has two main components:

  • Tinkerbell Stack, responsible for provisioning the bare metal servers, including remote iPXE boots and operating system provisioning.
  • EKS Anywhere Bootstrap Cluster. This is a temporary cluster, based on the Cluster API Bootstrap Cluster definition, responsible for provisioning the EKS Anywhere Workload Cluster (also based on the Cluster API definitions) running on both bare metal servers. The Workload Cluster comprises the typical EKS Control Plane and Worker Node.

The EKS Anywhere CLI used in the Admin Server node abstracts all the underlying components. Its fundamental command takes as input two configuration files to create both Bootstrap and Workload Clusters. Here’s an example:

eksctl-anywhere create cluster -f ClusterSpec.yml --hardware-csv Hardware.csv

The two configuration files are:

ClusterSpec.yml: describes how the EKS Anywhere Workload Cluster should be created by the Bootstrap Cluster.

Hardware.csv: lists the bare metal servers, including their MAC addresses, so the Tinkerbell Stack can use iPXE remote boot to provision them.
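As an illustration, the cluster specification might look like the sketch below. All names, addresses and field choices here are hypothetical; the authoritative schema is in the Amazon EKS Anywhere Bare Metal documentation.

```yaml
# ClusterSpec.yml (abridged sketch with hypothetical values)
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
  name: eksa-baremetal
spec:
  kubernetesVersion: "1.21"
  controlPlaneConfiguration:
    count: 1
    endpoint:
      host: "10.0.0.10"   # VIP used to reach the cluster's control plane
  workerNodeGroupConfigurations:
    - count: 1
      name: md-0
  datacenterRef:
    kind: TinkerbellDatacenterConfig
    name: eksa-baremetal
```

Hardware.csv, in turn, is a plain CSV with one row per machine; its columns typically include the hostname, BMC address and credentials, MAC address, IP configuration, and labels used to map each server to a control plane or worker role.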

During the cluster creation process, the EKS Anywhere CLI creates a bootstrap Kind cluster on the Admin Server node and installs Cluster-API (CAPI) and Cluster-API-Provider-Tinkerbell (CAPT) components.

CAPI creates cluster node resources and CAPT maps hardware to nodes and powers up the corresponding bare metal servers. The bare metal servers iPXE boot and run the OS provisioning process by communicating with the Tinkerbell infrastructure. Both servers are provisioned with Ubuntu OS.

The cluster management resources are then transferred from the bootstrap cluster to the target EKS Anywhere workload cluster. Finally, the local bootstrap Kind cluster is deleted from the Admin Server node.
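Assuming the CLI’s default behavior, eksctl-anywhere writes the Workload Cluster’s kubeconfig into a folder named after the cluster, which can be used to confirm that both bare metal servers joined the cluster (the cluster name below is hypothetical):

```shell
# Point kubectl at the newly created EKS Anywhere Workload Cluster
export KUBECONFIG=./eksa-baremetal/eksa-baremetal-eks-a-cluster.kubeconfig

# Both bare metal servers (control plane and worker) should report Ready
kubectl get nodes -o wide
```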

Kong Gateway Enterprise Hybrid Deployment

After executing the eksctl-anywhere create cluster command, the two bare metal servers are ready to receive the Kong Gateway deployment, more precisely the Kong Data Plane.

Control Plane Deployment

Before deploying the Kong Data Plane on the EKS Anywhere Worker Node, we need to take care of the Kong Control Plane. As described in the diagram, it will be running on a regular Amazon EKS Cluster in AWS Cloud.

The eksctl CLI can be used to provision a fresh Amazon EKS cluster. For example:

eksctl create cluster --name kong-control-plane --version 1.21 --nodegroup-name standard-workers --node-type t3.large --nodes 1

And then, using the Kong Gateway Helm Charts, the Control Plane can be deployed. Here’s an example of a Helm command:

helm install kong kong/kong -n kong \
--set ingressController.enabled=true \
--set ingressController.installCRDs=false \
--set ingressController.image.repository=kong/kubernetes-ingress-controller \
--set ingressController.image.tag=2.4 \
--set image.repository=kong/kong-gateway \
--set image.tag= \
--set env.database=postgres \
--set env.role=control_plane \
--set env.cluster_cert=/etc/secrets/kong-cluster-cert/tls.crt \
--set env.cluster_cert_key=/etc/secrets/kong-cluster-cert/tls.key \
--set cluster.enabled=true \
--set cluster.type=LoadBalancer \
--set cluster.tls.enabled=true \
--set cluster.tls.servicePort=8005 \
--set cluster.tls.containerPort=8005 \
--set clustertelemetry.enabled=true \
--set clustertelemetry.type=LoadBalancer \
--set clustertelemetry.tls.enabled=true \
--set clustertelemetry.tls.servicePort=8006 \
--set clustertelemetry.tls.containerPort=8006 \
--set proxy.enabled=true \
--set proxy.type=ClusterIP \
--set admin.enabled=true \
--set admin.http.enabled=true \
--set admin.type=LoadBalancer \
--set enterprise.enabled=true \
--set portal.enabled=false \
--set portalapi.enabled=false \
--set enterprise.rbac.enabled=false \
--set enterprise.smtp.enabled=false \
--set manager.enabled=true \
--set manager.type=LoadBalancer \
--set secretVolumes[0]=kong-cluster-cert \
--set postgresql.enabled=true \
--set postgresql.postgresqlUsername=kong \
--set postgresql.postgresqlDatabase=kong \
--set postgresql.postgresqlPassword=kong \
--set enterprise.license_secret=kong-enterprise-license

The most important settings for the Kong Control Plane are:

  • env.role=control_plane to configure this Kong Gateway instance as the Control Plane
  • cluster.type=LoadBalancer to expose the Control Plane with a load balancer, which the Data Planes connect to in order to establish the mTLS-based encrypted tunnel
  • cluster.tls.servicePort=8005 as the mTLS tunnel port used for API and policy publication
  • clustertelemetry.type=LoadBalancer to expose the telemetry endpoint to the Data Planes
  • clustertelemetry.tls.servicePort=8006 as the tunnel port the Data Planes use to report API consumption metrics back to the Control Plane
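The Helm command mounts a kong-cluster-cert secret (via secretVolumes[0]) holding the certificate pair that secures the mTLS tunnels. As a minimal sketch, assuming the curve and CN commonly used in Kong hybrid-mode examples, such a pair can be generated with openssl:

```shell
# Generate a self-signed certificate/key pair shared by the Control Plane
# and the Data Planes for the mTLS clustering tunnels.
openssl ecparam -name secp384r1 -genkey -out ./tls.key
openssl req -new -x509 -key ./tls.key -out ./tls.crt \
  -days 1095 -subj "/CN=kong_clustering"
```

The pair can then be stored in each cluster before running Helm, for example with kubectl create secret tls kong-cluster-cert --cert=./tls.crt --key=./tls.key -n kong.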

Check the Kong Workshop available in the AWS Portal for a more detailed description of the Kong Gateway Enterprise hybrid deployment.

In fact, Kong Gateway can be deployed not just with Helm Charts but with other usual Kubernetes mechanisms, including YAML declarations, Operators, etc. Likewise, combining these tools with IaC (Infrastructure as Code) technologies can be very helpful to automate the deployment processes. For example, AWS CloudFormation and AWS CDK (Cloud Development Kit) are great services to provision Kong Gateway across multiple platforms including not just EKS but ECS, EC2/ASG, etc. Check the Kong and CDK tutorial and CDK Construct Library for Kong for more information.

After submitting the Helm command, the Control Plane should be available to administrators. The Control Plane can be configured with new APIs and policies through several mechanisms:

  • REST Admin APIs: all admin tasks can be executed with an extensive list of Admin APIs.
  • decK: admins can manage Kong Gateway’s configuration in a declarative way.
  • CRDs: Kubernetes-native declarative configuration.
  • Kong Manager: the official Kong Gateway Admin GUI, shown in the screenshot below.
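For example, the httpbin Service and Route created later in this post could also be expressed as a declarative decK file (a sketch; the _format_version value depends on the decK release in use):

```yaml
# kong.yaml - applied to the Control Plane with "deck sync"
_format_version: "1.1"
services:
  - name: httpservice
    url: http://httpbin.org
    routes:
      - name: httpbinroute
        paths:
          - /httpbin
```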

Data Plane Deployment

With the Control Plane available, it’s time to deploy the Kong Data Plane on the Worker Node of the EKS Anywhere Workload Cluster. Here’s the Helm command:

helm install kong kong/kong -n kong \
--set ingressController.enabled=false \
--set image.repository=kong/kong-gateway \
--set image.tag= \
--set env.database=off \
--set env.role=data_plane \
--set env.cluster_cert=/etc/secrets/kong-cluster-cert/tls.crt \
--set env.cluster_cert_key=/etc/secrets/kong-cluster-cert/tls.key \
--set env.lua_ssl_trusted_certificate=/etc/secrets/kong-cluster-cert/tls.crt \
--set env.cluster_control_plane=<Control_Plane_Cluster_LoadBalancer>:8005 \
--set env.cluster_telemetry_endpoint=<Control_Plane_ClusterTelemetry_LoadBalancer>:8006 \
--set proxy.enabled=true \
--set proxy.type=NodePort \
--set enterprise.enabled=true \
--set enterprise.portal.enabled=false \
--set enterprise.rbac.enabled=false \
--set enterprise.smtp.enabled=false \
--set manager.enabled=false \
--set portal.enabled=false \
--set portalapi.enabled=false \
--set env.status_listen= \
--set secretVolumes[0]=kong-cluster-cert \
--set enterprise.license_secret=kong-enterprise-license

Again, the most important settings are:

  • env.role=data_plane to configure this Kong Gateway instance as a Data Plane
  • env.database=off: unlike the Control Plane, the Data Plane does not require a database to store its metadata; instead, it gets all API and policy definitions through the mTLS tunnel it builds with the Control Plane.
  • env.cluster_control_plane=<Control_Plane_Cluster_LoadBalancer>:8005 referring to the exposed Control Plane address and port
  • env.cluster_telemetry_endpoint=<Control_Plane_ClusterTelemetry_LoadBalancer>:8006 referring to the Control Plane’s telemetry address and port
  • proxy.type=NodePort to define how the Data Plane is exposed to API consumers.

Since the Data Plane is exposed as a NodePort service, an external load balancer sitting in front of the EKS Anywhere cluster could be deployed. Check the documentation to see how to use kube-vip or MetalLB with an Amazon EKS Anywhere cluster.
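As a sketch of the MetalLB option, recent MetalLB releases are configured with an IPAddressPool and an L2Advertisement resource; the address range below is a hypothetical on-prem range reserved for load balancers:

```yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: kong-pool
  namespace: metallb-system
spec:
  addresses:
    - 10.0.0.100-10.0.0.110   # hypothetical range outside the DHCP scope
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: kong-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - kong-pool
```

With such a pool in place, the Data Plane proxy could be exposed as a LoadBalancer service instead of a NodePort.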

Checking the Kong Data Plane from Kong Control Plane

After deploying the Data Plane, it should already be connected to the Kong Control Plane. This can be checked by sending a request to the /clustering/status Admin API endpoint exposed by the Control Plane. For example:

$ http <Control_Plane_LoadBalancer>:8001/clustering/status
HTTP/1.1 200 OK
Access-Control-Allow-Origin: *
Connection: keep-alive
Content-Length: 178
Content-Type: application/json; charset=utf-8
Date: Fri, 10 Jun 2022 13:55:27 GMT
Deprecation: true
Server: kong/
X-Kong-Admin-Latency: 11
X-Kong-Admin-Request-ID: HBtaUpGJFMKSA5Mqaj0BUxT8yLsWa21I
vary: Origin

{
    "9e3e77e4-1787-48c5-b891-e64d05ba2eb1": {
        "config_hash": "6085b343870b77813c810834abf70216",
        "hostname": "kong-dp-kong-69d9486f9d-464xf",
        "ip": "",
        "last_seen": 1624024517
    }
}
The HTTPie result shows that the Data Plane has successfully connected to the Control Plane.

Defining a Service and a Route

With both the Kong Control Plane and Data Plane running, it’s time to create the first API. A Kong API is based on two constructs:

  • Kong Service: an entity representing an external upstream API or microservice.
  • Kong Route: exposes Kong Services to external consumers.

Create a Kong Service

The following Kong Service is based on the external and public HTTPbin service:

http <Control_Plane_LoadBalancer>:8001/services name=httpservice url='http://httpbin.org'

Create a Kong Route

The following Kong Route exposes the previously created Kong Service with the /httpbin  path:

http <Control_Plane_LoadBalancer>:8001/services/httpservice/routes name='httpbinroute' paths:='["/httpbin"]'

Consume the Kong Route

The Kong Control Plane is responsible for publishing all defined constructs, including Kong Services and Routes, to the Kong Data Planes. So both should be available for consumption:

$ http <DataPlane_PublicIP>:8000/httpbin/get
HTTP/1.1 200 OK
Access-Control-Allow-Credentials: true
Access-Control-Allow-Origin: *
Connection: keep-alive
Content-Length: 434
Content-Type: application/json
Date: Fri, 10 Jun 2022 21:13:51 GMT
Server: gunicorn/19.9.0
Via: kong/
X-Kong-Proxy-Latency: 59
X-Kong-Upstream-Latency: 182

{
    "args": {},
    "headers": {
        "Accept": "*/*",
        "Accept-Encoding": "gzip, deflate",
        "Host": "",
        "User-Agent": "HTTPie/2.4.0",
        "X-Amzn-Trace-Id": "Root=1-60a0398f-3b2c25473810111760cd655b",
        "X-Forwarded-Host": "",
        "X-Forwarded-Path": "/httpbin/get",
        "X-Forwarded-Prefix": "/httpbin"
    },
    "origin": ",",
    "url": ""
}

Kong Gateway Enterprise and Amazon EKS Anywhere make it easy to run services in hybrid deployments across multiple platforms, supporting on-prem and cloud workloads. You can learn more about the products showcased in this blog through the official documentation: Amazon Elastic Kubernetes Service and Kong Gateway Enterprise.

Feel free to experiment with API policies such as caching with Amazon ElastiCache for Redis, log processing with Amazon OpenSearch Service, OIDC-based authentication with Amazon Cognito, canary releases, GraphQL integration and more, using the extensive list of plugins provided by Kong Gateway Enterprise.
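For instance, a rate-limiting policy could be attached to the httpservice Service created earlier with a single Admin API call (a sketch; the limit of 5 requests per minute is arbitrary):

```shell
# Allow at most 5 requests per minute on the httpservice Kong Service.
# The Control Plane propagates the policy to every connected Data Plane.
http <Control_Plane_LoadBalancer>:8001/services/httpservice/plugins \
  name=rate-limiting config:='{"minute": 5}'
```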
