Kong Gateway Enterprise and Amazon EKS Anywhere Bare Metal
Power up application modernization and migration using Kong Gateway Enterprise and Amazon EKS Anywhere Bare Metal
One of the most critical requirements for an application modernization project is supporting workloads that run on multiple platforms. In fact, such projects naturally include migrating workloads with a hybrid model as part of their transformation process.
Another technical decision that commonly comes up is the adoption of Kubernetes as the main platform for the existing services and for the microservices produced by the modernization project.
Kubernetes has become the de facto standard platform for developing modern applications. Among the vast collection of technological resources offered by Kubernetes, perhaps the most important is a standardized environment for the complete life cycle of an application. In other words: regardless of where the application will be installed, whether on premises or in the cloud, there will be a Kubernetes distribution available.
This is exactly the purpose of the Amazon EKS Anywhere (EKS-A) service: to provide the well-known Amazon EKS technology across multiple environments.
Kong Gateway Enterprise Principles
Kong Gateway Enterprise natively provides several capabilities that align with the flexibility offered by EKS Anywhere. For example:
- Architectural freedom: Kong connects services across any platform including Linux-based OSes, Docker and any Kubernetes flavor like Amazon EKS and Amazon EKS Anywhere.
- Kubernetes native: Kong fully supports Kubernetes including all capabilities provided by the platform such as CRDs, HPA (“Horizontal Pod Autoscaler”), Cert-Manager, Knative, Helm Charts, Operators, etc.
- Hybrid and multi-cloud: providing a multi-platform engine is not enough; the gateway has to support several platforms at the same time, across any infrastructure and any deployment pattern, including hybrid and multi-cloud configurations.
In fact, a Kong Gateway hybrid deployment completely separates the control plane (CP) from the data plane (DP). In this model, the control plane, responsible for administration tasks, and the data plane, used exclusively by API consumers, run in completely separate environments.
In summary, the synergistic combination of both technologies results in a powerful platform for running critical applications.
Amazon EKS Anywhere Bare Metal and Kong Gateway Enterprise
Amazon EKS Anywhere is based on the concept of providers. Each provider is capable of deploying EKS clusters in a specific environment. The first Amazon EKS Anywhere version, released in September 2021, supported Docker for local development clusters and VMware vSphere for production clusters.
With this new version, Amazon EKS Anywhere provides another deployment target to support bare metal.
Ultimately, a Kong Gateway Enterprise hybrid deployment on the Amazon EKS and Amazon EKS Anywhere platforms is represented in the diagram below:
- Kong Control Plane runs on an EKS cluster in AWS Cloud. It is a stable environment used only by admins to create APIs, policies and API documentation.
- Kong Data Plane #1 runs on an on-prem EKS Anywhere cluster deployed on bare metal. It exposes the services and microservices deployed in all environments we may have, including application servers, legacy systems and other EKS Anywhere clusters.
- Kong Data Plane #2 runs on another EKS Anywhere cluster, this time based on VMware vSphere. It plays a similar role to Kong DP #1, exposing the local services and applications.
- Kong Data Plane #3 runs on AWS Cloud along with the Kong Control Plane, but in a different AWS Region. It supports the microservices and services that have been migrated from the on-prem environment, as well as new microservices developed on cloud services like ECS, EC2/ASG, etc.
- All Data Planes leverage AWS services such as Amazon Cognito for OIDC-based authentication, Amazon OpenSearch Service for log processing, etc. to implement policies that make sure the microservices and services are being safely consumed.
- The communication between the control plane and the data planes is based on mTLS tunnels. The control plane publishes APIs and policies to all existing data planes over a specific tunnel. In turn, over another tunnel, each data plane reports metrics about API request processing back to the control plane.
EKS Anywhere is built on three main technologies:
- EKS Distro (Amazon's open source distribution for Kubernetes)
- Cluster API, a Kubernetes project focused on declarative, Kubernetes-style APIs for cluster creation, configuration and management.
- Tinkerbell, a project to provision and manage bare metal servers. The specific CAPT (Cluster-API-Provider-Tinkerbell) provider is embedded in the EKS Anywhere infrastructure.
The following diagram describes an Amazon EKS Anywhere Bare Metal and Kong Gateway Enterprise Architecture Topology:
To describe the diagram in more detail, we are going to walk through the two main processes required to get a hybrid Kong Gateway Enterprise deployment running with the Control Plane on EKS and the Data Plane on EKS Anywhere:
- EKS Anywhere Cluster Deployment
- Kong Gateway Enterprise Hybrid Deployment
EKS Anywhere Cluster Deployment
An EKS Anywhere cluster deployment process starts with an Admin Server node. This server has two main components:
- Tinkerbell Stack, responsible for provisioning the bare metal servers, including remote iPXE boots and operating system provisioning.
- EKS Anywhere Bootstrap Cluster. This is a temporary cluster, based on the Cluster API Bootstrap Cluster definition, responsible for provisioning the EKS Anywhere Workload Cluster (also based on the Cluster API definitions) running on both bare metal servers. The Workload Cluster comprises the typical EKS Control Plane and Worker Node.
The EKS Anywhere CLI used on the Admin Server node abstracts all the underlying components. Its fundamental command takes two configuration files as input to create both the Bootstrap and Workload Clusters. Here's an example:
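A minimal sketch of that command, run from the Admin Server node; the file names simply match the two configuration files described below:

```
# Create the EKS Anywhere bare metal cluster from the Admin Server node
eksctl anywhere create cluster \
  -f ClusterSpec.yml \
  --hardware-csv Hardware.csv
```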
The two configuration files are:
- ClusterSpec.yml: describes how the EKS Anywhere Workload Cluster should be created by the Bootstrap Cluster.
- Hardware.csv: lists the bare metal servers, including their MAC addresses, so the Tinkerbell Stack can use iPXE remote boot to provision them. An illustrative layout is sketched after this list.
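For illustration only, a Hardware.csv inventory might look like the sketch below. The exact columns depend on the EKS Anywhere version and the BMC setup, so treat the header and all values as placeholders:

```
hostname,bmc_ip,bmc_username,bmc_password,mac,ip_address,netmask,gateway,nameservers,labels,disk
eksa-cp-n1,10.10.0.10,admin,<bmc-password>,3c:ec:ef:aa:bb:01,10.10.1.10,255.255.255.0,10.10.1.1,8.8.8.8,type=cp,/dev/sda
eksa-dp-n1,10.10.0.11,admin,<bmc-password>,3c:ec:ef:aa:bb:02,10.10.1.11,255.255.255.0,10.10.1.1,8.8.8.8,type=worker,/dev/sda
```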
During the cluster creation process, the EKS Anywhere CLI creates a bootstrap Kind cluster on the Admin Server node and installs Cluster-API (CAPI) and Cluster-API-Provider-Tinkerbell (CAPT) components.
CAPI creates cluster node resources and CAPT maps hardware to nodes and powers up the corresponding bare metal servers. The bare metal servers iPXE boot and run the OS provisioning process by communicating with the Tinkerbell infrastructure. Both servers are provisioned with Ubuntu OS.
The cluster management resources are transferred from the bootstrap cluster to the target EKS Anywhere workload cluster. Finally, the local bootstrap Kind cluster is deleted from the Admin Server node.
Kong Gateway Enterprise Hybrid Deployment
After executing the eksctl-anywhere create cluster command, the two bare metal servers are ready to receive the Kong Gateway deployment, more precisely the Kong Data Plane.
Control Plane Deployment
Before deploying the Kong Data Plane on the EKS Anywhere Worker Node, we need to take care of the Kong Control Plane. As described in the diagram, it will be running on a regular Amazon EKS Cluster in AWS Cloud.
The eksctl CLI can be used to get a fresh Amazon EKS cluster up and running. For example:
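A basic command like the sketch below is enough; the cluster name, region, node type and node count are illustrative values, so adjust them to your environment:

```
# Illustrative values; adjust name, region, node type and count as needed
eksctl create cluster \
  --name kong-control-plane \
  --region us-east-1 \
  --node-type t3.large \
  --nodes 2
```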
And then, using the Kong Gateway Helm Charts, the Control Plane can be deployed. Here’s an example of a Helm command:
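A minimal sketch of such a command, assuming the Kong Helm repository has already been added, a kong namespace exists, and a kong-cluster-cert TLS secret holding the certificate shared by the Control Plane and Data Planes was created beforehand; the Kong Gateway Enterprise image, database and license settings are omitted for brevity:

```
helm install kong-cp kong/kong -n kong \
  --set ingressController.enabled=false \
  --set env.role=control_plane \
  --set env.cluster_cert=/etc/secrets/kong-cluster-cert/tls.crt \
  --set env.cluster_cert_key=/etc/secrets/kong-cluster-cert/tls.key \
  --set cluster.enabled=true \
  --set cluster.type=LoadBalancer \
  --set cluster.tls.enabled=true \
  --set cluster.tls.servicePort=8005 \
  --set clustertelemetry.enabled=true \
  --set clustertelemetry.type=LoadBalancer \
  --set clustertelemetry.tls.enabled=true \
  --set clustertelemetry.tls.servicePort=8006 \
  --set secretVolumes[0]=kong-cluster-cert
```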
The most important settings for the Kong Control Plane are:
- env.role=control_plane to configure this Kong Gateway instance as the Control Plane
- cluster.type=LoadBalancer to expose the Control Plane with a load balancer, since the Data Planes connect to it to establish the mTLS-based encrypted tunnel.
- cluster.tls.servicePort=8005 as the API and policy publication mTLS tunnel port
- clustertelemetry.type=LoadBalancer to expose the telemetry endpoint to the Data Plane
- clustertelemetry.tls.servicePort=8006 as the tunnel port that the Data Plane uses to report API consumption metrics back to the Control Plane.
Check the Kong Workshop available in AWS Portal for a more detailed description of the Kong Gateway Enterprise Hybrid Deployment.
In fact, Kong Gateway can be deployed not just with Helm Charts but with other usual Kubernetes mechanisms, including YAML declarations, Operators, etc. Likewise, combining these tools with IaC (Infrastructure as Code) technologies can be very helpful to automate the deployment processes. For example, AWS CloudFormation and AWS CDK (Cloud Development Kit) are great services to provision Kong Gateway across multiple platforms including not just EKS but ECS, EC2/ASG, etc. Check the Kong and CDK tutorial and CDK Construct Library for Kong for more information.
After submitting the Helm command, the Control Plane should be available to the administrators. The Control Plane can be configured with new APIs and Policies through several mechanisms:
- REST Admin APIs: all admin tasks can be executed with an extensive list of Admin APIs.
- decK: admins can manage Kong Gateway’s configuration in a declarative way.
- CRDs: specific Kubernetes based declarations.
- Kong Manager: the official Kong Gateway Admin GUI. A Kong Manager screenshot is shown below:
Data Plane Deployment
With the Control Plane available, it’s time to deploy the Kong Data Plane on the Worker Node of the EKS Anywhere Workload Cluster. Here’s the Helm command:
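A minimal sketch, again assuming the same kong-cluster-cert secret has been created in the EKS Anywhere cluster; the Control Plane addresses are the placeholders explained in the list that follows:

```
helm install kong-dp kong/kong -n kong \
  --set ingressController.enabled=false \
  --set env.role=data_plane \
  --set env.database=off \
  --set env.cluster_cert=/etc/secrets/kong-cluster-cert/tls.crt \
  --set env.cluster_cert_key=/etc/secrets/kong-cluster-cert/tls.key \
  --set env.cluster_control_plane=<Control_Plane_Cluster_LoadBalancer>:8005 \
  --set env.cluster_telemetry_endpoint=<Control_Plane_ClusterTelemetry_LoadBalancer>:8006 \
  --set proxy.type=NodePort \
  --set admin.enabled=false \
  --set secretVolumes[0]=kong-cluster-cert
```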
Again, the most important settings are:
- env.role=data_plane to configure this Kong Gateway instance as a Data Plane
- env.database=off : unlike the Control Plane, the Data Plane does not require a database to store its metadata; instead, it gets all API and policy definitions over the mTLS tunnel it establishes with the Control Plane.
- env.cluster_control_plane=<Control_Plane_Cluster_LoadBalancer>:8005 referring to the exposed Control Plane IP and port
- env.cluster_telemetry_endpoint=<Control_Plane_ClusterTelemetry_LoadBalancer>:8006 referring to the second Control Plane IP and port
- proxy.type=NodePort to define how to expose the Data Plane to the API consumers.
Since the Data Plane is exposed as a NodePort service, an external Load Balancer, sitting in front of the EKS Anywhere Cluster, could be deployed. Check the documentation to see how to use Kube-VIP or MetalLB in the Amazon EKS Anywhere Cluster.
Checking the Kong Data Plane from Kong Control Plane
After deploying the Data Plane, it should already be connected to the Kong Control Plane. This can be checked by sending a REST Admin API request to the /clustering/status endpoint exposed by the Control Plane. For example:
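Here, HTTPie is used against the Control Plane Admin API; the address below is a placeholder for your Admin API load balancer or port-forward (port 8001 by default):

```
http <Control_Plane_Admin_API_Address>:8001/clustering/status
```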
The HTTPie result shows that the Data Plane has successfully connected to the Control Plane.
Defining a Service and a Route
With both Kong Control Plane and Data Plane running it’s time to create the first API. A Kong API is based on two constructs:
- Kong Service: an entity representing an external upstream API or microservice.
- Kong Route: exposes Kong Services to external consumers.
Create a Kong Service
The following Kong Service is based on the external and public HTTPbin service:
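Here is one way to create it with the Admin API and HTTPie; the Admin API address and the Service name are illustrative:

```
# Create a Kong Service pointing to the public httpbin.org upstream
http <Control_Plane_Admin_API_Address>:8001/services \
  name=httpbin-service \
  url=http://httpbin.org
```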
Create a Kong Route
The following Kong Route exposes the previously created Kong Service with the /httpbin path:
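Again using the Admin API and HTTPie; the address and Route name are illustrative:

```
# Create a Kong Route exposing the Service on the /httpbin path
http <Control_Plane_Admin_API_Address>:8001/services/httpbin-service/routes \
  name=httpbin-route \
  paths:='["/httpbin"]'
```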
Consume the Kong Route
The Kong Control Plane is responsible for publishing any construct defined, including Kong Services and Routes, to the Kong Data Plane. So, both should be available for consumption:
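For example, sending a request to the Data Plane proxy; the address and NodePort are placeholders for your environment:

```
# The Data Plane proxies the request to httpbin.org
http <Data_Plane_Address>:<NodePort>/httpbin/get
```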
Conclusion
Kong Gateway Enterprise and Amazon EKS Anywhere make it easy to run services in hybrid deployments across multiple platforms, supporting on-prem and cloud workloads. You can learn more about the products showcased in this blog through the official documentation: Amazon Elastic Kubernetes Service and Kong Gateway Enterprise.
Feel free to apply and experiment with API policies such as caching with Amazon ElastiCache for Redis, log processing with Amazon OpenSearch Service, OIDC-based authentication with Amazon Cognito, canary releases, GraphQL integration and more, using the extensive list of plugins provided by Kong Gateway Enterprise.