Kong Konnect Data Plane Pod Autoscaling with HPA on Amazon EKS 1.29
In the previous post, we discussed how to take advantage of VPA to implement automatic vertical scaling for our Konnect Data Planes. In this post, we'll focus on HPA for horizontal Pod autoscaling in Kubernetes.
HPA
The VPA documentation currently recommends not using VPA alongside HPA on the same CPU or memory metrics. If you need to combine them, HPA should be configured with custom or external metrics instead.
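For reference, here is a minimal sketch of what an external-metric-based HPA could look like. It assumes an external metrics adapter (such as the Prometheus Adapter) is installed and exposes a hypothetical kong_http_requests_per_second metric; we won't use it in this post:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: kong-hpa-external
  namespace: kong
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: kong-kong
  minReplicas: 1
  maxReplicas: 10
  metrics:
  # External metric served by a metrics adapter; the metric name below is hypothetical
  - type: External
    external:
      metric:
        name: kong_http_requests_per_second
      target:
        type: AverageValue
        averageValue: "1000"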
Since we'll be running fundamental HPA use cases, let's delete the VPA policy with:
kubectl delete vpa kong-vpa -n kong
HPA Fundamentals
Horizontal Pod Autoscaler (HPA) is a standard API resource in Kubernetes that automatically updates a workload resource, for example a Deployment, to match demand.
The following diagram was taken from the official Kubernetes HPA documentation. Please check it out to learn more.
![](https://prd-mktg-konghq-com.imgix.net/images/2024/02/65c5000c-image1-6.png?auto=format&fit=max&w=2560)
HPA requires metrics provided by the Kubernetes Metrics Server, which we have already installed. The Metrics Server collects resource metrics from the kubelets in your cluster and exposes them through the Kubernetes API.
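If you want to double-check that the Metrics Server is up and registered with the API server (assuming it was installed in the kube-system namespace, as in the previous post), you can run:

kubectl get deployment metrics-server -n kube-system
kubectl get apiservice v1beta1.metrics.k8s.io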
To have better control of the HPA environment, let's update our Konnect Data Plane Deployment to request more CPU and memory resources:
kubectl set resources deployment kong-kong -n kong --requests "cpu=300m,memory=300Mi" --limits "cpu=1500m,memory=3Gi"
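You can confirm the new requests and limits were applied to the Deployment with, for example:

kubectl get deployment kong-kong -n kong -o jsonpath='{.spec.template.spec.containers[*].resources}'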
Also, to see HPA in action, we are going to replace the existing NodeGroup with a smaller one, this time based on the t3.large
Instance Type with 2 vCPUs and 8GiB of memory:
eksctl delete nodegroup --cluster kong35-eks129-autoscaling --region us-west-1 --name nodegroup-kong
eksctl create nodegroup --cluster kong35-eks129-autoscaling --region us-west-1 \
--name nodegroup-kong \
--node-labels="nodegroupname=kong" \
--node-type t3.large \
--nodes 1 \
--nodes-min 1 --nodes-max 10 \
--max-pods-per-node 50
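Once the new NodeGroup is ready, you can optionally confirm it with eksctl and check which Nodes it created:

eksctl get nodegroup --cluster kong35-eks129-autoscaling --region us-west-1
kubectl get nodes --selector='eks.amazonaws.com/nodegroup=nodegroup-kong'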
HPA Policy
The kubectl autoscale command creates an HPA policy that tells HPA how to instantiate new Pod replicas. The command below tells HPA to create new replicas of the Data Plane when CPU usage reaches the 75% threshold, up to a maximum of 10 replicas.
kubectl autoscale deployment kong-kong -n kong --cpu-percent=75 --min=1 --max=10
You can use a declarative HorizontalPodAutoscaler resource instead. Bear in mind this is a basic use case; there are many other scenarios addressed by HPA. Please check the HPA documentation to learn more about it.
cat <<EOF | kubectl apply -f -
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: kong-hpa
  namespace: kong
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: kong-kong
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 75
EOF
Check the HPA with the command below. Since we haven't sent any traffic to the Data Plane yet, the HPA reports the current usage as unknown.
% kubectl get hpa -n kong
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
kong-hpa Deployment/kong-kong <unknown>/75% 1 10 0 5s
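If you want more detail than the get output, kubectl describe shows the HPA conditions and, later on, the scaling events it triggers:

kubectl describe hpa kong-hpa -n kong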
Consume the Data Plane
Now, you are going to subject the Data Plane to a higher throughput in order to see HPA in action, spinning up new replicas of the Pod. This is also a long 20-minute run, so we can see how HPA performs in such a scenario. First of all, make sure you delete any Fortio Pods you might have running.
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: fortio
  labels:
    app: fortio
spec:
  containers:
  - name: fortio
    image: fortio/fortio
    args: ["load", "-c", "800", "-qps", "3000", "-t", "20m", "-allow-initial-errors", "http://kong-kong-proxy.kong.svc.cluster.local:80/route1/get"]
  nodeSelector:
    nodegroupname: fortio
EOF
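While Fortio is running, you can follow the autoscaling activity from another terminal, for example by watching the HPA and checking the Pods' resource usage:

kubectl get hpa kong-hpa -n kong --watch
kubectl top pod -n kong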
After some minutes, you should see a new status:
% kubectl get hpa -n kong
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
kong-hpa Deployment/kong-kong 96%/75% 1 10 7 14m
If you check the running Pods, you should see new replicas were started. However, there are two of them in the Pending status.
% kubectl get pod -n kong
NAME READY STATUS RESTARTS AGE
kong-kong-789ffd6f86-bmgh8 0/1 Pending 0 8m23s
kong-kong-789ffd6f86-cjrrp 1/1 Running 0 9m23s
kong-kong-789ffd6f86-fcnts 1/1 Running 0 8m23s
kong-kong-789ffd6f86-k7cnm 1/1 Running 0 9m23s
kong-kong-789ffd6f86-sf65t 0/1 Pending 0 22s
kong-kong-789ffd6f86-tv96x 1/1 Running 0 15m
kong-kong-789ffd6f86-twzd8 1/1 Running 0 9m23s
Let's check one of them a bit more closely. The condition message says there is no more CPU available to be allocated, hence the Pod has not been scheduled.
% kubectl get pod kong-kong-789ffd6f86-bmgh8 -n kong -o json | jq ".status"
{
  "conditions": [
    {
      "lastProbeTime": null,
      "lastTransitionTime": "2024-01-26T21:01:54Z",
      "message": "0/3 nodes are available: 1 Insufficient cpu, 2 node(s) didn't match Pod's node affinity/selector. preemption: 0/3 nodes are available: 1 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.",
      "reason": "Unschedulable",
      "status": "False",
      "type": "PodScheduled"
    }
  ],
  "phase": "Pending",
  "qosClass": "Burstable"
}
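You can also surface the same scheduling failures through the namespace events, for example:

kubectl get events -n kong --field-selector reason=FailedScheduling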
For more evidence, you can check the Node's resource consumption itself and see that it has run out of resources:
% kubectl top node --selector='eks.amazonaws.com/nodegroup=nodegroup-kong'
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
ip-192-168-11-188.us-west-1.compute.internal 1999m 103% 7080Mi 99%
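You can also describe the Node to compare its allocatable capacity with the resources already requested by the scheduled Pods (the exact section layout may vary slightly between Kubernetes versions):

kubectl describe node ip-192-168-11-188.us-west-1.compute.internal | grep -A 8 "Allocated resources"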
The straightforward solution would be to add new Nodes to the Cluster. That's the main reason why we should use a cluster autoscaling mechanism alongside HPA.
Amazon EKS supports two autoscaling products:
- Standard Kubernetes Cluster Autoscaler
- Karpenter
Check out the third part of this series to see Kubernetes Cluster Autoscaler in action.