Achieving Maximum API Platform Security With Kong
Before exposing your company's APIs, your highest priority should be to ensure the security, governance and reliability of that architecture. To do so, you'll need an API gateway as a single secure entry point for API consumers rather than allowing direct access to the APIs. Kong Gateway can manage the full lifecycle of services and APIs, as well as secure and govern access to those APIs within an API platform. Kong Gateway acts as the entry point for traffic coming from the internet/public network (otherwise known as north-south traffic).
The following is a simplified deployment architecture diagram showing what we'll configure to secure and expose our existing API platform using Kong Gateway (edge).
Adopting the microservices design paradigm means an API must have atomic/minimal functionality to avoid monoliths. A minimal API promotes reusability, reliability and scalability. Hence, you will achieve some functionality/requirements via API orchestration (service-to-service communication), which is the traffic within the API platform (east-west traffic). As the number of services/APIs grows, so does the complexity of securing, governing and monitoring the traffic between them; the solution to these problems is a service mesh.
This article will go through simple steps to first create a sandbox/demo API platform environment in the Kubernetes cluster, secured by Kong Gateway (to govern north-south). Next, we will secure the service-to-service traffic using Kong Mesh (to govern east-west) and enable zero trust for our API platform. The modified deployment architecture diagram below includes Kuma data planes and a Kuma control plane as our target state.
Set Up Your Kubernetes Cluster
We'll use minikube, a lightweight local Kubernetes, as our API platform for this demo. In addition, we are using Kong Ingress Controller for our Kubernetes Cluster.
Deploying Kong Ingress Controller
Run the following command to deploy Kong Ingress Controller (KIC) or refer to the Kong documentation for more deployment options:
kubectl create -f https://bit.ly/k4k8s
Note: You need to run the "minikube tunnel" command and leave it running in the background or in a different terminal.
To verify the deployment, run the following command:
kubectl get pods -n kong
Expected results:
NAME READY STATUS RESTARTS AGE
ingress-kong-694fb8d8f-rgk55 2/2 Running 0 2m21s
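Optionally, since this demo assumes minikube tunnel exposes the kong-proxy LoadBalancer on localhost, you can confirm the ingress proxy responds (a 404 from Kong is expected at this point because no routes exist yet):
curl -i http://localhost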
Secure the API Platform Using Kong Gateway
Deploying Kong Gateway
We'll use a preconfigured minimal Kubernetes YAML manifest to simplify the Kong Gateway deployment. Run the following command to deploy Kong Gateway:
kubectl create -f https://git.io/JDVWR -n kong-gw
Verify the deployment is successful by running the following command:
kubectl get pods -n kong-gw
Expected results:
NAME READY STATUS RESTARTS AGE
kong-enterprise-799894dfcd-gvpdv 1/1 Running 2 (3m30s ago) 4m11s
kong-migrations--1-zm2jz 0/1 Completed 0 4m10s
postgres-0 1/1 Running 0 4m10s
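Optionally, assuming minikube tunnel is running and this manifest exposes the Kong Gateway Admin API on localhost:8001 (as the rest of this demo assumes), you can confirm the Admin API responds:
curl -i http://localhost:8001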
Route All Incoming Traffic to Kong Gateway
We need to route all incoming traffic from KIC to Kong Gateway (our single entry point) to secure the APIs behind Kong Gateway. Thus, we need to create a Kubernetes ingress resource that routes the incoming traffic to the gateway's proxy data plane by running the following command:
cat <<EOF | kubectl apply -f - -n kong-gw
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: kong
  name: gateway-ingress
  namespace: kong-gw
spec:
  rules:
  - host: kong-proxy.local
    http:
      paths:
      - backend:
          service:
            name: kong-proxy
            port:
              number: 8000
        path: /
        pathType: Prefix
EOF
Since our ingress resource routing rule is based on the hostname kong-proxy.local, we need to add the following line to our hosts file under /etc/hosts (Linux/OSX) or <Windows Home>\System32\drivers\etc\hosts (MS Windows):
127.0.0.1 kong-proxy.local
Verify your configuration by sending the following request using curl.
Note: We're using the kong-proxy.local hostname, so KIC will forward this request to Kong Gateway:
curl -v http://kong-proxy.local:80/test
Expected HTTP response, since no routes are configured yet:
< HTTP/1.1 404 Not Found
< Server: kong/2.6.0.0-enterprise-edition
{"message":"no Route matched with those values"}
Note: Server response header value also shows kong/2.6.0.0-enterprise-edition
Deploying Demo APIs
The following steps will deploy the Kuma counter demo, which consists of two services: demo-app and Redis. The demo-app service calls Redis to store and retrieve its counter.
kubectl create -f https://git.io/JX6L0
Here's our expected output:
namespace/kuma-demo created
deployment.apps/redis created
service/redis created
deployment.apps/demo-app created
service/demo-app created
Note: Make sure "minikube tunnel" is running before executing the following commands and verify that http://localhost:8001/ is accessible.
Next, we need to create a service and route in Kong Gateway for the Kuma demo API. The following command will create the service using the Kong Gateway Admin API, which uses kuma-demo-svc as upstream:
curl -X POST 'http://localhost:8001/default/services' \
--data "name=kuma-demo-svc" \
--data "url=http://demo-app.kuma-demo.svc:5000"
The following command will create a route for the previously created service in Kong Gateway:
curl -X POST 'http://localhost:8001/default/services/kuma-demo-svc/routes' \
--data "name=kuma-demo-rt" \
--data "paths=/demo"
The following is the list of available resources that we can try using the curl command:
curl -X POST http://kong-proxy.local/demo/increment
curl -X GET http://kong-proxy.local/demo/counter
curl -X DELETE http://kong-proxy.local/demo/counter
Here's a sample response:
{"counter":1,"zone":"local","err":null}
Securing the North-South Traffic Using Kong Gateway
Now that all the traffic is directed to Kong Gateway and the gateway is routing the traffic to our backend APIs, we can secure our Kuma demo API using Kong Gateway policies. In the following section, we'll enable the key authentication (key-auth) plugin on the kuma-demo-rt route. This ensures that incoming traffic carries a valid, registered apiKey credential; otherwise, the request is rejected.
Let's enable key auth on Kong Gateway to secure the APIs and register a consumer:
curl -X POST 'http://localhost:8001/default/routes/kuma-demo-rt/plugins' \
--data "name=key-auth" \
--data "config.key_names=apikey"
Let's register a consumer in Kong Gateway:
curl 'http://localhost:8001/default/consumers' \
--data "username=test-consumer"
Let's register a key for the consumer:
curl 'http://localhost:8001/default/consumers/test-consumer/key-auth' \
--data "key=123test"
Then we'll verify that API calls without the apiKey header fail with HTTP 401 Unauthorized, while any request with the 'apiKey:123test' header receives HTTP 200. For example, you can try the following request:
curl -v -X POST kong-proxy.local/demo/increment -H 'apiKey:123test'
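For comparison, the same request without the apiKey header should be rejected by the key-auth plugin with a response similar to the following:
curl -i -X POST kong-proxy.local/demo/increment
< HTTP/1.1 401 Unauthorized
{"message":"No API key found in request"}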
Secure Service-to-Service Traffic Using Kong Mesh
Now we have our API platform with sample APIs deployed, and it's secured by Kong Gateway. However, at this point, the demo-app can still call Redis directly (bypassing Kong Gateway); this is east-west traffic. In the next section, by deploying Kong Mesh and enabling zero trust security, we'll take control of all internal service-to-service communication using Kong Mesh traffic permissions.
Deploying Kong Mesh With Zero Trust Enabled
Run the following command to download and extract Kong Mesh into a local directory:
curl -L https://docs.konghq.com/mesh/installer.sh | sh -
Navigate to the <local folder>/kong-mesh-x.x.x/bin folder, then run the following command to deploy Kong Mesh to your local Kubernetes cluster:
./kumactl install control-plane | kubectl apply -f -
Verify that Kong Mesh deployed successfully:
kubectl get po -n kong-mesh-system
kong-mesh-control-plane-7bc876c679-m5ltf 1/1 Running 0 10m
From now on, all new deployments will get a Kong Mesh sidecar container as long as their namespace has the "kuma.io/sidecar-injection: enabled" annotation. You may notice that our previously created namespaces already have that annotation; however, since Kong Mesh wasn't deployed before them, their existing pods don't have the sidecar container.
Here's our scenario: we want to secure the existing API platform. All we have to do is make sure that all namespaces have the "kuma.io/sidecar-injection: enabled" annotation and then restart the pods within those namespaces.
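If any namespace is missing the annotation, you can add it with kubectl annotate. The following is a sketch assuming the namespace names used in this demo (kong, kong-gw and kuma-demo); adjust them to your environment:
kubectl annotate namespace kong kuma.io/sidecar-injection=enabled --overwrite
kubectl annotate namespace kong-gw kuma.io/sidecar-injection=enabled --overwrite
kubectl annotate namespace kuma-demo kuma.io/sidecar-injection=enabled --overwrite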
Thus we can delete pods for Kong Gateway, Kong Ingress and demo APIs and allow the deployment's replica set to redeploy them by using the following commands:
kubectl get pods -n kong --no-headers=true | awk '/ingress-kong/{print $1}' | xargs kubectl delete -n kong pod
kubectl get pods -n kong-gw --no-headers=true | awk '/kong-enterprise/{print $1}' | xargs kubectl delete -n kong-gw pod
kubectl get pods -n kuma-demo --no-headers=true | awk '{print $1}' | xargs kubectl delete -n kuma-demo pod
Now we can confirm that all containers have Kong Mesh sidecar containers by running the following command:
kubectl get -n kuma-demo pod -o jsonpath="{.items[*].spec.containers[*].image}"
You should see containers using the "docker.io/kong/kuma-dp:x.x.x" image.
On a side note for Kong Gateway, there is a pre-existing annotation (kuma.io/gateway: enabled) that registers it as a gateway in Kong Mesh so it operates in gateway mode. For more information, please refer to the documentation.
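The following is an illustrative snippet only (you don't need to apply it for this demo, since the preconfigured manifest already includes the annotation); it shows where the annotation would sit in the gateway Deployment's pod template:
spec:
  template:
    metadata:
      annotations:
        kuma.io/gateway: enabled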
To enable zero trust in Kong Mesh, first, we need to enable mTLS for our mesh instance and remove the default traffic permission that allows all communications.
Run the following command to enable mTLS for your mesh:
cat <<EOF | kubectl apply -f -
apiVersion: kuma.io/v1alpha1
kind: Mesh
metadata:
  name: default
spec:
  mtls:
    enabledBackend: ca-1
    backends:
    - name: ca-1
      type: builtin
EOF
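If you want to confirm that the mesh object was updated, you can inspect the default mesh; the output should include the mtls section we just applied:
kubectl get mesh default -o yaml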
Note: Since the default traffic permission is still deployed, you can still access the services and Kong Gateway without any issues. The default traffic permission ensures that enabling mTLS won't immediately break connectivity to your APIs.
Run the following command to delete the default traffic permission:
kubectl delete trafficpermissions.kuma.io allow-all-default
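To confirm the deletion, list the traffic permissions in the cluster; allow-all-default should no longer appear:
kubectl get trafficpermissions.kuma.io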
Verify that when you hit the demo service, you get "HTTP 503 Service Unavailable":
curl -v -X POST kong-proxy.local/demo/increment -H 'apiKey:123test'
Expected response:
< HTTP/1.1 503 Service Unavailable
< server: envoy
{upstream connect error or disconnect/reset before headers. reset reason: connection termination}
This response means our request is reaching Kong Gateway. However, since no traffic permissions are defined, Kong Gateway cannot send the request to the upstream service, which is the demo-app.
Now add the following traffic permission to allow Kong Gateway to forward requests to any service within the mesh. This is safe since our gateway is protecting the north-south traffic:
cat <<EOF | kubectl apply -f -
apiVersion: kuma.io/v1alpha1
kind: TrafficPermission
mesh: default
metadata:
  name: gateway-to-any
spec:
  destinations:
  - match:
      kuma.io/service: '*'
  sources:
  - match:
      kuma.io/service: kong-admin_kong-gw_svc_8001
EOF
If we send a request again, we'll receive the following response:
curl -v -X POST kong-proxy.local/demo/increment -H 'apiKey:123test'
Sample response:
< HTTP/1.1 200 OK
{"err":true}
This response indicates that we could reach the demo-app service, but demo-app could not reach the Redis service. This makes sense, as there is no traffic permission that allows demo-app to communicate with Redis. Thus, we can define the following traffic permission, which allows traffic from demo-app to the Redis service:
cat <<EOF | kubectl apply -f -
apiVersion: kuma.io/v1alpha1
kind: TrafficPermission
mesh: default
metadata:
  name: demo-frontend-to-backend
spec:
  destinations:
  - match:
      kuma.io/service: redis_kuma-demo_svc_6379
  sources:
  - match:
      kuma.io/service: demo-app_kuma-demo_svc_5000
EOF
Now we can hit the service again and receive the expected response.
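For example (the counter value will depend on how many increments you've already sent):
curl -v -X POST kong-proxy.local/demo/increment -H 'apiKey:123test'
Sample response:
< HTTP/1.1 200 OK
{"counter":2,"zone":"local","err":null}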
Conclusion
This article provided a series of steps to demonstrate how to secure an API platform using Kong products for both north-south and east-west traffic. First, we created a simple API platform in Kubernetes and secured it using Kong Gateway as a single point of entry to our platform for north-south traffic. Next, we used Kong Mesh to secure and govern all service-to-service (east-west) traffic within the API platform. We enabled zero trust to ensure that no unauthorized traffic could reach any of the services within our platform.
As a next step, to increase observability within your service mesh, you can follow the steps provided in the "Automate Service Mesh Observability With Kuma" article. In addition, you can try to explore and apply other policies to your mesh. For more information, you may refer to Kuma Policies and Kong Mesh for enterprise features and policies.