While monitoring is an important part of any robust application deployment, it can also seem overwhelming to get a full application performance monitoring (APM) stack deployed. In this post, we’ll see how operating a Kubernetes environment using the open-source Kong Ingress Controller can simplify this seemingly daunting task! You’ll learn how to use Prometheus and Grafana on Kubernetes Ingress to simplify APM setup.
The APM stack we’re going to deploy will be based on Prometheus and Grafana, and it’ll make use of Kong’s Grafana dashboard. Together, these tools will give us access to many important system details about Kong and the services connected to it, right out of the box. We’ll simulate traffic going through the deployed stack and then observe the impact of that traffic using our monitoring tools.
It’s important to note that this implementation will be more of a demonstration than an instruction manual on deploying to a production environment. We’re going to use kind to create our Kubernetes cluster, making it easy to follow along on your local machine. From here, you’ll have a foundation that will translate easily to your production-ready business applications.
Since we’re using kind, there are only a few things we need to get started. Follow kind’s installation instructions for your platform. We’ll also use the kubectl command-line tool and Helm, so make sure you have up-to-date versions of all three tools to follow along.
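If you don’t have them yet, each project documents several install paths; on macOS, for instance, Homebrew covers all three (just one option among many, not a requirement):

% brew install kind kubectl helm

For reference, these are the versions that were used while putting this post together: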
% kind version
kind v0.11.1 go1.16.5 darwin/amd64
% kubectl version
Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.3", GitCommit:"ca643a4d1f7bfe34773c74f79527be4afd95bf39", GitTreeState:"clean", BuildDate:"2021-07-15T21:04:39Z", GoVersion:"go1.16.6", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.1", GitCommit:"5e58841cce77d4bc13713ad2b91fa0d961e69192", GitTreeState:"clean", BuildDate:"2021-05-21T23:01:33Z", GoVersion:"go1.16.4", Compiler:"gc", Platform:"linux/amd64"}
% helm version
version.BuildInfo{Version:"v3.6.3", GitCommit:"d506314abfb5d21419df8c7e7e68012379db2354", GitTreeState:"dirty", GoVersion:"go1.16.6"}
First, we’ll create a named Kubernetes cluster:
% kind create cluster --name kong
Creating cluster "kong" ...
 ✓ Ensuring node image (kindest/node:v1.21.1) 🖼
 ✓ Preparing nodes 📦
 ✓ Writing configuration 📜
 ✓ Starting control-plane 🕹️
 ✓ Installing CNI 🔌
 ✓ Installing StorageClass 💾
Set kubectl context to "kind-kong"
You can now use your cluster with:

kubectl cluster-info --context kind-kong

Have a question, bug, or feature request? Let us know! https://kind.sigs.k8s.io/#community 🙂
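Before moving on, it’s worth a quick sanity check that kubectl is pointed at the new cluster (as the output above notes, kind switches the current context to kind-kong for us):

% kubectl get nodes

After a short wait, you should see a single Ready control-plane node named kong-control-plane.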
Now, we’re ready to get our monitoring stack deployed. It may seem a little strange to start with monitoring rather than with our Kubernetes services, but this way we’ll know immediately whether Kong is connected properly to Prometheus and Grafana once it comes up. We’ll start by adding the Helm repos for Prometheus and Grafana:
% helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
"prometheus-community" has been added to your repositories
% helm repo add grafana https://grafana.github.io/helm-charts
"grafana" has been added to your repositories
If you’ve already installed these repositories, just make sure to run helm repo update to ensure you have up-to-date Helm charts.
We’ll start with the Prometheus configuration. The only setting we need to change is the scrape interval: we want Prometheus to collect metrics every 10 seconds. We can do that with the following YAML:
prometheus.yaml
server:
  global:
    scrape_interval: 10s
Put that file in your working directory, and let’s get helming!
% helm install prometheus prometheus-community/prometheus --namespace monitoring --create-namespace --values prometheus.yaml --version 14.6.0
That should display a message saying, “For more information on running Prometheus, visit: https://prometheus.io/.” Notice that we created a new namespace for cluster monitoring. This will allow us to keep things clean in our K8s cluster.
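If you’d like to see what that chart actually brings along, peek at the new namespace; the exact pod names vary by chart version, but the Prometheus server and its supporting components should all come up within a minute or two:

% kubectl get pods --namespace monitoring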
While our Prometheus installation is configuring, we’ll set up Grafana as well. Let’s start with this YAML:
grafana.yaml
persistence:
  enabled: true  # enable persistence using Persistent Volumes
datasources:
  datasources.yaml:
    apiVersion: 1
    datasources:
      # configure Grafana to read metrics from Prometheus
      - name: Prometheus
        type: prometheus
        url: http://prometheus-server  # Since Prometheus is deployed in
        access: proxy                  # the same namespace, this resolves
                                       # to the Prometheus server we just installed
        isDefault: true                # The default data source is Prometheus
dashboardProviders:
  dashboardproviders.yaml:
    apiVersion: 1
    providers:
      - name: 'default'  # Configure a dashboard provider file to
        orgId: 1         # put the Kong dashboard into.
        folder: ''
        type: file
        disableDeletion: false
        editable: true
        options:
          path: /var/lib/grafana/dashboards/default
dashboards:
  default:
    kong-dash:
      gnetId: 7424  # Install the following Grafana dashboard in the
      revision: 7   # instance: https://grafana.com/dashboards/7424
      datasource: Prometheus
A lot is going on in this configuration file, but the comments should explain most of it. Essentially, when Grafana starts up, we want the Prometheus data source and the Kong dashboard already configured, rather than having to set them up manually through the UI. This file gets us that setup.
The Helm command is very similar to the one for Prometheus:
% helm install grafana grafana/grafana --namespace monitoring --values grafana.yaml --version 6.16.2
After running this command, you’ll see instructions for getting the admin password for your installation. We’ll come back to that soon.
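If you’d rather watch the rollout than wait, kubectl can follow it for you (the Deployment is named grafana because that’s the Helm release name we chose; adjust if you used a different one):

% kubectl rollout status deployment/grafana --namespace monitoring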
Now that our monitoring stack is getting going, we can configure Kong to do the heavy lifting in our application. First, we’ll make sure to add the Kong Helm chart repo:
% helm repo add kong https://charts.konghq.com "kong" has been added to your repositories
You’ll also need to create a YAML for the annotations we want to add to our Kong pod:
kong.yaml
podAnnotations:
  prometheus.io/scrape: "true"  # Ask Prometheus to scrape the
  prometheus.io/port: "8100"    # Kong pods for metrics
With the YAML in place, let’s get Kong installed via Helm:
% helm install mykong kong/kong --namespace kong --create-namespace --values kong.yaml --set ingressController.installCRDs=false --version 2.3.0
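As with the monitoring stack, a quick look at the new namespace tells you when Kong is up; the single Kong pod should eventually report 2/2 containers running, one for the proxy and one for the ingress controller:

% kubectl get pods --namespace kong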
Once you’ve gotten that kicked off, create the following YAML file to connect Kong and Prometheus:
prometheus-kong-plugin.yaml
apiVersion: configuration.konghq.com/v1
kind: KongClusterPlugin
metadata:
  name: prometheus
  annotations:
    kubernetes.io/ingress.class: kong
  labels:
    global: "true"
plugin: prometheus
Apply that configuration with kubectl:
% kubectl apply -f prometheus-kong-plugin.yaml
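Because KongClusterPlugin is just a Kubernetes custom resource, you can confirm it landed the same way you would check any other object:

% kubectl get kongclusterplugins
% kubectl describe kongclusterplugin prometheus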
With our stack fully built out in our kind cluster, we need to see what would happen if we used this sort of stack in production. In particular, we need to set up access to these services from a browser. We’ll do that with a few port forwarding rules. In a new terminal session, we use kubectl to look up each piece we’ve created so far. Then, we set up a rule to visit each service as a localhost application in our browser of choice.
% POD_NAME=$(kubectl get pods --namespace monitoring -l "app=prometheus,component=server" -o jsonpath="{.items[0].metadata.name}")
% kubectl --namespace monitoring port-forward $POD_NAME 9090 &
% POD_NAME=$(kubectl get pods --namespace monitoring -l "app.kubernetes.io/instance=grafana" -o jsonpath="{.items[0].metadata.name}")
% kubectl --namespace monitoring port-forward $POD_NAME 3000 &
% POD_NAME=$(kubectl get pods --namespace kong -o jsonpath="{.items[0].metadata.name}")
% kubectl --namespace kong port-forward $POD_NAME 8000 &
Without closing that terminal session, you should now be able to reach each service in your browser by port: Prometheus at http://localhost:9090, Grafana at http://localhost:3000, and the Kong proxy at http://localhost:8000.
One thing you’ll notice is the login screen for Grafana, where you’ll need a password. To obtain that password, use the following command in your original terminal session:
% kubectl get secret --namespace monitoring grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo
This should give you a 40-character alphanumeric password that you can copy and paste into your browser at http://localhost:3000/login as the password, using “admin” as the username. If things are going according to plan, you’ll be able to click on the “Kong (official)” Grafana dashboard link and see the Grafana UI.
If your dashboard doesn’t say “No data” all over the place, you’re in good shape. Otherwise, you might want to review the above steps again to ensure that everything is configured correctly.
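If you do see “No data” everywhere, a useful first check is whether Prometheus is actually scraping the Kong pod. With the port-forwards from the previous step still running, open http://localhost:9090/targets in your browser, or do a crude check from the command line; any output other than 0 means Kong appears somewhere in the scrape targets:

% curl -s http://localhost:9090/api/v1/targets | grep -c kong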
Finally, let’s get our ingress point set up and expose a few sample services. If you were really trying to use this monitoring stack, this is where the deployment of your application would go. For now, we’re going to make a few dummy services exposed at three endpoints: /billing, /invoice, and /comments.
We’ll use httpbin, which lets us simulate a fully functioning service that returns whatever response codes we ask for. We’re doing this so we can quickly generate some traffic through the system. That way, the dashboard will look much like it would if you had deployed a real production system behind your Kong instance.
Create the following two YAML files:
services.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: http-svc
spec:
  replicas: 1
  selector:
    matchLabels:
      app: http-svc
  template:
    metadata:
      labels:
        app: http-svc
    spec:
      containers:
        - name: http-svc
          image: docker.io/kennethreitz/httpbin
          ports:
            - containerPort: 80
          env:
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
---
apiVersion: v1
kind: Service
metadata:
  name: billing
  labels:
    app: billing
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
      name: http
  selector:
    app: http-svc
---
apiVersion: v1
kind: Service
metadata:
  name: invoice
  labels:
    app: invoice
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
      name: http
  selector:
    app: http-svc
---
apiVersion: v1
kind: Service
metadata:
  name: comments
  labels:
    app: comments
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
      name: http
  selector:
    app: http-svc

ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: sample-ingresses
  annotations:
    konghq.com/strip-path: "true"
    kubernetes.io/ingress.class: kong
spec:
  rules:
    - http:
        paths:
          - path: /billing
            pathType: Prefix
            backend:
              service:
                name: billing
                port:
                  number: 80
          - path: /comments
            pathType: Prefix
            backend:
              service:
                name: comments
                port:
                  number: 80
          - path: /invoice
            pathType: Prefix
            backend:
              service:
                name: invoice
                port:
                  number: 80
Apply these configurations with kubectl:
% kubectl apply -f services.yaml
deployment.apps/http-svc created
service/billing created
service/invoice created
service/comments created
% kubectl apply -f ingress.yaml
ingress.networking.k8s.io/sample-ingresses created
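The pods need a little time to pull the httpbin image and start. If you’d rather block until the Deployment reports ready than keep checking by hand, kubectl wait can do that:

% kubectl wait --for=condition=Available deployment/http-svc --timeout=120s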
Once those pods are available, you can start sending traffic to your services:
% while true; do
  curl http://localhost:8000/billing/status/200
  curl http://localhost:8000/billing/status/501
  curl http://localhost:8000/invoice/status/201
  curl http://localhost:8000/invoice/status/404
  curl http://localhost:8000/comments/status/200
  curl http://localhost:8000/comments/status/200
  sleep 0.01
done
After letting your traffic simulation run for just a few minutes, return to your Grafana instance in your browser (http://localhost:3000) and zoom in on the last few minutes of activity. You should see lots of beautiful visualizations ready for your analysis, with just a little bit of Kubernetes know-how.
If you’ve been following along, now you’re free to play around with all the out-of-the-box monitoring solutions. You can even introduce new Prometheus metrics in your instance or create Grafana dashboards of your own. Since this is a custom installation of these powerful tools, there is no limit to what you can accomplish with your new APM stack.
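If you want a starting point for a custom panel or metric, you can query Kong’s metrics straight through the Prometheus API (or paste the same PromQL into a new Grafana panel). The metric name below, kong_http_status, is what the Kong Prometheus plugin exposes in the Kong 2.x series installed here; newer Kong releases rename some of these metrics, so adjust the query to whatever your instance reports:

% curl -sG http://localhost:9090/api/v1/query --data-urlencode 'query=sum(rate(kong_http_status[1m])) by (service, code)'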
When you finish the experiments, cleanup is simple. Since everything lives in the kind cluster we created at the start, deleting that cluster removes the whole stack:

% kind delete cluster --name kong
Once you’ve finished this Prometheus and Grafana on Kubernetes Ingress tutorial, you may find Kong’s other Kubernetes tutorials helpful as well.