Engineering
September 21, 2021
6 min read

Prometheus and Grafana APM on Kubernetes Ingress

Joseph Caudle

While monitoring is an important part of any robust application deployment, getting a full application performance monitoring (APM) stack up and running can seem overwhelming. In this post, we'll see how operating a Kubernetes environment with the open-source Kong Ingress Controller can simplify this seemingly daunting task, using Prometheus and Grafana to handle the APM setup.

The APM stack we're going to deploy will be based on Prometheus and Grafana, and it'll make use of Kong's Grafana dashboard. Together, the solution will give us access to many important system details about Kong and the services connected to it—right out of the box. We'll simulate traffic going through the deployed stack and then observe the impact of that traffic using our monitoring tools.

It's important to note that this implementation will be more of a demonstration than an instruction manual on deploying to a production environment. We're going to use kind to create our Kubernetes cluster, making it easy to follow along on your local machine. From here, you'll have a foundation that will translate easily to your production-ready business applications.
Getting Going With a Kubernetes Cluster on kind

Since we're using kind, there are only a few things we need to get started. Follow the installation instructions for your platform. We'll also use the kubectl command-line tool and Helm, so make sure you have up-to-date versions of all tools to follow along.

% kind version
kind v0.11.1 go1.16.5 darwin/amd64

% kubectl version
Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.3", GitCommit:"ca643a4d1f7bfe34773c74f79527be4afd95bf39", GitTreeState:"clean", BuildDate:"2021-07-15T21:04:39Z", GoVersion:"go1.16.6", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.1", GitCommit:"5e58841cce77d4bc13713ad2b91fa0d961e69192", GitTreeState:"clean", BuildDate:"2021-05-21T23:01:33Z", GoVersion:"go1.16.4", Compiler:"gc", Platform:"linux/amd64"}

% helm version
version.BuildInfo{Version:"v3.6.3", GitCommit:"d506314abfb5d21419df8c7e7e68012379db2354", GitTreeState:"dirty", GoVersion:"go1.16.6"}

First, we'll create a named Kubernetes cluster:

% kind create cluster --name kong
Creating cluster "kong" ...
 ✓ Ensuring node image (kindest/node:v1.21.1) 🖼
 ✓ Preparing nodes 📦
 ✓ Writing configuration 📜
 ✓ Starting control-plane 🕹️
 ✓ Installing CNI 🔌
 ✓ Installing StorageClass 💾
Set kubectl context to "kind-kong"
You can now use your cluster with:

kubectl cluster-info --context kind-kong

Have a question, bug, or feature request? Let us know! https://kind.sigs.k8s.io/#community 🙂

Installing Prometheus and Grafana

Now, we're ready to get our monitoring stack deployed. It may seem a little strange to start with monitoring rather than our Kubernetes services, but deploying it first means we'll know immediately whether Kong is properly connected to Prometheus and Grafana. We'll start by adding repos for Prometheus and Grafana:

% helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
"prometheus-community" has been added to your repositories

% helm repo add grafana https://grafana.github.io/helm-charts
"grafana" has been added to your repositories

If you've already installed these repositories, just make sure to run helm repo update to ensure you have up-to-date Helm charts.

We'll start with Prometheus configuration. The first step is to configure the scrape interval for Prometheus. We want our system to check for metrics every 10 seconds. We can do that with the following YAML:

prometheus.yaml
server:
 global:
   scrape_interval: 10s

Put that file in your working directory, and let's get helming!

% helm install prometheus prometheus-community/prometheus --namespace monitoring --create-namespace --values prometheus.yaml --version 14.6.0

That should display a message saying, "For more information on running Prometheus, visit: https://prometheus.io/." Notice that we created a new namespace for cluster monitoring. This will allow us to keep things clean in our K8s cluster.

While our Prometheus installation spins up, we'll set up Grafana as well. Let's start with this YAML:

grafana.yaml
persistence:
  enabled: true  # enable persistence using Persistent Volumes
datasources:
  datasources.yaml:
    apiVersion: 1
    datasources:  # configure Grafana to read metrics from Prometheus
      - name: Prometheus
        type: prometheus
        url: http://prometheus-server # Since Prometheus is deployed in
        access: proxy    # the same namespace, this resolves
                         # to the Prometheus Server we just installed
        isDefault: true  # The default data source is Prometheus
dashboardProviders:
  dashboardproviders.yaml:
    apiVersion: 1
    providers:
      - name: 'default' # Configure a dashboard provider file to
        orgId: 1        # put the Kong dashboard into.
        folder: ''
        type: file
        disableDeletion: false
        editable: true
        options:
          path: /var/lib/grafana/dashboards/default
dashboards:
 default:
   kong-dash:
     gnetId: 7424  # Install the following Grafana dashboard in the
     revision: 7   # instance: https://grafana.com/dashboards/7424
     datasource: Prometheus

A lot is going on in this configuration file, but the comments should give a sufficient explanation. Essentially, when Grafana starts up, we want the Prometheus connection and the Kong dashboard already set up, too, rather than having to go through those steps manually. This file gets us that setup.
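If you later want more panels, additional community dashboards can be added to the same dashboards section using the identical schema. As a sketch (the second gnetId below is a placeholder — substitute the numeric ID of any dashboard published on grafana.com/dashboards):

```yaml
# Hypothetical extension of the dashboards: section above.
dashboards:
  default:
    kong-dash:
      gnetId: 7424      # the Kong dashboard from the config above
      revision: 7
      datasource: Prometheus
    another-dash:
      gnetId: 12345     # placeholder ID — replace with a real dashboard ID
      revision: 1
      datasource: Prometheus
```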

The Helm command is very similar to the one for Prometheus:

% helm install grafana grafana/grafana --namespace monitoring --values grafana.yaml --version 6.16.2

After running this command, you'll see instructions for getting the admin password for your installation. We'll come back to that soon.

Installing and Configuring Kong Gateway

Now that our monitoring stack is getting going, we can configure Kong to do the heavy lifting in our application. First, we'll make sure to add the Kong Helm chart repo:

% helm repo add kong https://charts.konghq.com
"kong" has been added to your repositories

You'll also need to create a YAML for the annotations we want to add to our Kong pod:

kong.yaml
podAnnotations:
   prometheus.io/scrape: "true" # Ask Prometheus to scrape the
   prometheus.io/port: "8100"   # Kong pods for metrics

With the YAML in place, let's get Kong installed via Helm:

% helm install mykong kong/kong --namespace kong --create-namespace --values kong.yaml --set ingressController.installCRDs=false --version 2.3.0

Once you've gotten that kicked off, create the following YAML file to connect Kong and Prometheus:

prometheus-kong-plugin.yaml
apiVersion: configuration.konghq.com/v1
kind: KongClusterPlugin
metadata:
 name: prometheus
 annotations:
   kubernetes.io/ingress.class: kong
 labels:
   global: "true"
plugin: prometheus

Apply that configuration with kubectl:

% kubectl apply -f prometheus-kong-plugin.yaml

Final Server Configuration

With our stack fully built out in our kind cluster, we need to see what would happen if we used this sort of stack in production. In particular, we need to set up access to these services from a browser. We'll do that with a few port forwarding rules. In a new terminal session, we use kubectl to look up each piece we've created so far. Then, we set up a rule to visit each service as a localhost application in our browser of choice.

% POD_NAME=$(kubectl get pods --namespace monitoring -l "app=prometheus,component=server" -o jsonpath="{.items[0].metadata.name}")
% kubectl --namespace monitoring port-forward $POD_NAME 9090 &

% POD_NAME=$(kubectl get pods --namespace monitoring -l "app.kubernetes.io/instance=grafana" -o jsonpath="{.items[0].metadata.name}")
% kubectl --namespace monitoring port-forward $POD_NAME 3000 &

% POD_NAME=$(kubectl get pods --namespace kong -o jsonpath="{.items[0].metadata.name}")
% kubectl --namespace kong port-forward $POD_NAME 8000 &

Without closing the terminal session, you should now be able to access different services based on the port:

  • localhost:9090 for your Prometheus installation
  • localhost:3000 for your Grafana installation
  • localhost:8000 for your Kong installation

One thing you'll notice is the login screen for Grafana, where you'll need a password. To obtain that password, use the following command in your original terminal session:

% kubectl get secret --namespace monitoring grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo

This should give you a 40-character alphanumeric password that you can copy and paste into your browser at http://localhost:3000/login as the password, using "admin" as the username. If things are going according to plan, you'll be able to click on the "Kong (official)" Grafana dashboard link and see the Grafana UI:
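The pipeline above fetches the base64-encoded secret from Kubernetes and decodes it locally. As a quick illustration of the decoding step (the encoded string here is a made-up example, not a real Grafana password):

```shell
# Decode a base64-encoded value, exactly as the kubectl pipeline above does.
# "c3VwZXItc2VjcmV0LXBhc3N3b3Jk" is a made-up example, not a real secret.
echo "c3VwZXItc2VjcmV0LXBhc3N3b3Jk" | base64 --decode ; echo
# → super-secret-password
```

The trailing echo just adds a newline, since the decoded secret doesn't end with one.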

Grafana Dashboard

If your dashboard doesn't say "No data" all over the place, you're in good shape. Otherwise, you might want to review the above steps again to ensure that everything is configured correctly.

Let's Demo This Dashboard!

Finally, let's get our ingress point set up and expose a few sample services. If you were really trying to use this monitoring stack, this is where the deployment of your application would go. For now, we're going to make a few dummy services with the following endpoints:

  • /billing
  • /invoice
  • /comments

We'll use httpbin, which will allow us to simulate a fully functioning service with response codes as we ask for them. We're doing this so we can quickly generate some traffic to our system. That way, things in the dashboard will look like they would if you deployed a real production system to your Kong instance.

Create the following two YAML files:

services.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: http-svc
spec:
  replicas: 1
  selector:
    matchLabels:
      app: http-svc
  template:
    metadata:
      labels:
        app: http-svc
    spec:
      containers:
      - name: http-svc
        image: docker.io/kennethreitz/httpbin
        ports:
        - containerPort: 80
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP

---

apiVersion: v1
kind: Service
metadata:
  name: billing
  labels:
    app: billing
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    name: http
  selector:
    app: http-svc
---

apiVersion: v1
kind: Service
metadata:
  name: invoice
  labels:
    app: invoice
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    name: http
  selector:
    app: http-svc
---

apiVersion: v1
kind: Service
metadata:
  name: comments
  labels:
    app: comments
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    name: http
  selector:
    app: http-svc


ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: sample-ingresses
  annotations:
    konghq.com/strip-path: "true"
    kubernetes.io/ingress.class: kong
spec:
  rules:
  - http:
     paths:
     - path: /billing
       pathType: Prefix
       backend:
         service:
           name: billing
           port:
             number: 80
     - path: /comments
       pathType: Prefix
       backend:
         service:
           name: comments
           port:
             number: 80
     - path: /invoice
       pathType: Prefix
       backend:
         service:
           name: invoice
           port:
             number: 80

Apply these configurations with kubectl:

% kubectl apply -f services.yaml
deployment.apps/http-svc created
service/billing created
service/invoice created
service/comments created
% kubectl apply -f ingress.yaml
ingress.networking.k8s.io/sample-ingresses created

After a few minutes, those pods should be available, and you can start sending traffic to your services:

% while true;
do
 curl http://localhost:8000/billing/status/200
 curl http://localhost:8000/billing/status/501
 curl http://localhost:8000/invoice/status/201
 curl http://localhost:8000/invoice/status/404
 curl http://localhost:8000/comments/status/200
 curl http://localhost:8000/comments/status/200
 sleep 0.01
done

After letting your traffic simulation run for just a few minutes, return to your Grafana instance in your browser (http://localhost:3000) and zoom in on the last few minutes of activity. You should see lots of beautiful visualizations ready for your analysis, with just a little bit of Kubernetes know-how.

Dashboard panels: request rate, latencies, and bandwidth

Conclusion

If you've been following along, now you're free to play around with all the out-of-the-box monitoring solutions. You can even introduce new Prometheus metrics in your instance or create Grafana dashboards of your own. Since this is a custom installation of these powerful tools, there is no limit to what you can accomplish with your new APM stack.

When you finish the experiments, cleanup is simple:

  • Give your while loop a Ctrl+C to terminate it.
  • Shut down your port forwarding processes.
  • Run kind delete cluster --name kong to terminate your cluster.

Once you've finished this Prometheus and Grafana on Kubernetes Ingress tutorial, you may find these other Kubernetes tutorials helpful:

  • Configuring a Kubernetes Application on Kong Konnect
  • Kubernetes Ingress gRPC Example With a Dune Quote Service
  • Managing Docker Apps With Kubernetes Ingress Controller

Have questions or want to stay in touch with the Kong community? Join us wherever you hang out:
  • 🌎 Join the Kong Community
  • 🍻 Join our Meetups
  • 💯 Apply to become a Kong Champion
  • 📺 Subscribe on YouTube
  • 🐦 Follow us on Twitter
  • ⭐ Star us on GitHub
  • ❓ Ask and answer questions on Kong Nation
