The more services you have running across different clouds and [Kubernetes](https://konghq.com/blog/learning-center/what-is-kubernetes) clusters, the harder it is to ensure that you have a central place to collect service mesh observability metrics. That's one of the reasons we created [Kuma](https://kuma.io), an open source control plane for service mesh. In this tutorial, I'll show you how to set up and leverage the Traffic Metrics and Traffic Trace policies that Kuma provides out of the box.
Once you have your Prometheus and Grafana infrastructure up and running, update your default mesh to enable automatic metrics collection by exposing the built-in Prometheus metrics endpoint on each sidecar. You'll need to make sure that [mutualTLS](https://kuma.io/docs/1.1.1/policies/mutual-tls) is enabled first.
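As a sketch of what that looks like, the `default` Mesh resource can carry both the mTLS and metrics configuration. The backend names below (`ca-1`, `prometheus-1`) are illustrative; adjust them to your setup.

```shell
# Enable mTLS and the built-in Prometheus metrics backend on the default mesh.
# Backend names are placeholders.
kubectl apply -f - <<EOF
apiVersion: kuma.io/v1alpha1
kind: Mesh
metadata:
  name: default
spec:
  mtls:
    enabledBackend: ca-1
    backends:
      - name: ca-1
        type: builtin
  metrics:
    enabledBackend: prometheus-1
    backends:
      - name: prometheus-1
        type: prometheus
EOF
```

With this applied, every data plane proxy in the mesh exposes a metrics endpoint that the Prometheus deployment scrapes automatically.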
Next, expose Grafana so that you can look at the default dashboards that Kuma provides. To do this, port forward Grafana from the Kuma metrics namespace.
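If you installed the metrics stack with `kumactl install metrics | kubectl apply -f -`, it lands in the `kuma-metrics` namespace, and the port forward looks roughly like this (service name and port may differ by Kuma version):

```shell
# Forward local port 3000 to the Grafana service installed by kumactl.
kubectl port-forward svc/grafana -n kuma-metrics 3000:80
```

Then open http://localhost:3000 in your browser to reach the Grafana UI.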
In Grafana, there are four dashboards that you can visualize out of the box.
In my example, I can see all the network traffic from my sample application to Redis, including requests.
I can visualize the actual control plane metrics to determine the overall performance of the control plane. This information will be helpful as you scale your service mesh technology. It'll help you determine if your control plane is experiencing a bottleneck situation or not.
What I did above is a little bit of an anti-pattern. I should not be consuming applications by port forwarding the sample application. Instead, I should use an ingress, like [Kong Ingress Controller](https://konghq.com/solutions/kubernetes-ingress). Kuma is universal on both the data plane and the control plane. That means you can automatically deploy a sidecar proxy in Kubernetes with automatic injection.
To install Kong Ingress Controller inside my Kubernetes cluster, I'll open a new terminal. There should be a new Kong namespace.
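One common way to install it is with Helm; the release and namespace names here are assumptions:

```shell
# Install Kong Ingress Controller into its own namespace.
helm repo add kong https://charts.konghq.com
helm repo update
helm install kong kong/kong --namespace kong --create-namespace

# Verify that the Kong pods came up.
kubectl get pods -n kong
```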
To include this as part of the service mesh, annotate the Kong namespace for Kuma sidecar injection. When Kuma sees this annotation on a namespace, it knows that it must inject the sidecar proxy into every pod running in that namespace.
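The annotation is a one-liner (in more recent Kuma versions the same key can also be applied as a label):

```shell
# Mark the kong namespace for automatic Kuma sidecar injection.
kubectl annotate namespace kong kuma.io/sidecar-injection=enabled
```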
Lastly, retrigger a deployment of Kong to inject the sidecar. You should see the sidecar showing up in your API gateway data planes.
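A simple way to retrigger the deployment, assuming Kong lives in the `kong` namespace:

```shell
# Restart the deployments so pods are recreated with the Kuma sidecar injected.
kubectl rollout restart deployment -n kong

# Each Kong pod should now show an additional kuma-sidecar container.
kubectl get pods -n kong
```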
With Kong Ingress Controller up and running, I'll expose the address of my minikube. At this point there is no ingress rule defined, so Kong doesn't know how to route the request and has no API to proxy it to.
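With minikube, you can get a routable URL for the Kong proxy service like this (the service name assumes a default Helm install with release name `kong`):

```shell
# Print a reachable URL for the Kong proxy service.
minikube service kong-kong-proxy -n kong --url
```

Hitting that URL before any ingress rule exists returns Kong's "no Route matched" response.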
To tell Kong how to process this request, I must create an ingress rule. I'll make an ingress rule that proxies the root path to my sample application.
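A minimal rule might look like the following; the service name (`sample-app`) and port are assumptions, so substitute your own application's values:

```shell
# Route all traffic on the root path through Kong to the sample application.
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: sample-app
  annotations:
    kubernetes.io/ingress.class: kong
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: sample-app
                port:
                  number: 80
EOF
```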
After refreshing, I see my sample application running through the ingress.
### **Traffic Trace**
Injecting distributed tracing into each of your services will enable you to monitor and troubleshoot microservice behavior without introducing any dependencies to the existing application code.
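Concretely, this takes two pieces of configuration: a tracing backend on the mesh and a TrafficTrace policy that selects which services emit spans. Add the `tracing` section alongside any existing `mtls`/`metrics` settings in your Mesh object; the backend name and collector URL below assume the stack installed by `kumactl install tracing`.

```shell
# 1. Register Jaeger's Zipkin-compatible collector as a tracing backend.
# 2. Apply a TrafficTrace policy that traces every service in the mesh.
kubectl apply -f - <<EOF
apiVersion: kuma.io/v1alpha1
kind: TrafficTrace
metadata:
  name: trace-all-traffic
mesh: default
spec:
  selectors:
    - match:
        kuma.io/service: '*'
  conf:
    backend: jaeger-collector
EOF
```

The `jaeger-collector` backend referenced here would be declared in the Mesh's `tracing` section, pointing at the collector's Zipkin endpoint (for example `http://jaeger-collector.kuma-tracing:9411/api/v2/spans`).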
In the GUI, you should see that distributed tracing is enabled.
Once some traffic comes through the gateway, I'll expose the tracing service to see those traces.
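Assuming the tracing stack was installed with `kumactl install tracing` into the `kuma-tracing` namespace, the port forward might look like this (service name and target port may vary by version):

```shell
# Forward local port 3000 to the Jaeger query UI.
kubectl port-forward svc/jaeger-query -n kuma-tracing 3000:16686
```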
If I go to port 3000, I should see the Jaeger UI. I'm generating some traffic. I'll trigger a request, increment my Redis and refresh Jaeger. Now my traces are automatically showing up in Jaeger.
Now you should be able to visualize the spans and the time. You could push this through any system that gives you access to a service map. The below screenshot shows a basic service map that Jaeger provides.
## **Automate Service Mesh Observability With Kuma**
Whether you have a few services or thousands, the process of automating service mesh observability with Kuma is the same.
We designed Kuma, built on top of the Envoy proxy, for architects who need to support application teams across the entire organization. Kuma supports every environment teams are running on, including Kubernetes and virtual machines.
You may also want to demo our enterprise offering, [Kong Mesh](https://konghq.com/kong-mesh?utm_source=developer&utm_medium=blog&utm_campaign=community). Many customers use Kong Mesh for cost reduction initiatives. For example, some of our customers take advantage of the client-side load balancing that Kong Mesh provides to eliminate the load balancers sitting in front of every service. In more than one case, this has generated seven-figure cost savings.