When setting up Kubernetes for the first time, one of the networking challenges you might face is how to safely grant outside clients access to your cluster. By default, pods within a cluster can communicate with all other pods and services. Access from anything outside that group, however, should be restricted.
In this post, we’ll take a closer look at how to introduce a process for monitoring and observing Kubernetes traffic using [Kuma](https://kuma.io), a modern distributed control plane with a bundled Envoy Proxy integration.
## Setting Up a Kuma Service Mesh
Application stacks that run as individual containers need to communicate with one another and with outside clients. To coordinate all the requirements necessary to support such platforms—including security, routing and load-balancing—the concept of a **service mesh** emerged. The goal of a service mesh is to provide seamless management of any service on the network. Thus, while an ingress controller handles the behavior of incoming traffic, a service mesh is responsible for overseeing *all* aspects of the network, such as monitoring and configuration.
Kuma is one example of a service mesh. It’s an open source project that works across various environments, including Kubernetes and virtual machines, and supports multi-zone deployments. Kuma is supported by the same team that built [Kong](https://github.com/Kong/kong), a popular API gateway that simplifies network communication. Kong has a vast plugin ecosystem that enables you to easily deploy and manage HTTP requests, responses and routes across your entire fleet. Kuma works hand-in-hand with Kong, but the two projects don’t rely on each other, as we’ll see below.
In addition to providing fine-grained traffic control capabilities, Kuma also offers rapid metrics and observability analyses. Being able to secure your networking access is only part of the solution. Since Kuma integrates with [Prometheus](https://prometheus.io) for native data collection and [Grafana](https://grafana.com) for charting and viewing that data, you’ll be able to see precisely how your load balancing and client routing are behaving.
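As a sketch of what enabling this looks like, metrics in Kuma are turned on per mesh. Assuming the default mesh and a Prometheus backend named `prometheus-1` (the backend name is our choice, not a required value), the configuration is roughly:

```yaml
apiVersion: kuma.io/v1alpha1
kind: Mesh
metadata:
  name: default
spec:
  metrics:
    # Which of the backends below is active for this mesh
    enabledBackend: prometheus-1
    backends:
      - name: prometheus-1
        type: prometheus
```

With this in place, each Envoy sidecar exposes metrics that Prometheus can scrape, and Kuma ships a `kumactl install metrics` helper for deploying a bundled Prometheus and Grafana stack.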
Installing Kuma is a snap. First, you can download and run the installer like so:
```sh
curl -L https://kuma.io/installer.sh | sh -
```
Then, switch to the installation directory:
```sh
cd kuma-1.1.2/bin
```
From here, you can run Kuma in multi-zone mode, or in standalone mode if Kuma runs in just a single Kubernetes cluster. The command below will deploy Kuma in a single-zone configuration, the default:
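Assuming the `kumactl` binary from the installation directory above is on your `PATH` and `kubectl` is pointed at your cluster, deploying the control plane looks like this:

```sh
# Render the control-plane manifests and apply them to the cluster.
# Standalone (single-zone) mode is the default.
kumactl install control-plane | kubectl apply -f -
```

This creates the `kuma-system` namespace and installs the control plane there; you can confirm it’s running with `kubectl get pods -n kuma-system`.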
Depending on your needs, opting for a more customizable service mesh, like Kuma, can help you achieve your specific goals. For example, although Calico adheres to the [Network Policies](https://kubernetes.io/docs/concepts/services-networking/network-policies) Kubernetes provides, [its format for setting up traffic rules](https://docs.projectcalico.org/security/service-accounts) is more opaque than Kuma's. Kuma provides a way of configuring [network policies](https://kuma.io/policies) that run parallel to the first-class API Kubernetes provides. It should come as no surprise that Kuma is also compatible with CNI. This means you can easily swap out any network policies defined by Calico (or any project that uses a CNI-based protocol) for Kuma's traffic rules. The main differentiator between such projects comes down to features. Kuma, for example, can act as a service mesh, an observability platform *and* a network policy manager all in one. Other projects may have different priorities, and it is the developer's responsibility to make sure they can all interact with one another properly.
## Architecting Traffic Policies in Kubernetes with Kuma
With Kuma set up and running on Kubernetes, let’s see how to establish traffic rules to manage incoming access.
Imagine the following scenario: an eCommerce platform that relies on two microservices that communicate to meet the business's needs—let’s call them `backend1` and `backend2`. A third microservice acts as a public API, and any incoming request to this service privately queries the other two. We’d like to expose the API to the public but keep the other two microservices isolated from external networks.
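In Kuma, this kind of rule is expressed with a `TrafficPermission` policy, which whitelists which services may talk to which (it takes effect once mutual TLS is enabled on the mesh). A minimal sketch for our scenario, assuming the services live in the `default` namespace on port 80 (Kuma tags Kubernetes services as `<name>_<namespace>_svc_<port>`):

```yaml
apiVersion: kuma.io/v1alpha1
kind: TrafficPermission
mesh: default
metadata:
  name: api-to-backend1
spec:
  # Only the public API service may initiate connections
  sources:
    - match:
        kuma.io/service: api_default_svc_80
  # ...and only toward backend1
  destinations:
    - match:
        kuma.io/service: backend1_default_svc_80
```

A second, analogous policy would cover `backend2`. With no permission granting external services access to `backend1` or `backend2`, traffic to them from anywhere but the API service is denied.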
## One Control Plane for Security, Observability and Routing
The goal of any service mesh is to provide a single location to configure how your network behaves across your entire cluster. A service mesh can simplify much of the communication across disparate services. It’s often better to opt for a more restrictive network security posture than one that is open to any connection. Implementing a [zero-trust security policy](https://kuma.io/docs/1.1.6/policies/mutual-tls) with Kuma is a first-class feature, not an afterthought.
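As a sketch, zero-trust in Kuma starts with enabling mutual TLS on the mesh. With the `builtin` backend, Kuma's control plane generates and rotates the certificates itself (the backend name `ca-1` below is our choice):

```yaml
apiVersion: kuma.io/v1alpha1
kind: Mesh
metadata:
  name: default
spec:
  mtls:
    # Activate the certificate authority defined below
    enabledBackend: ca-1
    backends:
      - name: ca-1
        # Kuma generates and rotates the CA automatically
        type: builtin
```

Once mTLS is on, all service-to-service traffic is encrypted and authenticated, and only connections explicitly allowed by a `TrafficPermission` policy are permitted.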
I hope you found this information on traffic policies in Kubernetes helpful. Get in touch via the [Kuma community](https://kuma.io/community) or explore the Kuma documentation to learn more about other ways you can leverage Kuma for your connectivity needs.