In this blog post and the video below, we’re going to jump right into the relationship between these products and how you can use them together. First, let’s define a couple of the terms involved.
Kuma and Kong Mesh ship with a *gateway mode* that disables inbound traffic interception by the service’s Envoy sidecar, allowing the service to receive traffic from outside the mesh directly while its outbound traffic still flows through the mesh. We can bolt on any API gateway (ideally Kong Gateway, as mentioned above) as our entry point into the mesh using this functionality. It requires that you’ve deployed your API gateway with an Envoy sidecar acting as the *data plane*.
In Kuma/Kong Mesh’s *Kubernetes mode*, this is handled through a specific set of annotations applied to your deployment YAML (as part of the Deployment or Pod object):
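A minimal sketch of what those annotations look like on a Deployment follows. The names, namespace, and image are placeholders; `kuma.io/gateway: enabled` marks the sidecar as a gateway, and `kuma.io/sidecar-injection: enabled` requests the sidecar:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kong-gateway        # placeholder name
  namespace: kong           # placeholder namespace
spec:
  selector:
    matchLabels:
      app: kong-gateway
  template:
    metadata:
      labels:
        app: kong-gateway
      annotations:
        kuma.io/gateway: enabled            # run the sidecar in gateway mode
        kuma.io/sidecar-injection: enabled  # inject the kuma-dp sidecar
    spec:
      containers:
        - name: proxy
          image: kong:latest  # placeholder image
```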
**Note:** The sidecar injection can be applied in the deployment/pod object or against the namespace as a whole. If it’s deployed at the namespace level, you can omit the annotation here.
In *Universal mode*, we handle this configuration through the `kuma-dp` configuration file:
```yaml
...
networking:
  address: 10.0.0.1
  gateway:
    tags:
      kuma.io/service: kong
...
```
The `kuma-dp` process is the sidecar process itself, so there is no need for an injection annotation as in Kubernetes.
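For illustration, starting the sidecar in Universal mode might look like the following; the control plane address, file paths, and token location are placeholders for your environment:

```sh
kuma-dp run \
  --cp-address=https://kuma-control-plane:5678 \
  --dataplane-file=dataplane.yaml \
  --dataplane-token-file=/var/run/kuma/token
```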
With these configurations applied, the service in question will be marked in Kuma/Kong Mesh as an API gateway service and allow inbound connectivity.
## Understanding Kong Gateway
Kong is most widely known for delivering the most powerful [API gateway](https://github.com/Kong/kong) in the market today. This API gateway provides the smoothest process to get workloads into a given environment, whether it’s a mesh environment or otherwise. The API gateway acts as the user’s front door to consuming everything from APIs to general web content.
[Kong Ingress Controller](https://konghq.com/solutions/kubernetes-ingress) (KIC) is based on Kong Gateway and designed to operate in a Kubernetes-native way. It deploys ready-to-go for service mesh functionality and can be used to quickly get started with an inbound gateway in your environment. In a future release, we will include this functionality in the `kumactl install` commands.
KIC comes configured out-of-the-box to act as a gateway (the `kuma.io/gateway: enabled` annotation is already applied), so you can get started using it with Kuma/Kong Mesh quickly. To leverage KIC in a single-site mesh environment, you’ll want to reference the existing Kubernetes service name for the service registered in Kuma as the backend for the ingress. Kuma requires that you create a service whenever a mesh object is deployed. You can see an example of this service spec below.
```yaml
apiVersion: v1
kind: Service
metadata:
  name: front
  namespace: kong
spec:
  type: ExternalName
  externalName: frontend.kong.svc.80.mesh
```
In this example, I’m creating a service named `front` of type ExternalName that points to `frontend.kong.svc.80.mesh`. This tells the Kubernetes cluster that any request coming into the `front` service name should ultimately resolve to the external name we’ve specified. That name is a `.mesh` address, which sends traffic through KIC’s Envoy sidecar; the resolution is handled by Kuma or Kong Mesh’s internal DNS.
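To complete the picture, here is a sketch of an ingress that uses the `front` service as its backend. The resource name and path are placeholders, and `ingressClassName: kong` assumes KIC’s default ingress class:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: front-ingress   # placeholder name
  namespace: kong
spec:
  ingressClassName: kong
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: front   # the ExternalName service defined above
                port:
                  number: 80
```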
At a high level, service meshes work by moving the connectivity logic outside of the application and feeding it through proxies that live alongside the application. We commonly refer to these as *sidecars* or *data planes*. When we move the communication logic into these proxies, it allows us to add additional capabilities on top of that communication: traffic routing control, greater visibility into workload communication patterns (observability), end-to-end traffic encryption, and the ability to influence application communication via policies.
These proxies (sidecars/data planes) receive their configuration from the *control plane* of the service mesh. When we apply policy or configuration details, those configurations are read into the service mesh, translated into configuration that Envoy understands, and delivered to the proxies in the environment.
In addition, the control plane also delivers connectivity details for other proxies in the environment – enabling the proxies to communicate with each other. This interaction model is the core of the “decentralized” model that we talk about in service mesh, and it’s one of the strongest aspects of the platform.
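As an illustrative sketch of the policy-to-proxy flow described above, here is a Kuma policy applied in Kubernetes mode. This is a TrafficPermission that allows all service-to-service traffic; the resource name is a placeholder:

```yaml
apiVersion: kuma.io/v1alpha1
kind: TrafficPermission
mesh: default
metadata:
  name: allow-all-traffic  # placeholder name
spec:
  sources:
    - match:
        kuma.io/service: '*'   # any source service
  destinations:
    - match:
        kuma.io/service: '*'   # any destination service
```

When this resource is applied, the control plane translates it into Envoy configuration and pushes it to every affected sidecar.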
### Better Together
We’ve built Kuma and Kong Mesh to be highly extensible, so you can integrate the tools you have already operationalized in your environment with service mesh. Fronting service mesh with Kong Gateway will give you the greatest level of flexibility and functionality as you look to create end-to-end connectivity in your environment.