Kuma is configurable through policies. These enable users to configure their service mesh with retries, timeouts, observability, and more.
Policies contain three main pieces of information:
Which proxies are being configured
What traffic of these proxies the configuration applies to (i.e., inbound, outbound, or even a subset of traffic in one direction)
The actual configuration to apply
Kuma 2.0 introduces a new matching API that's more understandable and powerful. In this article, we explain why we’re doing this, how to use the new policy matching API, and what’s coming next.
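For context, here is roughly what a policy looks like with the pre-2.0 matching API. The sketch below is a legacy Retry policy on Kubernetes; the exact conf values (such as numRetries) are illustrative:

```yaml
apiVersion: kuma.io/v1alpha1
kind: Retry
mesh: default
metadata:
  name: web-to-backend-retry
spec:
  sources:
    - match:
        kuma.io/service: web_default_svc_80
  destinations:
    - match:
        kuma.io/service: backend_default_svc_80
  conf:
    http:
      numRetries: 5 # illustrative value
```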
This policy will retry failed requests for any traffic from web_default_svc_80 to backend_default_svc_80.
But the current API has some issues.
It’s unclear whether a policy is inbound (applying to traffic coming into a service) or outbound (applying to traffic going out of a service).
As a result, it's unclear which proxy's configuration a policy modifies.
In the example above, without further context it’s not possible to say whether the configuration of web_default_svc_80 or of backend_default_svc_80 is being modified.
We can use the Inspect API on policies that are already applied, but that doesn't help when first writing a policy.
Composing policies is also challenging because of shadowing and ordering (see #2417). Shadowing happens when different policies have the same selector: one silently takes precedence over the other because there is currently no way to merge policies.
Introducing targetRef
One of the primary goals of Kuma is ease of use.
It became obvious that the policy matching API — whilst simple — wasn't as powerful as we wanted it to be. Therefore, after some discussion we decided to rewrite it.
The new API is built around targetRef, a way to select a set of proxies. A targetRef has one of the following kinds:
Mesh: all data plane proxies in the mesh.
MeshSubset: like Mesh, with extra tags to select a subset of all dataplanes.
MeshService: all dataplanes that provide a given service.
MeshServiceSubset: like MeshService, with extra tags to select a subset of all dataplanes. For example, you could pick only dataplanes with the tag version: v2.
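For example, a targetRef selecting only the v2 instances of a backend service could look like this (the service name below is illustrative):

```yaml
targetRef:
  kind: MeshServiceSubset
  name: backend_default_svc_80
  tags:
    version: v2
```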
These targetRefs are used in three possible places:
Top level: the subset of proxies affected by this policy
From: the subset of the incoming traffic to apply the configuration to
To: the subset of the outgoing traffic to apply the configuration to
All policies share a common architecture and, depending on the policy type, have a from and/or a to section.
The architecture of a policy looks like:
apiVersion: kuma.io/v1alpha1
kind: MeshTimeout
metadata:
  name: my-timeout
  namespace: kuma-system # Policies are now namespaced.
  labels:
    kuma.io/mesh: default # optional mesh
spec:
  targetRef: # (1) top-level targetRef, defines which dataplanes are getting their configuration modified by this policy
    kind: MeshSubset
    tags:
      with-timeout: v1
  to: # a list of configurations to apply to a subset of the outgoing traffic
    - targetRef: # (2)
        kind: MeshService
        name: outgoingServiceA
      default: # actual configuration
        http:
          requestTimeout: 5s
    - targetRef: # (3)
        kind: MeshService
        name: outgoingServiceB
      default:
        http:
          requestTimeout: 2s
  from: # a list of configurations to apply to a subset of the incoming traffic
    - targetRef: # (4)
        kind: Mesh
      default: # actual configuration
        http:
          requestTimeout: 1s
With this top-level targetRef (1) the policy only affects data plane proxies with the tag with-timeout=v1.
When applied, it will set different timeouts for incoming and outgoing traffic:
Requests for outgoingServiceA (2) have a 5 second timeout.
Requests for outgoingServiceB (3) have a 2 second timeout.
All other outgoing requests will inherit the default timeout.
On the receiving side (4), we’ll have a timeout of 1 second regardless of the source service.
The following schema summarizes this:
As you can see, this new policy makes it easy to understand:
Which data plane proxies are being configured, thanks to the top-level targetRef
What traffic for these proxies is affected, thanks to the from and to targetRefs
What configuration to apply, thanks to the default inside each to or from entry
Merging
Now that we’ve shown how a single policy works, let's describe what happens when many policies of the same type are at play.
Here we drew heavily on GEP-713, a Kubernetes Gateway API proposal.
All targetRef kinds are ordered from least to most specific: Mesh, MeshSubset, MeshService, MeshServiceSubset. When multiple policies of the same type select the same data plane proxy, their configurations are merged, and the policy with the more specific targetRef takes precedence for any conflicting fields.
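To illustrate merging in practice, here is a hypothetical pair of MeshTimeout policies (names and values are illustrative):

```yaml
# Policy A: a baseline timeout for every proxy in the mesh
apiVersion: kuma.io/v1alpha1
kind: MeshTimeout
metadata:
  name: mesh-default-timeout
  namespace: kuma-system
spec:
  targetRef:
    kind: Mesh
  to:
    - targetRef:
        kind: Mesh
      default:
        http:
          requestTimeout: 10s
---
# Policy B: selects only proxies tagged version: v2
apiVersion: kuma.io/v1alpha1
kind: MeshTimeout
metadata:
  name: v2-timeout
  namespace: kuma-system
spec:
  targetRef:
    kind: MeshSubset
    tags:
      version: v2
  to:
    - targetRef:
        kind: Mesh
      default:
        http:
          requestTimeout: 3s
```

With both policies applied, proxies tagged version: v2 end up with a 3s request timeout, because a MeshSubset targetRef is more specific than a Mesh one; every other proxy keeps the 10s baseline.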
Other policies will roll out in Kuma 2.1, and we maintain an equivalence table between old and new policies in the docs.
You can follow the progress in the umbrella GitHub issue #5194.
All designs are MADRs (Markdown Architectural Decision Records), and your opinions and contributions are very welcome.
We are also sharing this experience to help the Kubernetes GAMMA initiative.
These new policies are in beta, and we encourage you to try them out. However, mixing new and old policies of the same type is currently undefined behavior. Migration strategies and tooling will come in future releases of Kuma.
This post described what policies are, the shortcomings of the existing API, and introduced its successor. We hope you'll enjoy this improvement.
If you have any questions feel free to ask the Kuma Community on Slack or join the monthly community call.