In this episode of Kongcast, I spoke with Scott Lowe, principal field engineer at Kong, about what a service mesh does and when to use it, among other common mesh-related questions.
Check out the transcript and video from our conversation below, and be sure to subscribe to get email alerts for the latest new episodes.
Kaitlyn: Could you just give us a quick introduction of what exactly a service mesh is and how it relates to common connectivity challenges?
Scott: The idea behind a service mesh is that there's a trend toward applications decomposing into different services and components: one component that handles user registration, one that does lookups or reviews, and then one piece that pulls it all together from all these other back-end services.
And all these services need to communicate with one another, and they all want some common functionality. They want to authenticate the service traffic to ensure that each service is who it claims to be, and to prevent unauthorized traffic or something of that nature.
They want to control what kinds of API calls are being made between services. The idea behind a service mesh is to take all that functionality and build it into the underlying application platform so that application developers don't have to write that functionality themselves inside the applications they're developing. Then they can focus instead on the functionality and features that add value for their business.
Kaitlyn: One of the common questions we get is: how does this compare to an API gateway?
Scott: The best way to think about it – at least the way I think about it – is that an API gateway is more tailored to client traffic, which you might also see referred to as north-south traffic. That's traffic coming from consumers of an application or consumers of a set of services. It would typically be traffic originating outside your data center or cloud environment, though it might also be coming from another part of your data center or cloud environment, from another application. But in any case, it's traffic from outside the service mesh.
Then, once that traffic gets onto the service mesh, the service mesh is responsible for what we call service-to-service traffic or east-west traffic. That’s where individual components of a larger application begin to communicate with one another. We want to apply things like traffic routing and rate limiting and that sort of thing to that traffic.
So think of it as the API gateway handles north-south user or service traffic and the service mesh handles service-to-service or east-west traffic.
Kaitlyn: We talk about service mesh a lot like it’s this new thing, but the concept has been around for quite a while. Can you talk about how this compares to other networking tools that have come before it?
Scott: I have a networking background. I spent some time in the networking space in previous roles. We have Request for Comments documents (RFCs) in the networking world. They're common in the design of internet protocols and such. And there's one RFC called "The Twelve Networking Truths" (RFC 1925), one of a series of what we call joke RFCs.
And truth number 11 is that every old idea will be proposed again with a different name and a different presentation, regardless of whether it works. That's network engineers trying to be funny, but if you've been in the industry for any real length of time, you've seen these trends that come and go and then come back again. They might be slightly different. There might be some new technology attached to them. And there might be real value there. But it's still a reincarnation of something previous, or an evolution of something that came previously. That's how I view service mesh.
We had this technology trend about decoupling logical networking from physical networking: whatever your physical network topology looks like doesn't matter to the applications or workloads running on top of it. Kubernetes itself is a form of that decoupling, with overlay networks where pods talk to each other regardless of the structure of the underlying network topology.
Service mesh is an evolution of those technologies. And it doesn’t focus so much on the lower-level networking stuff. But again, it brings in application awareness and higher-level functionality.
This includes things like service-to-service authentication or mutual TLS (mTLS), which is encryption and authentication. Those kinds of things require the service mesh to be higher in the stack and more aware of the applications and services it’s communicating with, as opposed to some of these previous iterations, which were lower in the stack and just focused on an IP address or something of that nature. It’s an evolution of this decoupling of logical networking and bringing in more application awareness and more functionality.
Kaitlyn: Can you walk us through some of the benefits of service mesh specifically for the user?
Scott: The answer to that will depend on how you define user. Let me give you two perspectives on that.
First, you might define a user as someone consuming your application, like ordinary Joe out there using an app on his phone that you host. If you implement a service mesh on the backend, Joe won't see a whole lot of difference, to be honest. If you do everything correctly, he might see that your app is faster. He might receive fewer notifications that a data breach or security flaw occurred. He might see that your app is more reliable than some other apps. And all of that depends on a huge variety of factors.
On the other hand, you might define your user as an application developer or platform operator: somebody within your organization responsible for making sure your applications are available, work well, are performant and are secure. In that case, the benefit a service mesh brings is all of the things it can do without requiring you to build that functionality into the application itself.
For example, when we decouple an application into multiple services, aka the microservices-based approach, we might want to ensure that when one service communicates with another, it verifies the identity of that other service. We don't want somebody to accidentally or intentionally spin up a malicious application that says, "Hey, look, I'm service A," and then begins communicating and receiving that data. We want to ensure that service A is actually service A.
You could do that by building the functionality into the application, but then you have to rebuild that functionality repeatedly in every one of the services. Instead, we can consolidate it into the service mesh, and application developers and platform operators can take advantage of it over and over and over again while it stays consolidated in one place. That applies to all the various features and functions a service mesh can provide, whether it's service-to-service authentication, mTLS, rate limiting or traffic routing.
Kaitlyn: To dive in even a little bit deeper here to the benefits, can you talk about why service mesh versus any other approach to doing this?
Scott: Well, the interesting thing here is that the industry doesn't have any other solution for doing this so far. Service mesh is it. You might see different variations on a service mesh. You might see different technologies being used to implement the idea of a service mesh, or variations on it in terms of whether it supports only containers or containers and VMs, or which orchestration platforms it supports, whether that's Kubernetes alone or Kubernetes plus other platforms. But in the end, they're all a service mesh because they all do that same sort of thing.
They all provide that same sort of service-to-service traffic flow, traffic routing, traffic shaping, rate limiting, authentication, etc. It’s just a matter of which components they mix up to do that and which technologies they use. Whether they use the open source Envoy proxy or something else, in the end, it’s all service mesh. We haven’t really seen the industry create another alternative to service mesh, to be honest, at least not as far as I am aware.
What Functionalities Sit in a Service Mesh Layer?
Kaitlyn: That’s a fantastic way of putting it. And then just one more question for you before we dive in and see this hands-on. And you’ve talked about this in a few ways already, but can you give a few examples of what types of functionality sit in that service mesh layer?
Scott: We have things like what we call AuthN/AuthZ. Those are shorthand ways of saying authentication and authorization, as in service-to-service authentication and authorization. The difference between the two is that authentication is verifying that a service is who it says it is, while authorization is verifying that the service is allowed to do what it's trying to do, like calling a particular API or something of that nature.
We have rate limiting. We have things like advanced traffic routing: being able to route traffic at layer 4 and at a higher level, what we call layer 7, which would be based on a particular path, URL or API call, for example.
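As a rough sketch of what rate limiting looks like when it lives in the mesh rather than in application code: in Kuma, the open source mesh that Kong Mesh builds on, a rate limit is a declarative policy. The service names below (`frontend`, `backend`) are hypothetical:

```yaml
# Hypothetical Kuma RateLimit policy: cap requests from "frontend"
# to "backend" at 100 per minute. Enforced by the data-plane proxies,
# so neither service implements its own counting logic.
type: RateLimit
mesh: default
name: rate-limit-backend
sources:
  - match:
      kuma.io/service: frontend
destinations:
  - match:
      kuma.io/service: backend
conf:
  http:
    requests: 100
    interval: 60s
```

Because the policy matches on service identity rather than IP addresses, it keeps working as pods are rescheduled or scaled.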
When we bring in something like mTLS, we get encryption, encrypting the wire traffic to be more secure. We also get that authentication mechanism, where a service can say, "Hey, you claim to be service A, and the certificate you're using for mTLS says you're service A. OK, I can trust that the identity is correct, and you're allowed to connect and communicate." So we have all these kinds of features.
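To make that less abstract, here's roughly how mTLS is switched on in Kuma, the open source mesh underlying Kong Mesh: it's enabled mesh-wide on the Mesh resource, and the mesh's certificate authority then issues the identity certificates services use to prove who they are. This is a sketch, not a complete installation:

```yaml
# Sketch of a Kuma Mesh resource with mTLS enabled using the
# built-in certificate authority. Once enabled, the sidecar proxies
# encrypt service-to-service traffic and verify peer identities;
# the applications themselves are unchanged.
type: Mesh
name: default
mtls:
  enabledBackend: ca-1
  backends:
    - name: ca-1
      type: builtin
```

The applications never see the certificates; the identity check Scott describes happens entirely in the proxies.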
We also gain some advanced visibility or observability of what’s going on in your environment. We can expose additional metrics about the traffic or insert tracing.
We can do limited forms of what you call chaos engineering, where we can inject errors into the traffic for application developers to test how their application behaves or how it gracefully degrades in the face of errors or outages. So there’s a lot of functionality that you can see that’s baked into a service mesh.
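As a hedged illustration of that fault injection, Kuma (the mesh Kong Mesh builds on) exposes it as a policy. The service names here are hypothetical; this sketch would fail a fraction of HTTP requests from one service to another so developers can observe how the caller degrades:

```yaml
# Hypothetical Kuma FaultInjection policy: abort 20% of requests
# from "frontend" to "backend" with an HTTP 503, and delay another
# 10% by five seconds, to test graceful degradation.
type: FaultInjection
mesh: default
name: inject-errors-backend
sources:
  - match:
      kuma.io/service: frontend
destinations:
  - match:
      kuma.io/service: backend
      kuma.io/protocol: http
conf:
  abort:
    httpStatus: 503
    percentage: 20
  delay:
    value: 5s
    percentage: 10
```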
Kaitlyn: Yeah, that’s super helpful. As you said, there’s just so much that can go into that service mesh layer. It’s kind of nice to see it laid out like that.
In the demo, Scott showed off one of the things you can do with service mesh: traffic permissions, which is the ability to control who or what is allowed to communicate with other services. This is a more fine-grained way of enforcing access control within your applications than relying on network address ranges or traditional firewall technology.
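For readers curious what a traffic permission looks like in practice, here's a sketch of the policy shape in Kuma, which Kong Mesh builds on (the service names are hypothetical). With mTLS enabled, traffic between services is denied unless a permission like this allows it:

```yaml
# Hypothetical Kuma TrafficPermission policy: only "frontend"
# may send traffic to "backend". Matching is based on the mTLS
# service identity, not on network addresses.
type: TrafficPermission
mesh: default
name: allow-frontend-to-backend
sources:
  - match:
      kuma.io/service: frontend
destinations:
  - match:
      kuma.io/service: backend
```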
I hope you’ll join us again on December 27 for our next Kongcast episode with Jason Yee from Gremlin called “Embracing Failure with Chaos Engineering.”
Until then, be sure to subscribe to Kongcast to get episodes sent to your inbox (and a chance to win cool SWAG)!