Defining Service Mesh
This post will be the first in a series explaining the architecture and use of Kong’s service mesh deployment. Having a good, shared definition of the term “service mesh” is important as we dig deeper into Kong’s service mesh capabilities specifically, and this post lays that foundation. We’ll be publishing more blog posts on Kong’s service mesh features soon – stay tuned.
What is a Service Mesh?
So, what is a service mesh? The term means different things to different people. Let’s first clarify that service mesh is not a product or a feature – it’s a pattern for inter-service communication. Service mesh isn’t a binary characteristic that is either “present” or “absent” – you can implement it incrementally, and you probably should. There is no precise threshold at which a deployment stops being “not a service mesh” and becomes one. Here at Kong, we think of the term “service mesh” in a specific way.
Service mesh is a way of solving security, reliability and observability problems that occur when multiple services communicate with each other within a given computing environment. It does this by routing inter-service communications through local proxies, without requiring changes to the applications themselves.
Let’s break apart that statement to add clarity:
- …a way of solving security, reliability and observability problems… It is important to remember that we are going to the effort of implementing a service mesh because we have specific security, reliability and observability problems that we need to solve. Note that we haven’t yet enumerated which types of security, reliability and observability problems a service mesh can help solve. A service mesh is not the only way to solve these sorts of problems – though it can be the best way for many situations.
- …that occur when multiple services… The type of problems we are solving are typically encountered when there are a multitude of services. These can include monoliths, mini-services, microservices or serverless functions that are communicating with one another. If we don’t have a multitude of services or those services aren’t communicating with one another, then a service mesh is not going to help solve the types of problems we have.
- …within a given computing environment… Deploying and managing a service mesh requires that a given company, department or engineering team have the authority and access necessary to deploy and manage local proxy code on all the hosts that are running services in the mesh. The “mesh managers” need access to configure certain aspects of the hosts themselves. If your applications communicate only with third-party APIs (and not with each other) or with applications whose hosts are outside your sphere of control, you won’t be able to install, configure and benefit from the elements necessary to bring all those APIs and remote applications into a helpful service mesh.
- …inter-service communications… Service mesh solves problems that arise when services communicate with one another. There is a whole separate class of problems related to the security, reliability and observability of the processes and communications that happen within a given service that service mesh doesn’t help to solve. However, one way to solve intra-service problems in a monolithic application is to refactor it into mini-services and microservices. Once you break apart a big application into multiple smaller services, you move problems from being intra-application to inter-application, and service mesh can then help to solve them.
- …through local proxies… In a service mesh, a proxy runs on the same host as each service in the mesh. These proxies act as “choke points” where the proxy can enforce security policies, enhance reliability (with circuit breakers, health checks, rate limiting, retries, load balancing, etc.) and collect telemetry, logs or tracing data. When we say “service mesh,” we mean this pattern of local proxies specifically – one proxy running alongside each service, rather than a centralized proxy tier.
- …without requiring changes to the applications. Theoretically, you could build all the functionality described above into each service separately. If you could change every service in your application, it’s possible that you could solve the same problems a service mesh does, without using any local proxies. However, this would introduce other problems: a different implementation for each service, no centralized way to enforce policies, and the need to update every service for even small changes to how they all communicate. This would be a larger effort than implementing a service mesh, and it would become more difficult as the services themselves changed and multiplied. Service mesh is popular because it makes applications more secure, reliable and observable, all without requiring changes to every service or coordination between the teams that build them.
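To make the “choke point” idea from the local-proxies bullet concrete, here is a minimal Python sketch of a sidecar-style proxy that adds retries, a simple circuit breaker and call metrics around an upstream service, without touching the service itself. Everything here is illustrative – `LocalProxy`, its thresholds and the flaky upstream are hypothetical stand-ins, not part of Kong or any real mesh implementation:

```python
class CircuitOpenError(Exception):
    """Raised when the proxy has marked the upstream as unhealthy."""


class LocalProxy:
    """Toy sidecar-style proxy: wraps calls to an upstream service,
    adding retries, a crude circuit breaker, and call metrics --
    all without changing the upstream service's own code."""

    def __init__(self, upstream, max_retries=2, failure_threshold=3):
        self.upstream = upstream          # callable standing in for the remote service
        self.max_retries = max_retries
        self.failure_threshold = failure_threshold
        self.consecutive_failures = 0
        self.metrics = {"requests": 0, "retries": 0, "failures": 0}

    def call(self, request):
        # Circuit breaker: stop hammering an upstream that keeps failing.
        if self.consecutive_failures >= self.failure_threshold:
            raise CircuitOpenError("upstream marked unhealthy")
        self.metrics["requests"] += 1
        for attempt in range(self.max_retries + 1):
            try:
                response = self.upstream(request)
                self.consecutive_failures = 0  # success resets the breaker
                return response
            except Exception:
                if attempt < self.max_retries:
                    self.metrics["retries"] += 1  # transparent retry
                else:
                    self.metrics["failures"] += 1
                    self.consecutive_failures += 1
                    raise


# A flaky upstream that fails once, then succeeds.
calls = {"n": 0}

def flaky_service(request):
    calls["n"] += 1
    if calls["n"] == 1:
        raise RuntimeError("transient error")
    return f"ok: {request}"


proxy = LocalProxy(flaky_service)
result = proxy.call("GET /orders")  # the retry happens inside the proxy
```

Because the retry, breaker and metrics live in the proxy, `flaky_service` needs no changes – which is the whole point of the pattern, and why the same policies can be applied uniformly across services written by different teams.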
Next Up: How to Deploy
Now that we have a shared definition for the term “service mesh,” we can discuss how to incrementally deploy Kong’s service mesh capabilities to improve the observability, reliability and security of your application. This is the topic of the next post, Steps to Deploying Kong as a Service Mesh.
Want to learn more about service mesh?
Find us at KubeCon + CloudNativeCon North America next week at Booth S33!