Planning and Performing a Service Mesh Implementation

Service mesh is a pattern for implementing communication between services in a microservice-based system. A service mesh consists of a network of proxies (the data plane) deployed alongside each service to forward traffic to and from the other services in the system, and a control plane that manages the mesh configuration and coordinates the proxies’ behavior. See What is a Service Mesh for an introduction to the service mesh pattern, and check out Understanding Service Mesh Architecture to learn more about the architecture.

In a service mesh all inter-service communication is routed through proxies, which decouples the network communication functions from each service’s application code. As well as freeing up development teams to work on their service’s core features, this allows various network functions to be implemented and managed consistently across the system.

A service mesh can be used to enforce mutual TLS on all traffic between services, as well as to handle service discovery, health checks, circuit breaking, load balancing, and rate limiting, and to implement the logging and tracing that facilitate debugging. To learn more about the benefits of a service mesh, have a look at Why Service Mesh.

Is a Service Mesh Right for You?

Before considering the implementation of a service mesh, it’s worth being clear that the pattern is only suitable if your system includes services that need to communicate with each other over a network, such as multiple loosely coupled microservices deployed to a cluster of machines. 

Because a service mesh requires a proxy (the data plane component) to be deployed alongside every instance of each service, you must have access to the host machines or containers running the services that will be connected by the mesh.

If services need to communicate with components outside of the mesh (that is, components without a provisioned sidecar proxy of their own), forwarding and normalizing that traffic may require special care. A tool such as an API gateway can help here.

As part of proxying calls between services, a service mesh can provide a set of related functions ranging from service discovery to distributed tracing. However, these features can also be implemented as part of the application code or by using other tools. How do you decide whether a service mesh will benefit your system?

A service mesh adds value by decoupling networking concerns from application code and applying functionality consistently and reliably. For systems made up of numerous services written in different languages, implementing these functions as part of each application can involve considerable duplication of effort.

On the other hand, as a relatively new technology, service mesh involves an initial learning curve and needs to be maintained over the long term like any other platform. If your organization has a dedicated platform or SRE team, using a service mesh will likely save everyone some effort in the long run. For smaller development teams, however, a service mesh can be more overhead than it’s worth. Also bear in mind the additional computing resources required to support the control plane and the additional proxies.

Fortunately, whether or not to adopt the service mesh pattern isn’t necessarily a decision that has to be made early in the development process. As your system grows in size and complexity, in terms of the number of services, the range of languages, and the needs of your distributed infrastructure, the benefits a service mesh provides may come to outweigh the cost of managing it. At that point, you can start by implementing a service mesh on a small number of services and expand from there.

Choosing a Deployment Model

Service mesh is often mentioned in the same breath as Kubernetes, and while the pattern is well suited to containerized deployments, it can also be applied to other deployment models.

The simplest approach is to apply a service mesh to a group of services deployed to a single cluster of physical or virtual machines, either directly or using containers. For systems deployed across multiple clusters, you can set up a service mesh in each cluster and coordinate them via a central control plane.

Another option is to create multiple service meshes within a single cluster. This is useful if separate systems managed by different parts of an organization are deployed to the same infrastructure, perhaps communicating with each other via an API gateway.

Choosing a Service Mesh Platform

When choosing a service mesh platform, it is important to consider your system’s current and future deployment model. Some service mesh implementations are limited to particular container orchestration platforms and cannot be run on VMs or bare metal infrastructure. Not all options support multi-cluster or multi-mesh deployments, which may constrain your growth in the future.

In addition to deployment models, performance is a key consideration. A service mesh inevitably adds extra hops to each call within the system, but if the platform noticeably slows down request and response times, it can undermine the benefits of improved reliability and consistency.

Kong Mesh is a service mesh platform that is compatible with containerized deployments, VMs, and bare metal infrastructure, whether running as a single cluster or across multiple clusters. Built on top of Kuma and Envoy for powerful configuration combined with a lightweight data plane, Kong Mesh provides a simple turnkey installation and the flexibility to adapt as your system evolves.

Policies are configured using the command line tool and can be used to implement mutual TLS, health checks, circuit breakers, load balancing, fault injection, and traffic logging and tracing. Mutual TLS can be backed by a built-in or user-provided certificate authority, or by a root certificate and key stored in a third-party HashiCorp Vault server.
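Because Kong Mesh is built on Kuma, its policies follow Kuma’s resource format. As an illustrative sketch (the mesh and backend names here are placeholders), enabling mutual TLS with the built-in certificate authority might look like this:

```yaml
# Mesh resource enabling mutual TLS with a built-in CA backend.
# In a universal (VM/bare metal) deployment this would be applied
# with the command line tool, e.g. kumactl apply -f mesh.yaml
type: Mesh
name: default
mtls:
  enabledBackend: ca-1
  backends:
    - name: ca-1
      type: builtin   # could instead be a user-provided CA or a Vault backend
```

Once mTLS is enabled, traffic between services is encrypted and authenticated by the proxies without any change to application code.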

Installing and Setting Up

Implementing a service mesh on your system involves installing the control plane components, injecting a data plane proxy as a sidecar alongside each service instance, and connecting those proxies to the control plane. You can then configure the service mesh’s behavior from the control plane using policies. The platform you choose determines the exact steps involved.

Kong Mesh simplifies the installation and setup process by bundling all the control plane components and the data plane executable so you don’t have to deploy them individually. On installation, it creates a default mesh for the data plane proxies to connect to. 

When installing on Kubernetes, Kong Mesh automatically creates the data plane entities, whereas on VMs and bare metal deployments the proxy is manually installed alongside each service and registered with the control plane using a YAML definition file.
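For the VM and bare metal case, the YAML definition registers each proxy as a data plane entity. A minimal Kuma-style sketch, with the address, ports, and service name purely illustrative, might look like this:

```yaml
# Dataplane resource registering one proxy with the control plane.
type: Dataplane
mesh: default
name: backend-1
networking:
  address: 192.168.0.2   # address of the host running this service instance
  inbound:
    - port: 9000         # port the sidecar proxy listens on
      servicePort: 8000  # port the service itself listens on
      tags:
        kuma.io/service: backend   # logical service name used by policies
```

The proxy process is then started on the host and pointed at both the control plane address and this definition file.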

The behavior of the service mesh is configured using policies. Kong Mesh provides a command line tool, kumactl, for VM and bare metal deployments, and integrates with kubectl for Kubernetes deployments.
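For example, a simple traffic permission policy could allow any service in the mesh to call any other; the sketch below uses Kuma’s universal resource format with wildcard service tags, and the file name is a placeholder:

```yaml
# TrafficPermission policy allowing all services to communicate.
# VM / bare metal: kumactl apply -f traffic-permission.yaml
# Kubernetes:      the equivalent resource is applied with kubectl apply -f
type: TrafficPermission
name: allow-all
mesh: default
sources:
  - match:
      kuma.io/service: '*'
destinations:
  - match:
      kuma.io/service: '*'
```

Restricting the source and destination tags to specific service names is how you tighten this into a zero-trust configuration.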

To learn more about implementing a service mesh with Kong Mesh, head over to the documentation.


Post-Production Testing

Once you’ve got your service mesh up and running, it’s time to monitor the system’s health. Kong Mesh includes a REST API and GUI that you can use to review the details of your mesh, including details of the proxies and traffic routes.
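As a sketch of what that inspection can look like from the command line (assuming the Kuma-based CLI and the control plane’s default API port on localhost):

```shell
# List the data plane proxies currently connected to the mesh
kumactl inspect dataplanes

# The same details are exposed over the control plane's REST API
curl http://localhost:5681/meshes/default/dataplanes
```

The GUI presents the same mesh, proxy, and traffic information in a browsable form.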


Adding a service mesh to a system allows network connectivity to be managed consistently, enables zero-trust security with mutual TLS, and improves the observability of your system. While a service mesh may not be necessary in the early days of a system, decoupling network functions from application code can be advantageous as complexity increases.

When choosing a service mesh, it is worth considering how your system and the infrastructure that supports it are likely to evolve over time. Choosing a service mesh platform that supports multiple deployment models and is simple to deploy will provide great flexibility in the future.


How do you implement a Service Mesh?

To implement a service mesh you need to deploy the control plane components to a machine in the cluster, inject the data plane proxy alongside each service replica, and configure their behavior using policies. In a multi-cluster or multizone deployment, a global control plane is used to coordinate across clusters.
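On Kubernetes, for instance, sidecar injection is typically enabled per namespace rather than per pod; a hedged sketch using the Kuma-style injection label (namespace name is illustrative, and older versions use an annotation instead):

```yaml
# Namespace opting all of its pods into automatic sidecar injection.
apiVersion: v1
kind: Namespace
metadata:
  name: demo
  labels:
    kuma.io/sidecar-injection: enabled
```

Any pod deployed to this namespace then receives a data plane proxy container automatically.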

What is required to set up a Service Mesh?

The service mesh pattern is only relevant to systems made up of multiple services that communicate over a network. You must have access to all machines or containers that host the services making up the system so that you can deploy the network proxy on them.

What services and tools are required to get started?

A service mesh requires a network proxy deployed alongside each instance of a service, together with a control plane to configure and coordinate the mesh. The control plane itself comprises multiple components: an API and command line tool for applying policies, a database for storing configuration details, APIs for distributing configuration and collecting metrics, a database for log and trace data, and a DNS resolver and ingress for managing traffic in multi-cluster deployments. Kong Mesh bundles these elements together for a simple installation and setup process.

Want to learn more?

Request a demo to talk to our experts, who can answer your questions and explore your needs.