What Is Service Mesh Architecture?

A service mesh provides a consistent, decentralized mechanism for managing communication between the services within a system. It can be used to implement features such as encryption, logging, tracing and load balancing, thereby improving security, reliability and observability. If you’re new to the service mesh pattern, head over to An Introduction to the Service Mesh to find out more. In this article, we’ll dive deeper into the architecture of a service mesh and how it is used in practice.

Components of a Service Mesh Architecture

A service mesh consists of two elements: the data plane and the control plane. As the names suggest, the data plane handles the actual forwarding of traffic, whereas the control plane provides the configuration and coordination. Let’s look at that in more detail.

What Is the Data Plane?

In the service mesh architecture, the data plane refers to network proxies that are deployed alongside each instance of a service that needs to communicate with the other services in the system. All calls to and from a service go through the proxy, which can also apply authentication and authorization, encryption, rate limiting and load balancing, handle service discovery, and implement logging and tracing.
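
To make the interception step concrete, here is a minimal sketch of a sidecar-style proxy in Python. It shows only the basic pattern of intercepting a call, logging it, attaching a policy header and forwarding it upstream; the port numbers, header name and service name are placeholders rather than details of any particular service mesh implementation.

```python
# Minimal sketch of a sidecar-style data plane proxy (illustrative only).
# The upstream address, ports and header names are placeholders.
import logging
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

UPSTREAM = "http://127.0.0.1:9000"  # hypothetical address of the local service

logging.basicConfig(level=logging.INFO)

class SidecarHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Observability: log every call that passes through the proxy.
        logging.info("proxying GET %s", self.path)
        request = urllib.request.Request(UPSTREAM + self.path)
        # Policy: attach an identity header before forwarding (placeholder name).
        request.add_header("X-Service-Identity", "cart-service")
        with urllib.request.urlopen(request) as upstream_response:
            body = upstream_response.read()
            status = upstream_response.status
        self.send_response(status)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Callers reach the proxy on port 8080; the real service listens on 9000.
    HTTPServer(("127.0.0.1", 8080), SidecarHandler).serve_forever()
```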

A service mesh decouples the application logic from the network communication logic, with the effect that application programmers do not need to worry about the practicalities of communicating with the wider network. Instead, the service only needs to know about its local proxy. In a microservice system where separate teams are developing services in different languages, a service mesh allows network communication and associated features to be implemented consistently without duplicating effort.
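
From the application's point of view, that decoupling keeps the calling code trivially simple. The snippet below is a hypothetical illustration, with the proxy port and path invented for the example: the service sends a plain HTTP request to its local sidecar and relies on the mesh for discovery, retries and encryption.

```python
# Hypothetical application code in a meshed service: it only knows about its
# local sidecar proxy, not the location or TLS set-up of other services.
import urllib.request

LOCAL_PROXY = "http://127.0.0.1:8080"  # placeholder sidecar address

def get_stock_level(item_id: str) -> bytes:
    # The path identifies the destination service; routing, load balancing and
    # encryption are handled by the proxy, not by the application.
    with urllib.request.urlopen(f"{LOCAL_PROXY}/stock/{item_id}") as response:
        return response.read()
```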

What Is the Control Plane?

As we’ve seen above, the service mesh architecture requires a proxy for each instance of a service. In microservice-based systems comprising dozens or even hundreds of services, each of which may be replicated multiple times according to performance and reliability requirements, that results in an equal number of proxies to manage. This is where the control plane comes in.

The control plane exposes an interface that lets a human operator configure the behavior of the data plane proxies using policies, and it makes that configuration available to the proxies via another API. (You may hear the user interface referred to as the management plane, but it is simply a view onto the configuration.) Each data plane proxy must connect to the control plane in order to register itself and receive configuration details.
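
As a rough sketch of that relationship, the Python below models a control plane as a store of operator-defined policies that each proxy receives when it registers. The policy fields and service names are invented for the example; a production mesh would distribute this configuration over a dedicated API such as Envoy's xDS.

```python
# Illustrative model of the control plane / data plane relationship.
# Policy fields and service names are placeholders, not a real mesh schema.
from dataclasses import dataclass, field

@dataclass
class MeshConfig:
    mtls_required: bool = True                       # policy set by the operator
    rate_limits: dict = field(default_factory=dict)  # requests/second per service

class ControlPlane:
    def __init__(self) -> None:
        self.config = MeshConfig(rate_limits={"payments": 100})
        self.registered: dict[str, str] = {}         # proxy id -> service name

    def register(self, proxy_id: str, service: str) -> MeshConfig:
        # Each data plane proxy registers itself and receives the current
        # configuration; later policy changes would be pushed the same way.
        self.registered[proxy_id] = service
        return self.config

if __name__ == "__main__":
    control_plane = ControlPlane()
    print(control_plane.register("payments-proxy-1", "payments"))
```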


What Are Sidecar Proxies?

Because every call to a service goes via a proxy, the service mesh architecture adds an extra hop to every call. To minimize the additional latency, the proxy runs on the same machine (virtual or physical) or in the same pod (in the case of containers) as the service it is proxying for, so the two can communicate over localhost. This model is known as a sidecar deployment, hence the name “sidecar proxy.”


Service Mesh in Action

Although the service mesh architecture is not limited to microservice-based systems, they provide a good example of how a service mesh is used in practice. Let’s take the example of an online retailer that has adopted a microservice architecture covering functions such as stock control, shopping cart management, payments and user accounts. More services are in development, and the company has moved to cloud-hosted infrastructure to allow it to grow faster.

Using a service mesh to manage communications between these services means the platform team can apply mutual TLS encryption to all traffic within the system. Encryption had previously been implemented for communication between some of the services but was missing from the stock control service. Adding it to that service would have required the development team to re-implement the encryption logic in C++, but due to other priorities, that work had not yet been done. By deploying a sidecar proxy alongside each service instance and provisioning the proxies with certificates from the same certificate authority, the platform team can encrypt all inter-service traffic with mutual TLS. The other development teams no longer have to maintain that part of their codebases.
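
As a hedged illustration of what those sidecars enforce, the snippet below builds the TLS contexts a proxy might use: both sides present a certificate issued by the mesh's certificate authority and reject peers that cannot do the same. The file names are placeholders for whatever material the platform team provisions.

```python
# Sketch of mutual TLS enforcement between sidecar proxies (illustrative).
# Certificate and key file names are placeholders provisioned by the platform team.
import ssl

def server_context() -> ssl.SSLContext:
    # The receiving proxy presents its own certificate and verifies the caller's.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(certfile="proxy.crt", keyfile="proxy.key")
    ctx.load_verify_locations(cafile="mesh-ca.crt")
    ctx.verify_mode = ssl.CERT_REQUIRED  # reject callers without a mesh certificate
    return ctx

def client_context() -> ssl.SSLContext:
    # The calling proxy also presents a certificate signed by the mesh CA.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.load_cert_chain(certfile="proxy.crt", keyfile="proxy.key")
    ctx.load_verify_locations(cafile="mesh-ca.crt")
    return ctx
```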

Now that the online retailer has moved to cloud infrastructure, it wants to be able to scale particular services as demand increases. The service mesh provides the service discovery function: as new instances are brought online, the proxy registers each instance with the control plane, which updates the configuration and makes it available to the other proxies. The platform team also uses the service mesh control plane to implement circuit breakers that prevent traffic from being routed to faulty instances.
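
The circuit breaker behavior can be pictured as a small piece of state the proxy keeps for each upstream instance: after a run of failures it stops sending traffic to that instance for a cool-down period. The thresholds below are illustrative defaults rather than values from any specific mesh.

```python
# Simplified circuit breaker state a proxy might keep per upstream instance.
# The failure threshold and cool-down period are illustrative values.
import time

class CircuitBreaker:
    def __init__(self, failure_threshold: int = 3, reset_after: float = 30.0):
        self.failure_threshold = failure_threshold
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None  # time the circuit was opened, if it is open

    def allow_request(self) -> bool:
        # While the circuit is open, skip the faulty instance entirely.
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                return False
            self.opened_at = None  # cool-down over; let traffic probe the instance
            self.failures = 0
        return True

    def record_failure(self) -> None:
        self.failures += 1
        if self.failures >= self.failure_threshold:
            self.opened_at = time.monotonic()  # open the circuit

    def record_success(self) -> None:
        self.failures = 0
```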

The development teams also need more insight into the behavior of the system to assist in debugging issues quickly. By enabling trace logging to a separate log store, they can access details of every single call made within the system and use the data to unpick problems faster.
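
Tracing of this kind usually comes down to the proxy propagating a correlation identifier and emitting one record per call. The sketch below is a simplified, assumed version of that behavior; the header name and record fields are illustrative rather than a standard format.

```python
# Simplified trace propagation and span logging (illustrative header and fields).
import json
import uuid

def traced_headers(incoming_headers: dict) -> dict:
    # Reuse the caller's trace id if one arrived, otherwise start a new trace.
    trace_id = incoming_headers.get("X-Trace-Id", str(uuid.uuid4()))
    return {"X-Trace-Id": trace_id}

def log_span(trace_id: str, service: str, path: str, duration_ms: float) -> str:
    # In practice this record would be shipped to the separate log/trace store.
    return json.dumps({
        "trace_id": trace_id,
        "service": service,
        "path": path,
        "duration_ms": duration_ms,
    })

if __name__ == "__main__":
    headers = traced_headers({})
    print(log_span(headers["X-Trace-Id"], "cart", "/checkout", 12.4))
```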

Conclusion

Implementing a service mesh to manage communication within a system can speed up deployment of microservices by providing a consistent approach that handles key networking features. The service mesh architecture is ideal for systems with a large number of services as it decouples networking concerns from application code and allows policies to be applied from a central source of truth, either universally or selectively based on specified criteria. To find out more about setting up your own service mesh, have a look at Implementing a Service Mesh.

FAQs

What is a service mesh architecture?

The architecture of a service mesh consists of network proxies deployed alongside each replica of a service and a control plane to configure their behavior. All proxies communicate with each other and with the control plane, whereas each instance of a service only communicates with its local proxy.

Is Kong a service mesh?

Kong Mesh is a service mesh built on top of CNCF’s Kuma (for the control plane) and Envoy (for the data plane). Kong Mesh can be used with services hosted on bare metal, VMs and containers in both multi-cloud and multi-cluster deployments. Native integration with Kong’s API gateway and Kong Konnect allows you to manage internal and edge traffic seamlessly.

What are the functions of a service mesh?

The primary function of a service mesh is to manage communication between services within a system. Proxies are deployed as sidecars to each instance of a service and intercept all calls to and from the service.

In addition, a service mesh can implement various networking features, such as enforcing mutual TLS encryption, service discovery, health checks and circuit breakers, load balancing, logging, and tracing.

What components make up a service mesh?

A service mesh is made up of a data plane and a control plane. The data plane consists of network proxies which are typically deployed alongside each instance of a service. The control plane coordinates the behavior of the proxies and exposes an API that allows users to configure the service mesh.

Want to learn more?

Request a demo to talk to our experts, who can answer your questions and help you explore your needs.