What is Service Mesh Architecture?
A service mesh provides a consistent, decentralized mechanism for managing communication between multiple services within a system. It can be used to implement features such as encryption, logging, tracing, and load balancing, thereby improving security, reliability, and observability. If you’re new to the service mesh pattern, head over to An introduction to Service Mesh to find out more. In this article, we’ll dive deeper into the architecture of a service mesh and how it can be used in practice.
Components of a Service Mesh Architecture
A textbook service mesh pattern consists of two main components: a data plane and a control plane. The data plane handles the actual forwarding of traffic between services, whereas the control plane is in charge of configuring and coordinating the data plane proxies, either in response to policy changes or automatically. Let’s look at each in more detail.
What is the Data Plane?
In a service mesh architecture, the data plane is a network of proxies deployed alongside each instance of a service that needs to communicate with other services in the system. All network and API calls to and from such services go through the data plane proxy, which can also handle authentication and authorization, encryption, rate limiting, load balancing, and service discovery, and increase observability through logging and distributed tracing.
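To make the idea concrete, here is a minimal Python sketch of how a data plane component can transparently apply cross-cutting features such as retries, rate limiting, and logging around calls to an upstream service. This is an illustration only, with invented names — real meshes use dedicated network proxies such as Envoy, not in-process wrappers:

```python
import time

class SidecarProxy:
    """Toy data plane proxy: wraps calls to an upstream service and
    transparently applies cross-cutting features (rate limiting,
    retries, logging) so the application code never sees them."""

    def __init__(self, upstream, max_calls_per_sec=100, retries=2):
        self.upstream = upstream          # callable standing in for the remote service
        self.max_calls = max_calls_per_sec
        self.retries = retries
        self.log = []                     # stands in for access logs / tracing
        self._window_start = time.monotonic()
        self._calls_in_window = 0

    def call(self, request):
        # Rate limiting: reset the one-second window, then count this call.
        now = time.monotonic()
        if now - self._window_start >= 1.0:
            self._window_start, self._calls_in_window = now, 0
        if self._calls_in_window >= self.max_calls:
            raise RuntimeError("rate limit exceeded")
        self._calls_in_window += 1

        # Retries: transient failures are absorbed by the proxy,
        # so the caller only sees the final outcome.
        last_err = None
        for attempt in range(self.retries + 1):
            try:
                response = self.upstream(request)
                self.log.append(("ok", request, attempt))
                return response
            except ConnectionError as err:
                last_err = err
                self.log.append(("retry", request, attempt))
        raise last_err
```

Because the proxy owns these behaviors, the service itself stays free of networking logic — which is exactly the decoupling described above.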
A service mesh decouples the application logic from the network communication logic, with the effect that application programmers do not need to worry about the practicalities of communicating with the wider network. Instead, the service only needs to know about its local proxy (data plane). In a microservice system where separate teams are developing services in different languages, a service mesh allows network communication and associated features to be implemented consistently without duplicating effort across teams.
What is the Control Plane?
As we’ve seen above, the service mesh architecture requires a proxy for each instance of a service. In microservice-based systems comprising dozens or hundreds of services, many of which are deployed with replicas, introducing a service mesh means introducing as many proxies as there are instances in the system. This is where the control plane comes in.
The control plane exposes an interface for a human user to configure the behavior of the data plane proxies using policies and makes that configuration available to the proxies via another API. (You may hear the user interface referred to as the management plane, but it is simply a view onto the configuration.) Each data plane proxy must connect to the control plane in order to register itself and receive configuration details.
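As a rough illustration of this registration-and-push flow, the following Python sketch models a control plane that holds the current policy and distributes it to every registered proxy. The API names here are invented for the example, not any particular product’s interface:

```python
class Proxy:
    """Toy data plane proxy that simply stores whatever
    configuration the control plane pushes to it."""

    def __init__(self):
        self.config = {}

    def configure(self, policy):
        self.config = dict(policy)


class ControlPlane:
    """Toy control plane: proxies register themselves on startup,
    and any policy change is pushed to every connected proxy."""

    def __init__(self):
        self.proxies = []
        self.policy = {}

    def register(self, proxy):
        self.proxies.append(proxy)
        proxy.configure(self.policy)      # late joiners receive the current policy

    def apply_policy(self, **changes):
        self.policy.update(changes)
        for proxy in self.proxies:        # push the new policy mesh-wide
            proxy.configure(self.policy)
```

Note that a proxy registered after a policy change still receives the current configuration — the control plane is the single source of truth, regardless of when each proxy joins.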
What are Sidecar Proxies?
Because every call to a service goes via a proxy, the service mesh architecture adds an extra hop to every call. In order to minimize the additional latency, the proxy needs to be run on the same machine (virtual or physical) or in the same pod (in the case of containers) as the service for which it is proxying, so that they can communicate efficiently. This model is known as a sidecar deployment, hence the name “sidecar proxy”.
Service Mesh in Action
Although the service mesh architecture is not limited to microservice-based systems, such systems provide a good example of how a service mesh is used in practice. Let’s take the example of an online retailer that has adopted a microservice architecture covering functions such as inventory management, a shopping cart, payment processing, and user accounts. More services are in development, and the company has moved to a cloud-hosted infrastructure that will support its growth.
Using a service mesh to manage communications between the above services means the platform team can apply mutual TLS encryption to all traffic within the system. Encryption had previously been implemented for communication between some of the services but was missing from the inventory management service. Adding TLS support to that service would require the development team to re-implement the logic in C++, and due to other priorities, the work has been delayed so far. By introducing a sidecar proxy alongside each service instance and provisioning the proxies with certificates from the same Certificate Authority, the service mesh pattern normalizes all inter-service traffic and gives the platform team a way to implement TLS, and even mutual TLS, encryption between services with minimal effort and no involvement from the development teams.
Now that the online retailer has moved to cloud infrastructure, they also want to be able to scale particular services when demand increases. An important role of a service mesh architecture is to provide “service discovery”: as new instances are brought online, their associated sidecar proxies register themselves with the control plane, which updates its configuration and exposes the new sidecar proxies to the ones already deployed. Because the control plane keeps track of all sidecar proxies in the system at any given time, it has a holistic view of the health of services and traffic, and can play the role of a “circuit breaker”, making routing and scaling decisions reactively and without human intervention.
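A toy Python registry can illustrate how service discovery, round-robin load balancing, and a simple circuit breaker fit together. The class, addresses, and failure threshold are illustrative assumptions, not any particular mesh’s API:

```python
class ServiceRegistry:
    """Toy service discovery: the control plane tracks live instances,
    callers pick one with round-robin load balancing, and instances
    with too many consecutive failures are taken out of rotation
    (a very simplified circuit breaker)."""

    def __init__(self, failure_threshold=3):
        self.instances = {}   # service name -> list of addresses
        self.failures = {}    # address -> consecutive failure count
        self.threshold = failure_threshold
        self._rr = {}         # service name -> round-robin counter

    def register(self, service, address):
        self.instances.setdefault(service, []).append(address)
        self.failures[address] = 0

    def healthy(self, service):
        return [a for a in self.instances.get(service, [])
                if self.failures[a] < self.threshold]

    def pick(self, service):
        pool = self.healthy(service)
        if not pool:
            raise RuntimeError(f"circuit open: no healthy instances of {service}")
        i = self._rr.get(service, 0)
        self._rr[service] = i + 1
        return pool[i % len(pool)]   # round-robin over healthy instances only

    def report_failure(self, address):
        self.failures[address] += 1
```

When a new replica registers, it immediately joins the pool that other proxies route to; when one fails repeatedly, traffic flows around it without any human intervention — the behavior described above.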
The development teams also need more insight into the behavior of the system to help them debug issues quickly. Because the platform team enabled tracing in the system via the control plane, each data plane proxy sends its traffic metrics to a separate log store, which can be accessed by the development teams. Every internal or external API call or database query made within the system can be tracked and browsed later for debugging, quality control, or other data-driven purposes. Once again, no particular effort was required from the development teams for this policy to take effect system-wide, only a configuration push from the control plane to all of its connected data planes.
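The core of distributed tracing is that every hop records a span carrying a shared trace ID, so a single request can be followed across services. Here is a hedged Python sketch of that idea; the helper, store, and service names are hypothetical:

```python
import time
import uuid

TRACE_STORE = []   # stands in for the separate, shared log/trace store

def traced_call(service, operation, fn, trace_id=None):
    """Toy distributed tracing: each hop records a span (trace id,
    service, operation, duration) and propagates the trace id to
    any downstream calls made inside fn."""
    trace_id = trace_id or uuid.uuid4().hex   # first hop mints the trace id
    start = time.monotonic()
    try:
        return fn(trace_id)                   # downstream calls reuse the trace id
    finally:
        TRACE_STORE.append({
            "trace_id": trace_id,
            "service": service,
            "operation": operation,
            "duration_ms": (time.monotonic() - start) * 1000,
        })

# Two services in one trace: the cart service calls inventory downstream.
def inventory(trace_id):
    return traced_call("inventory", "check_stock", lambda tid: 5, trace_id)

result = traced_call("cart", "add_item", inventory)
```

After this runs, TRACE_STORE holds one span per service, both sharing a single trace ID — exactly the cross-service view the development teams browse in the log store.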
Implementing a service mesh to manage communication within a system can speed up the deployment of microservices by providing a consistent approach that handles key networking features. The service mesh architecture is ideal for systems with a large number of services as it decouples networking concerns from application code and allows policies to be applied from a central source of truth either universally or selectively based on specified criteria. To find out more about setting up your own service mesh, have a look at Implementing a Service Mesh.
What is a Service Mesh Architecture?
The architecture of a service mesh consists of network proxies deployed alongside each replica of a service and a control plane to configure their behavior. All proxies communicate with each other and with the control plane, whereas each instance of a service only communicates via its local, transparent proxy.
Is Kong a Service Mesh?
Kong Mesh is a service mesh built on top of CNCF’s Kuma (for the control plane) and Envoy (for the data plane). Kong Mesh can be used with services hosted on bare metal, VMs and containers in both multi-cloud and multi-cluster deployments. Native integration with Kong API Gateway and Kong Enterprise allows you to manage internal and edge traffic seamlessly.
What are the functions of a Service Mesh?
The primary function of a service mesh is to manage communication between services within a system. Proxies are deployed as sidecars to each instance of a service and intercept all calls to and from the service.
In addition, a service mesh can implement various networking features, such as enforcing mutual TLS encryption, service discovery, health checks and circuit breakers, load balancing, logging and tracing.
What components make up a Service Mesh?
A service mesh is made up of a data plane and a control plane. The data plane consists of network proxies which are typically deployed alongside each instance of a service. The control plane coordinates the behavior of the proxies and exposes an API that allows users to configure the service mesh.
Want to learn more?
Request a demo to talk to our experts, get answers to your questions, and explore your needs.