Control Plane vs. Data Plane - What's the Difference?

If you're diving into Kubernetes or you're getting started with a service mesh, you have likely encountered the terms "control plane" and "data plane." What do these terms mean? Do they refer to the same things in Kubernetes as they do in a service mesh? If you've encountered difficulty searching for a straight answer, look no further. Ultimately, the terms "control plane" and "data plane" are all about the separation of concerns—that is, a clear separation of responsibilities within a system. The terms were originally used in a networking context, but more recently have come to be used within the infrastructure and platform service spaces.

Networking Back In The Day

If we were to start at the beginning, we would consider network routing. In a router (hardware or software), we would have rules and policies about how to handle network packets. What kinds of packets should get routed to specific host machines? What kinds of packets should get rejected? How do we determine which packets go to which host? What should the router do if packets get dropped?

These policies—along with the router's facilities for storing and maintaining them—make up the network's control plane. Generally speaking, the control plane is concerned with establishing policy. This is true in the context of networking, and, as we'll unpack below, in the contexts of Kubernetes and service mesh as well.

Meanwhile, the data plane is everything else in the network architecture that carries out those policies. Packet switching, for example, evaluates packet addresses against the network policies and then does the work of getting those packets to the right destination. This work—the work of the data plane—is concerned with carrying out policy.

That gives us our general “lowest common denominator” understanding of these two terms, regardless of the context: The control plane is everything involved with establishing and enforcing policy, while the data plane is everything involved with carrying out that policy.

Now, let’s take a look at what that means in the contexts of Kubernetes and service mesh.

Unpacking the Control Plane

In Kubernetes

Kubernetes is a system for orchestrating containers. At its simplest (think: freshman CS project), a Kubernetes deployment would consist of a single cluster. Inside that cluster is a single node (worker machine), which contains a single pod, which runs a single container. That’s a lot of levels for a simple system.

A more complex system (think: enterprise SaaS with millions of daily active users) might have a dozen clusters, with each cluster in charge of hundreds of nodes spread out across the globe. Each node runs multiple replicas of pods, with each pod itself running several containers.
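That hierarchy is easiest to see in a Deployment manifest. The sketch below is illustrative (the `web-frontend` name and image are hypothetical): it asks Kubernetes for three replicas of a single-container pod, and the scheduler decides which nodes they land on.

```shell
# Hypothetical Deployment: three replicas of a pod, each running one container.
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-frontend
spec:
  replicas: 3                      # three pod replicas, spread across nodes
  selector:
    matchLabels:
      app: web-frontend
  template:
    metadata:
      labels:
        app: web-frontend
    spec:
      containers:
        - name: frontend           # a single container inside each pod
          image: example.com/frontend:1.0
EOF
```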

Can you imagine manually observing all of the pods and nodes in a system this complex? You would need to watch for a pod failing or a container stopping, then react by spinning up a replica pod to replace it. You would need to re-route network requests destined for the failing pod so that they reached the replacement instead. If the pods in a node started to hit capacity, you might need to spin up a new pod to handle the increased load.

If you were to do all of these things, you would be doing the work of the Kubernetes control plane. In Kubernetes, the control plane is the set of components that “make global decisions about the cluster (for example, scheduling), as well as detecting and responding to cluster events (for example, starting up a new pod when a deployment’s replicas field is unsatisfied).”


(Figure: Kubernetes Components; original source: Kubernetes documentation)

Components of the Kubernetes control plane include the API server, the etcd key-value store, the scheduler, and various controllers. The part of the Kubernetes control plane users interact with most directly is the API server. Every time you run a command with kubectl, you're talking to Kubernetes via the API server, either to retrieve the current state of your cluster or to apply configurations (think: policy) to your system.
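For instance, each of the following kubectl invocations is really an HTTPS call to the API server (the deployment name is hypothetical):

```shell
# Read the current state of the cluster (a GET against the API server)
kubectl get pods

# Declare desired state; the API server validates it and records it in etcd
kubectl apply -f deployment.yaml

# Change policy: ask for five replicas and let the controllers converge on it
kubectl scale deployment web-frontend --replicas=5
```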

In a Service Mesh

Within the service mesh context, the control plane also involves “establishing and enforcing policy.” How this plays out, though, is quite different from Kubernetes.

When you have several disparate services that all make up an application, communication between these services—often not located geographically near each other—requires managing some sort of network. When an application consists of not several, but instead several hundred, disparate services, that task of network management is a beast. We need to consider connectivity, security, service discovery, authorization, and more.

A service mesh abstracts away all of that network complexity by deploying proxies as sidecars: partner containers that run alongside each service and share its resources. Every single replica of every single service has its own sidecar proxy, which handles its outgoing and incoming requests.

Let’s say an application has Service A, which needs to send a gRPC request to Service B. In a service mesh, Service A would simply tell its sidecar proxy to take this gRPC request and get it to Service B, without caring (or knowing) anything about where Service B is in the network or in the world. The sidecar proxy, which has been configured to know where to find Service B and how to talk to it, simply goes and delivers the request to Service B’s sidecar proxy, which accepts it and then translates it for Service B’s consumption.
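In practice, Service A usually needs no mesh-aware code at all. A common pattern (sketched below; the service names and port are made up) is for the mesh to transparently intercept the pod's traffic, so the application just addresses Service B by name:

```shell
# Inside Service A's container: address Service B as if it were local...
curl http://service-b:8080/status

# ...while traffic-interception rules installed by the mesh redirect the
# request through Service A's sidecar proxy, which discovers Service B,
# secures the connection (e.g. with mutual TLS), and hands the request
# to Service B's sidecar.
```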

However, all of that complex configuration (think: policy) needed to be established somehow. This is where the service mesh control plane comes in. The control plane functions as a single source of truth with the most up-to-date configurations regarding all of the pieces in the service mesh. It’s the control plane’s job to get these configurations to the proxies, and it’s the proxy/data plane’s job to consume and execute accordingly.
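What does such a configuration look like? The sketch below loosely follows the shape of a Kuma traffic-permission policy (field names and service tags vary by mesh and version, so treat it as illustrative only). Once applied, the control plane pushes the policy out to every affected sidecar:

```shell
# Illustrative mesh policy: allow service-a to call service-b.
kumactl apply -f - <<'EOF'
type: TrafficPermission
name: allow-a-to-b
mesh: default
sources:
  - match:
      kuma.io/service: service-a
destinations:
  - match:
      kuma.io/service: service-b
EOF
```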

Whether we’re working in Kubernetes or service meshes, what the control plane gives us is the ability to establish configurations and guarantee consistency on a massive scale.

And then, there’s the Data Plane

In contrast to the control plane, the data plane in each of these systems is quite simple to understand. With the control plane taking care of establishing policy, the data plane is only concerned with carrying out that policy.

Within the Kubernetes context, the worker nodes (with their pods and containers) make up what we’ve defined as the data plane. Every node runs an agent called the kubelet, which is in charge of communicating the desired state to the container runtime that ultimately manages the containers. The kubelet gets its specs and configurations from the API server (the control plane). Each node also runs kube-proxy, which manages network communication to its pods from inside and outside the cluster. Nodes, along with all of their components, carry out the configuration established by the control plane.

In a service mesh, we talked about sidecar proxies, which facilitate the communication between services in that mesh. In essence, sidecar proxies carry out the configuration established by the control plane; they make up the data plane.

What does this mean for you?

Before distributed infrastructure and platform services became ubiquitous, you may have found yourself in charge of setting up a server, locking down certain ports while opening up others, deploying an agent for application monitoring, and spinning up a load balancer for distributing traffic. At the time, all of those pieces—the server, the monitoring agent, the load balancer, etc.—made up the data plane. Then what—no, wait—who was the control plane? You were. Back in the day, you were in charge of establishing policies and configurations, and those pieces you set up and touched carried out your configuration.

Now, with applications that scale toward immense complexity, you can rest at ease because you no longer need to be the control plane. Your entire architecture can be configured with ease via the control plane. That might be the Kubernetes control plane, or if you’re working with a service mesh, a software package like Kuma. By delegating this herculean task to the control plane, you eliminate the risk of forgotten steps or inconsistent deployments. What’s more, you can be rigid and opinionated about your architecture—assured that the control plane will enforce policy—while still enjoying agility and flexibility in your application, the data plane.

Conclusion

The right solution should use this separation of concerns to mitigate the complexity of managing scalable application configurations. Along the way, it simplifies the architecture, reducing the cost of designing complex microservices. In short, it takes complex, distributed systems and makes them easy to deploy, easy to manage, and low risk to use.