Kubernetes Architecture

Before you get started with Kubernetes, it’s helpful to understand the architecture of the container orchestration platform.

Architecture and Components of Kubernetes

Kubernetes, or K8s as it is sometimes known, is an open-source platform for deploying containerized applications in distributed systems, where computing resources are provided by multiple separate machines that are connected over a network to form a cluster. Having been designed for very large systems with automation as a key requirement, Kubernetes is able to continuously monitor deployments and address failures automatically.

With Kubernetes, the application you’re deploying is packaged up using containers. Containers incorporate everything that the application needs in order to run. This ensures the separation of concerns and dependencies between the application and the host infrastructure, meaning that a containerized application can be deployed to any host with minimal configuration. Containers are ideal for deploying microservices as they facilitate releasing, scaling and updating the individual services. For more about the uses and benefits of Kubernetes, have a read of What is Kubernetes?

Let’s look in more detail at the components that make up a Kubernetes cluster.

Kubernetes Architecture: A Diagram

[Diagram: a Kubernetes cluster, showing the control plane components and the worker nodes]

What’s in a Kubernetes Cluster?

A Kubernetes cluster consists of the worker nodes that run your containerized applications and the machines that host the control plane components. While the control plane can be installed on any machine in the cluster, it is typically run on dedicated machines, kept separate from the worker nodes that make up the data plane.

Control Plane

The control plane is the brains of the Kubernetes operation. It is responsible for deploying containers to worker nodes using pods, monitoring the health of nodes and pods, and addressing any failures.

The control plane is made up of multiple components, which can be installed on a single machine or distributed and replicated for high availability. 

  • kube-apiserver – The Kubernetes API is central to the control plane and allows cluster components and end-users to communicate. You can use this API to define cluster requirements, check the status of cluster elements, and interact with them. You can make calls to the API directly, interact with it via the Kubernetes command line interface (kubectl) or other tools, or use client libraries to write your own program calling this API. 
  • kube-controller-manager – The controller manager runs the various controller processes. Each controller is responsible for monitoring the status of a particular element of the cluster, such as nodes, ReplicaSets, or endpoints (which join services and pods). Whenever an element does not match its specification, the responsible controller works to bring it to the desired state.
  • kube-scheduler – The scheduler is responsible for assigning pods to nodes. New pods are assigned to nodes based on a number of factors, including resource requirements and other constraints that may have been applied.
  • etcd – The key-value store that holds all configuration data relating to the cluster and acts as the single source of truth.
  • cloud-controller-manager – The cloud controller manager runs controllers specific to the cloud environment in which your cluster is hosted. This allows your cluster to integrate with your cloud provider’s API, for example to provision load balancers or network routes. On-premises clusters do not require the cloud controller manager. 
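To see how these components cooperate, consider a minimal Deployment manifest. When it is submitted to the kube-apiserver (for example with `kubectl apply -f deployment.yaml`), the object is persisted in etcd, the controller manager creates the pods needed to reach the requested replica count, and the scheduler assigns each pod to a worker node. This is an illustrative sketch; the name and image below are placeholders:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                 # hypothetical name
spec:
  replicas: 3               # desired state: keep three pod replicas running
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25   # example container image
```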

Worker Nodes

A worker node is a physical or virtual machine running either Linux or Windows. This is where your containerized software runs. Kubernetes uses pods to hold containers. A pod usually holds only one container (although it can hold multiple containers if they are tightly coupled), so you can generally think of each individual pod as an instance of a particular microservice. Each node in the cluster can contain one or more pods.
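As a sketch, a single-container pod can be declared with a manifest like the following (the name, labels, and image are illustrative only):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: orders       # hypothetical pod name
  labels:
    app: orders      # label other objects can use to select this pod
spec:
  containers:
    - name: orders
      image: registry.example.com/orders:1.0   # placeholder image
      ports:
        - containerPort: 8080   # port the containerized service listens on
```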

The worker nodes require several Kubernetes components in order to receive instructions from the control plane and enable the application software to run:

  • kubelet – This is the agent that communicates with the control plane. It ensures that the containers described in pod specifications are running and healthy.
  • kube-proxy – This is the network proxy that maintains network rules on each node and forwards requests addressed to a service to one of the pods backing that service. 
  • Container runtime – The software that runs the containers. Common runtimes include containerd (which Docker also uses) and CRI-O, and support for new container runtimes can be added to Kubernetes through the Container Runtime Interface (CRI). 
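kube-proxy’s role can be illustrated with a hypothetical Service that selects pods by label; kube-proxy programs the rules on each node so that traffic sent to the service is forwarded to one of the matching pods. The names and ports below are assumptions for illustration:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: orders           # hypothetical service name
spec:
  selector:
    app: orders          # routes to pods carrying this label
  ports:
    - port: 80           # port clients use to reach the service
      targetPort: 8080   # port the container listens on
```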

A cluster must have at least one worker node, and at least three nodes are generally recommended to support high availability, but a production cluster will typically have many more. If you want to increase the capacity of your cluster, you provision more worker nodes.

Kubernetes Infrastructure

Kubernetes can be run in a public cloud, in a private cloud, on-premises, or in a combination of these, using either physical or virtual machines. Worker nodes can run either Linux or Windows, whereas the control plane components run only on Linux. A physical cluster can be split into multiple virtual clusters using namespaces.
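A namespace is itself declared as a Kubernetes object, and other resources are placed into it via their metadata. As a sketch (the name staging is a placeholder):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: staging   # hypothetical namespace name
```

Resources created with `metadata.namespace: staging` (or with `kubectl --namespace staging`) are then kept separate from resources in other namespaces.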

Kubernetes can also be run on a single computer such as a laptop for evaluation, development, and testing.

Deploying Kubernetes

One of the many advantages of Kubernetes is the level of flexibility it offers. Once you’ve set up a Kubernetes cluster it is up to you to decide how you want to deploy your application. You define the desired state of the cluster via the Kubernetes API including constraints and requirements, and the components work to make it so. To learn more about deploying your application with Kubernetes, see What is a Kubernetes Deployment?
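As an illustrative sketch of such constraints, a pod template can declare resource requirements and node-selection rules, and the scheduler will only place the pod on a node that can satisfy them. The label, image, and values below are assumptions for illustration:

```yaml
# Fragment of a pod spec adding scheduling constraints
spec:
  nodeSelector:
    disktype: ssd            # example node label the pod requires
  containers:
    - name: api              # hypothetical container
      image: registry.example.com/api:2.1
      resources:
        requests:
          cpu: "500m"        # request half a CPU core
          memory: 256Mi      # request 256 MiB of memory
```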

Conclusion

Kubernetes was designed for automated management of containers in a distributed system and supports high availability and dynamic scaling, making it ideal for deploying microservices and implementing DevOps practices.

FAQs

What is Kubernetes architecture?

Kubernetes architecture refers to the components required to deploy Kubernetes and the infrastructure on which they run. There are two main parts to the architecture: the worker nodes, which are the computers that run the containerized applications, and the control plane, which is responsible for deploying containers to worker nodes and managing failures.

What are the different components of Kubernetes architecture?

The control plane includes the API server, the scheduler, a key-value data store (etcd), and controllers. These components allow you to configure your cluster and ensure that the configuration is applied to the nodes. Worker nodes contain the pods that hold your containers. Each node also runs an agent (kubelet) that carries out instructions from the control plane, a network proxy (kube-proxy) that routes traffic to pods, and the container runtime.

What are clusters in Kubernetes?

A cluster is a group of either physical or virtual machines that are connected over a network so that workloads can be shared between them.

What are nodes in Kubernetes?

A node is an individual machine in a cluster. Worker nodes host the pods that hold containers. A single node can host multiple pods (and therefore containers), depending on the available memory and CPU resources.

Want to learn more?

Request a demo to talk to our experts, get answers to your questions, and explore your needs.