What is Kubernetes?
Kubernetes, or K8s as it’s known for short, and container orchestration are changing the landscape of software development and deployment. But what exactly does Kubernetes do? In this article we’ll explain the basics and discuss the benefits that Kubernetes can offer you.
An Introduction to Using Kubernetes
To understand Kubernetes you first need to know a bit about containers. Containers provide a way to host applications on servers more efficiently and reliably than using virtual machines (VMs) or hosting directly on the physical machine. In a world where users expect systems to run with minimal downtime and increasingly complex applications require more and more computing resources and dependencies, containers make it easier to package up and deploy the underlying software across distributed systems.
How did we get here? Hosting applications directly on a physical machine runs the risk that if one application fails, it may take down all other applications running on the machine. One solution is to run a single application per server, but that’s hugely inefficient. Virtual machines improved on this by allowing multiple applications to be run on the same server, but isolated from each other so that the failure of one application doesn’t affect the others.
While VMs enabled more efficient use of hardware, they still carried a fair bit of overhead. Containers improved on VMs by being lighter (they share the host's operating system kernel rather than bundling a full guest OS) and easily portable, so they can be deployed to different physical or virtual infrastructure with minimal friction. Whereas you might only fit a handful of VMs on a single server, you can host dozens of containers. By making more efficient use of resources, containers are ideal for cloud-hosted infrastructure, where cost is a function of compute resources and time in use, but they can also be used on locally hosted servers.
While being able to host far more containers than VMs on the same kit is a benefit in terms of hardware cost, it also carries a potential drawback as the number of containers deployed in a live system may number in the hundreds, if not thousands. Manual management clearly isn’t realistic, hence the need for a container orchestration tool. That’s where Kubernetes comes in.
Kubernetes is an open-source platform that allows you to control the deployment, management and scaling of containers automatically, thereby realizing the benefits of both distributed computing and microservice architectures.
How Does Kubernetes Work?
Kubernetes manages containers hosted on multiple different machines that are networked together to form a cluster. Each machine (whether physical or virtual) is a node in the cluster. Worker nodes host containers in pods managed by the control plane. The control plane is usually hosted on a separate machine or cluster of machines.
The control plane provides the Kubernetes API, which you can call directly, via the command-line interface (kubectl), or from another program to configure the cluster. Kubernetes then takes care of deploying containers to worker nodes, ensuring that they are packed efficiently, monitoring their health and automatically replacing any failed or unresponsive pods. Unlike when managing physical servers or VMs, you generally don't need to interact with the individual nodes in a Kubernetes cluster. Kubernetes avoids tight coupling between applications and the machines they run on, treating pods as ephemeral and therefore disposable objects.
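As a rough sketch of what that interaction looks like in practice (the manifest filename here is a hypothetical placeholder), you describe the desired state and let Kubernetes converge on it:

```shell
# List the nodes that make up the cluster
kubectl get nodes

# Declare the desired state from a manifest file (hypothetical name);
# Kubernetes schedules the pods onto worker nodes for you
kubectl apply -f deployment.yaml

# Watch Kubernetes converge: failed pods are replaced automatically
kubectl get pods --watch
```

Note that at no point do you tell Kubernetes *which* node to use; placement is the scheduler's job.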
To learn more about how Kubernetes works, take a look at the Kubernetes Architecture.
What Can You Do With Kubernetes?
Originally developed by Google engineers to manage large clusters, Kubernetes is designed for scalability and reliability. For data-heavy organizations that need to respond rapidly to sudden peaks in demand, like the European Organization for Nuclear Research (CERN), Kubernetes makes it possible to scale systems up quickly and automatically as usage increases, and take machines offline again once they are no longer needed.
If you’re building a microservices-based application, whether from the outset or as a migration from an existing monolith, using containers makes it easier to deploy the individual services independently while fitting more services onto an individual server. By managing those containers automatically using Kubernetes, companies like Squarespace benefit from improved resiliency as the platform automatically detects and addresses failures to ensure an uninterrupted service.
The benefits of container orchestration are not limited to live systems. Using Kubernetes to automatically deploy containers and scale compute resources in a CI/CD pipeline can provide huge savings, both in terms of cost of cloud-hosted infrastructure and developer time. Rather than manually provisioning pre-production environments or waiting for resources to be available in order to run tests on the latest build, development teams at Pinterest can now get rapid feedback and deliver their changes to production faster.
Advantages of Kubernetes
Kubernetes is ideal for managing deployment and scaling of containerized applications. To deploy multiple instances of a particular service you can either define the number of replicas for a pod or enable autoscaling and have Kubernetes scale up and down automatically based on demand.
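For illustration (the deployment name is a hypothetical placeholder), both approaches are a one-liner with kubectl:

```shell
# Fix the number of replicas for a (hypothetical) Deployment
kubectl scale deployment my-app --replicas=3

# Or let Kubernetes scale between 2 and 10 replicas based on CPU usage
kubectl autoscale deployment my-app --min=2 --max=10 --cpu-percent=80
```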
In addition to distributing your containers across multiple hosts and automatically replacing any failed pods, the Kubernetes control plane itself can be configured for high availability. Control plane hosts can either contain both the data storage and control components or separate these out for even greater resiliency.
Containers provide a layer of abstraction between the application and the infrastructure that they run on, and Kubernetes leverages this to maximum effect. By treating pods and nodes as replaceable objects and constantly monitoring their health, Kubernetes can re-deploy automatically when a failure occurs.
Kubernetes is cloud-agnostic and can also be run on-premise, avoiding any vendor lock-in. Having proven to be the tool of choice for container orchestration, Kubernetes is supported by all major cloud vendors, many of which also offer managed Kubernetes services.
Terminology To Be Familiar With
If you’re new to Kubernetes the terminology can be off-putting. Here are some of the basics to get started with:
Cluster – A group of physical or virtual machines (nodes) running containerized applications.
Control plane – The brains of the operation (formerly known as the master). The control plane provides the components to deploy and manage containers across all worker nodes.
Node – A physical or virtual machine in a cluster. Each worker node includes a set of components that enable pods to run.
Pod – The smallest object that you can deploy with Kubernetes. A pod acts as a wrapper around a container. Each pod typically holds a single container but can contain multiple containers if they are tightly coupled.
Service – A set of pods running the same application or microservice can be grouped together to form a service. The service provides an abstraction layer and allows pods to be replaced easily. Pods in a service can be located on different nodes.
Ingress controller – A component that applies the ingress rules, which define how external traffic is routed to services within the cluster.
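To see how these terms fit together, here's a minimal sketch that creates a Deployment of pods and groups them behind a Service (all names and the container image are placeholders for illustration):

```shell
# Apply a Deployment (which manages the pods) and a Service in one go.
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: hello
        image: nginx:1.25
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: hello
spec:
  selector:
    app: hello
  ports:
  - port: 80
    targetPort: 80
EOF
```

The Service matches pods by their `app: hello` label, so individual pods can be replaced at any time without clients noticing.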
If you’re using or considering using containers to make building, scaling and deploying your microservice-based application more efficient, it’s worth exploring how Kubernetes can help you take the benefits of containerization to the next level.
What is Kubernetes?
Kubernetes is an open-source container orchestration tool that allows you to automate the deployment, management and scaling of containers.
How does Kubernetes work?
Kubernetes is installed on each machine (node) in your cluster and managed from the control plane. You use the control plane to instruct Kubernetes on how you want your application to be deployed and Kubernetes works to make it so, continuously monitoring the status of each object to ensure it matches the spec.
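You can see this spec-versus-status reconciliation directly (the deployment name here is hypothetical):

```shell
# Compare desired replicas (spec) with observed ready replicas (status)
kubectl get deployment my-app \
  -o jsonpath='desired: {.spec.replicas}, ready: {.status.readyReplicas}'
```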
What’s the difference between Kubernetes and Docker?
Docker is the software that enables containers, which allow multiple applications to run independently on the same machine. Kubernetes is software for deploying and managing the containers within a cluster of physical or virtual machines. Kubernetes supports several container runtimes, including Docker.
Why is Kubernetes called K8s?
The 8 in K8s simply stands in for the eight letters between the "K" and the "s" ("ubernete") in an otherwise tricky-to-type word.
Want to learn more?
Request a demo to talk to our experts, get answers to your questions, and explore your needs.