July 3, 2018

Kubernetes: How Did We Get Here?

This is the first of two blogs examining the history and future of Kubernetes. For more information, check out our e-book Kubernetes: The Future of Infrastructure.


Kubernetes is a hot topic right now, and with good reason: it represents a large part of the future of software development. If you’re reading this, you may already be familiar with some of the benefits promised by Kubernetes, such as simplified management, enhanced scalability, and increased visibility, among others. However, it’s hard to appreciate the significance of these advancements without the right context. To properly frame the benefits of Kubernetes, we’ll ask the two questions inherent in every technological advancement: “How did we get here?” and “Where are we going?” In this post, we’ll focus on the first of these by examining the IT developments that led us to where we are today and that laid the groundwork for Kubernetes.

Better than Bare Metal

Once upon a time, if you wanted to grow your infrastructure to support software development, you had to purchase additional servers and physically scale your hardware to meet your application’s needs. This was less than ideal: it was resource- and time-intensive to build out, and you then had to maintain it all for performance and availability, which layered on even more time and expense. Fortunately, a company called VMware brought virtualization (virtual machines) to the mainstream, and with it increased flexibility, scalability, reliability, and overall performance at lower cost. This advancement led to an explosion of innovation in software development. With infrastructure presenting less of a bottleneck, developing software became cheaper and faster. However, just as with bare metal before it, the demands of software development would eventually outstrip what virtual machines (VMs) could offer.

Evolving App Development

There are many parallels between the rise of virtual machines and the rise of containers. At their simplest, containers build on VMs in the same way that VMs built on bare metal: they allow us to get the most out of our infrastructure. Containers, however, accomplish this at far greater density. With containers, we can run many more workloads on each virtual machine, increasing its efficiency. Because each container is isolated, we can also run any type of workload inside one, confident that each workload is protected from the others. This resource efficiency comes with obvious benefits. It brings a more efficient approach to software development, increasing engineering agility by reducing wasted resources and empowering teams to build and share code more rapidly in the form of microservices. On top of this, containerization improves scalability through a more lightweight and resource-efficient approach.

Ease of use is another important benefit of containers. Just as virtual machines were easier to create, scale, and manage than physical hardware, containers make it even easier to build software because they start up in seconds (the quick sketch below shows just how little it takes to launch one). Containers let us run a lightweight, isolated process on top of an existing virtual machine, so we can scale quickly and easily without getting bogged down in DevOps busywork. And, much as VM orchestration tools did for VMs, container orchestration gives us an opportunity to further enable and enhance the benefits of containers.
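As a rough illustration of that speed, here is a minimal sketch using the Docker Python SDK, assuming Docker is installed locally and the nginx image and port numbers are placeholders chosen just for this example:

```python
import docker  # pip install docker

# Connect to the local Docker daemon using environment defaults.
client = docker.from_env()

# Start an isolated nginx container in the background, mapping port 80
# inside the container to port 8080 on the host. Startup takes seconds,
# not the minutes a new VM would need.
container = client.containers.run(
    "nginx:1.25",
    detach=True,
    ports={"80/tcp": 8080},
)

print(container.short_id, container.status)

# Tear it down just as quickly when we're done.
container.stop()
container.remove()
```

Stopping and removing the container is just as fast as starting it, which is part of what makes experimenting with containers so cheap.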

The Case for Container Orchestration

Container adoption has necessitated container orchestration in much the same way that the adoption of virtualization forced companies to use tools to launch, monitor, create, and destroy their VMs. Like VMs, containers must be monitored and orchestrated to ensure they are working properly; doing this manually would risk losing many of the primary benefits containers offer. For instance, if we wanted to run multiple containers across multiple servers and virtual machines, as microservices require, handling all of the moving parts would demand a huge DevOps effort. All of those moving pieces force us to answer several questions: when should the right containers start, how do the containers talk to each other, what are the storage considerations, and how do we ensure high availability across our infrastructure? Fortunately, tools like Kubernetes answer exactly these questions, allowing developers to better track, schedule, and operationalize containers at scale. This lets us realize more of the value of containers and microservices, and helps open the door to transforming the way we develop, maintain, and improve software.
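To make the orchestration idea concrete, here is a minimal sketch using the official Kubernetes Python client, assuming a reachable cluster and a local kubeconfig; the "hello-web" name and nginx image are placeholders for illustration only. We declare the desired state once, and Kubernetes takes care of scheduling the replicas across nodes and restarting them if they fail:

```python
from kubernetes import client, config  # pip install kubernetes

# Load cluster credentials from the local kubeconfig (~/.kube/config).
config.load_kube_config()

# Describe the desired state: three replicas of a simple web container.
deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="hello-web"),
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels={"app": "hello-web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "hello-web"}),
            spec=client.V1PodSpec(
                containers=[
                    client.V1Container(
                        name="web",
                        image="nginx:1.25",
                        ports=[client.V1ContainerPort(container_port=80)],
                    )
                ]
            ),
        ),
    ),
)

# Hand the desired state to the cluster; Kubernetes handles the scheduling,
# networking, and restarts from here.
client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```

Scaling up later is a one-line change to the replica count; the cluster's control loops reconcile the rest, which is exactly the busywork we'd otherwise be doing by hand.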

So, you want to know how Kubernetes works, how it will change infrastructure, and how it can help you? Check back with us next week, when we’ll dive into “Where are we going?” with Kubernetes.