How Kubernetes Is Modernizing the Microservices Architecture
In this three-part blog series, we examine the rise of containers and the critical role Kubernetes plays in shaping the future of infrastructure.
The first in the series covers Next-Generation Application Development.
The second covers the Next Frontier: Container Orchestration.
And the third covers How Kubernetes Gets Work Done.
The goal is to understand the organizational and technical advantages that Kubernetes brings to microservice-based applications, along with better methods for deploying, scaling and managing containerized applications.
Next-Generation Application Development
The world of IT infrastructure has evolved dramatically since the days of developing and running applications on bare metal hardware. In the early 2000s, VMware broke the mold by popularizing virtualization—a software abstraction (virtual machine, or VM) that looks and behaves like dedicated hardware. The primary advantages of using virtualized infrastructure include increased flexibility, scalability, reliability and performance, in addition to lower capital and operational expenses.
The rise of cloud computing and big data has fueled the adoption of another technology designed to improve flexibility and convenience at scale: containers, which in turn facilitated the development of microservices.
Like Virtual Machines, but Better
Containers are an efficient, modern way to package software. A container includes the application code itself and dependencies such as runtime, configuration, system tools and libraries. Because containers can package code with dependencies and run across various infrastructure types, they free developers from worrying about where their code will run in production.
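As a concrete illustration of packaging code with its dependencies, a container image is typically described in a short build file. The sketch below is a hypothetical Dockerfile for a small Python web service; the base image tag, file names and port are illustrative assumptions, not details from this article:

```dockerfile
# Hypothetical Dockerfile for a small Python web service (illustrative only)
FROM python:3.12-slim        # the language runtime ships inside the image
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt   # dependencies baked into the image
COPY . .
EXPOSE 8080                  # assumed port for this example service
CMD ["python", "app.py"]     # the same command runs on a laptop or in production
```

Built once (for example with `docker build -t my-service .`), the resulting image carries the code, runtime and libraries together, so it behaves the same on any host with a container runtime.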
There are many parallels between the rise of virtual machines and the rise of containers. Containerization has transformed how companies deliver applications in much the same way virtualization did in the early and mid-2000s.
The Role of Containers
Containers are another, lighter-weight way to isolate applications. Rather than embedding a full operating system the way virtual machines do, containers share the host server's operating system kernel. This makes containers a fraction of the size of virtual machines and more manageable for developers and operators. It allows operators to fit even more application instances on the same physical servers while preventing the workload in one container from interfering with others on the same server.
Since containers allow developers to package their code and dependencies, they facilitate a DevOps approach to software development. Developers no longer rely on operators to provision machines or virtual machines that can support their services; they can package their code with everything it needs, without any help. This speeds up development by reducing back-and-forth between teams and reducing wait time. Because containers are self-contained, they are easily shared via common repositories for testing and collaboration.
Ease of use is another core benefit of containers. Just as virtual machines were easier to create, scale and manage than physical hardware, containers make it even easier to build software because they can start up in a few seconds and can easily run on a laptop.
Gone are the days of running the heavy load of processes required by a full virtual machine. Instead, containers let us run lightweight, isolated processes in any environment: locally, in the cloud, or on on-premises servers or virtual machines. This makes them easy to develop and quick to scale without the overhead of DevOps busy work.
In the next part of this series, we discuss the next frontier in microservice architecture: Container Orchestration.
Take a technical deep dive into how Kubernetes interacts within your environment.