What Is a Kubernetes Deployment?
Kubernetes allows you to deploy new services and roll out updates to existing services without downtime while keeping control of when your changes will become visible to users.
An Overview of a Kubernetes Deployment
Kubernetes (or K8s if you prefer) is a container orchestration platform that automates the process of deploying, scaling and managing containers. It is ideal for microservice architectures, where services are deployed and scaled independently.
With Kubernetes your containerized application is deployed to the servers in the cluster (the worker nodes) using pods. Each pod is an instance of your application or of a particular microservice that forms part of your app. By grouping pods into services (not to be confused with microservices!) and exposing only the service, Kubernetes makes pods interchangeable and ensures they can be replaced automatically.
A deployment is a description of the desired state of the pods, which the Kubernetes controllers then work to make a reality. You can use deployments to roll out a new application or microservice or update an existing one.
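As a sketch, a minimal Deployment manifest might look like the following (the name, image, and port are illustrative, not from any real application):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                # illustrative name
spec:
  replicas: 3                 # desired number of pod instances
  selector:
    matchLabels:
      app: my-app             # must match the pod template's labels
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: example/my-app:1.0   # hypothetical container image
          ports:
            - containerPort: 8080
```

Once applied, the Deployment controller continuously reconciles the cluster toward this desired state, creating or replacing pods as needed.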
Why Does Deployment Matter?
In the past, releasing a change to an application was a big deal, potentially involving several hours of downtime while servers were taken offline, updated and re-deployed, followed by many more hours of nervous watching and waiting to see if everything was still working as expected. The experience for end users was poor, ranging from several hours of a service being unavailable if things went well, to downtime followed by further interruptions and a potentially buggy system if things went badly.
For developers, the arduous release process and the need to give plenty of notice (or further degrade the user experience) deterred the small, regular releases that could have provided valuable feedback from users. Meanwhile, the effort required to script each release so it was repeatable meant best practice was often an aspiration rather than a reality.
Kubernetes changes all of that by leveraging cluster resources to avoid any downtime while automatically monitoring the health of servers hosting the application (the worker nodes and the pods they contain) and either rolling back or replacing instances as needed, without manual intervention. Because each deployment is recorded as a configuration in a YAML file, it’s versioned and repeatable, so the same steps can be trialed in pre-production environments before going live.
Kubernetes is particularly well suited to microservice architectures, as each microservice can be deployed, updated and scaled independently by addressing the pods associated with it. The various deployment strategies provide teams with options for testing the water before replacing all instances of the service, or for rolling back if something goes wrong. Deployments also make it easy to scale individual services independently of one another. As automated deployments are quicker and more reliable, it’s much easier for developers to roll out regular updates to a service.
Kubernetes Deployment Strategies
There are multiple strategies for deploying your application to your Kubernetes cluster, each with different advantages. The best one to use will depend on the situation.
Ramped – A ramped or rolling deployment is the default deployment strategy with Kubernetes. New pods are brought online and traffic is directed to them to ensure they are working as expected before old pods are removed. This is particularly useful for stateful applications, as Kubernetes keeps old pods alive for a grace period after redirecting traffic to the new pod to allow any open transactions to terminate.
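A rolling update can be tuned in the Deployment spec itself; for example (values are illustrative), this fragment brings up at most one extra pod at a time and never drops below the desired replica count:

```yaml
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # at most one pod above the desired count during rollout
      maxUnavailable: 0    # never go below the desired replica count
```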
Recreate – Unlike the other deployment strategies, a recreate strategy does involve downtime, as all pods are terminated before new pods are brought online. This avoids having two versions of a container running at the same time.
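In the manifest, this is simply a matter of setting the strategy type:

```yaml
spec:
  strategy:
    type: Recreate   # terminate all existing pods before starting new ones
```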
Blue/Green – With a blue/green deployment new pods are deployed alongside the existing pods. The new pods are tested before redirecting traffic to them. Although this strategy requires double the resources, it makes it much easier to roll back in the event of a problem arising with the new deployment.
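Blue/green is not a built-in strategy type; one common way to implement it is to run two Deployments (labeled, say, blue and green) and switch the Service's label selector when the new version has been verified. A hedged sketch, with illustrative names and labels:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app            # illustrative name
spec:
  selector:
    app: my-app
    version: green        # change from "blue" to "green" to cut traffic over
  ports:
    - port: 80
      targetPort: 8080
```

Rolling back is then just a matter of switching the selector back to the previous version's label.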
Canary – In a canary deployment, a subset of users acts as the proverbial canary in the coal mine. A small number of pods are updated to the new version and some traffic is routed to them. If an error occurs, the damage is limited and the change can be rolled back. A canary deployment is useful if you’re not confident about the change being released and want to test it in production.
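One simple native approach is to run a small canary Deployment alongside the stable one. Because both carry the same `app` label, a Service selecting on that label splits traffic between them roughly in proportion to pod count. This sketch assumes a stable Deployment of nine replicas already exists; all names and images are illustrative:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-canary         # illustrative name
spec:
  replicas: 1                 # 1 canary pod vs 9 stable pods ≈ 10% of traffic
  selector:
    matchLabels:
      app: my-app
      track: canary
  template:
    metadata:
      labels:
        app: my-app           # same app label as the stable Deployment
        track: canary         # distinguishes canary pods for monitoring
    spec:
      containers:
        - name: my-app
          image: example/my-app:2.0   # hypothetical new version
```

Finer-grained traffic splitting (for example, by percentage regardless of pod count) typically requires a service mesh or ingress controller.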
A/B Testing – A/B testing uses the same approach as a canary deployment but the purpose is to inform business decisions based on usage data from the two different versions that have been deployed. Typically there will be a single difference in functionality, and various KPIs will be tracked to see which version produces better results.
Kubernetes Deployment Tools & Services
While you can implement these deployment strategies natively by updating YAML files and applying them to the cluster using the command line interface, kubectl, there are various tools and services that make the process easier.
Cloud providers such as AWS and Azure offer managed services that take care of the underlying infrastructure so that deployments can scale automatically. Because canary deployments and A/B testing can be complex to implement using only Kubernetes native functionality, third-party tools such as Istio have emerged to fill the gaps.
Kubernetes provides a range of options for rolling out changes to containerized applications automatically. All deployments are versioned and automated, making them faster and more reliable than manual releases. If you’re developing microservices, Kubernetes enables you to deliver updates to individual services rapidly and frequently, while also allowing you to scale those services independently.
What is a Kubernetes deployment?
A deployment is used to roll out a new containerized application or update an existing application to a new container version.
How does a Kubernetes deployment work?
To deploy a new version of a container, you update the configuration file that specifies the container image and replication requirements and apply it to the cluster. Kubernetes then implements the changes required to match the specification.
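For example, bumping the image tag in the Deployment's pod template (names and tags here are illustrative) and re-applying the file is enough to trigger a rollout:

```yaml
# after editing, apply with: kubectl apply -f deployment.yaml
spec:
  template:
    spec:
      containers:
        - name: my-app
          image: example/my-app:2.0   # bumped from 1.0 to trigger a rollout
```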
How do I access a Kubernetes deployment?
Kubernetes deployments can be applied using the command line interface, kubectl, or from a native or third-party tool, such as the Kubernetes dashboard.
How do I automate Kubernetes deployment?
Automation is at the core of Kubernetes. Once you have changed the specification for the cluster, Kubernetes works to apply the changes. Kubernetes continuously monitors the cluster status, addressing failures as soon as they occur.
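Much of this monitoring is driven by health probes declared in the pod template. As a sketch (paths, ports, and image are illustrative assumptions), a readiness probe gates traffic until a pod reports healthy, while a liveness probe restarts a container that stops responding:

```yaml
spec:
  template:
    spec:
      containers:
        - name: my-app
          image: example/my-app:1.0   # hypothetical image
          readinessProbe:             # pod receives traffic only once this passes
            httpGet:
              path: /healthz
              port: 8080
          livenessProbe:              # container is restarted if this starts failing
            httpGet:
              path: /healthz
              port: 8080
            initialDelaySeconds: 10   # give the app time to start before probing
```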
Want to learn more?
Request a demo to talk to our experts, get answers to your questions, and explore your needs.