“DevOps” merges Development and Operations team functions through practices and tooling, all the while making continuous improvements to applications. Teams that adopt DevOps tools, culture and practices perform better and build faster.
Let’s walk through each stage of DevOps and the popular DevOps tools you may want to consider in 2022.
Stages of the DevOps Cycle
Ideally, the DevOps lifecycle is a continuous cycle of the following stages:
Plan
In this stage, DevOps teams ideate and define the capabilities of what they are building and how they might build it.
Project Planning Tools
The industry is not short on project and issue management tools. Which one is best? The answer is: the one that works for your team, as long as you use it consistently. Common choices include Jira, Shortcut and Asana. There are also issue management tools built into version control platforms like GitHub and GitLab (more on those later).
API Planning Tools
API design is an essential first step when building applications. Most teams use an API specification format like OpenAPI (formerly Swagger) to prototype, collaborate on and test APIs.
If you’re not sure where to start, look at the tools created by SmartBear, one of the main companies behind the Swagger specification. SmartBear offers Swagger Editor for API design and documentation and Swagger UI for rendering the spec. Both are available hosted, or you can run the code yourself.
With a focus on organization and collaboration through the API design process, Insomnia supports other protocols in addition to the typical REST, including gRPC and GraphQL. Insomnia recently added Inso, a CLI tool that integrates specifications into DevOps pipelines.
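As a sketch of this planning step, here is a minimal OpenAPI definition for one of the endpoints a team might design first (the API title, paths and responses are hypothetical):

```yaml
# Hypothetical OpenAPI 3.0 sketch for a product listing API
openapi: "3.0.3"
info:
  title: Hat Store API
  version: "1.0.0"
paths:
  /products:
    get:
      summary: List all products
      responses:
        "200":
          description: A JSON array of products
    post:
      summary: Create a product
      responses:
        "201":
          description: The created product
```

A spec like this can be rendered in Swagger UI or imported into Insomnia, so the whole team can review the API shape before any code exists.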
Build
In this stage, DevOps engineers collaborate on the code behind ideas and test and review that code.
Version Control Tools
Version control is the keystone of DevOps processes. It’s critical for managing, applying and reverting code changes.
Git is the most popular option. It allows distributed collaborators to work on individual versions of a codebase and then merge their changes back into the main version.
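That branch-and-merge flow looks roughly like this (repository and file names here are hypothetical):

```shell
# A sketch of a typical Git feature-branch flow
git init --quiet demo-repo
cd demo-repo
git config user.email "dev@example.com"
git config user.name "Dev"

echo "v1" > app.txt
git add app.txt
git commit --quiet -m "Initial commit"
git branch -M main            # name the main version explicitly

# Work on an individual version of the codebase
git checkout --quiet -b feature/new-endpoint
echo "v2" > app.txt
git commit --quiet -am "Update app"

# Merge changes back into the main version
git checkout --quiet main
git merge --quiet feature/new-endpoint
```

Hosted platforms like GitHub and GitLab layer review workflows (pull/merge requests) on top of this same merge step.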
Testing Tools
Automation is core to the DevOps method. Yet a team’s trust in automation hinges on its confidence in the quality of its code. Testing is one way to build that confidence.
There are different types of tests, with each type designed to target various aspects of code and an application. A developer either runs the tests manually or configures some event to trigger test automation.
Unit tests check individual units of code functionality using control data to ensure that they function as designed. Typically, programming languages have a choice of unit testing frameworks. Programmers use these to build tests alongside code.
End-to-end (E2E) testing (or “functional” testing) replicates application workflows from start to finish. It can ensure that applications work the way users actually use them, rather than the way development teams think they do. E2E tools include:
- Selenium (and newer tools that build upon it, such as Cypress), which take control of a browser to simulate user flows
- Cucumber, which describes those flows as test scenarios in plain language
- Appium, which brings similar automation to mobile platforms
Integration testing can seem similar to E2E testing. However, integration testing looks at the application from a different perspective, verifying that data flows between components as expected: database tables change, emails send, and so on.
Deploy
In this stage, DevOps teams deploy code to testing and production environments consistently and automatically.
CI/CD Tools
Before the introduction of CI/CD pipelines, software test and build processes were primarily manual. This all changed with the adoption of continuous integration (CI) for building code and running tests, continuous delivery (CD) for deploying code to environments, and continuous deployment (also CD; yes, it’s confusing) for releasing code to users.
Several DevOps automation tools for CI/CD exist, but popular options include Jenkins, Travis and CircleCI. They all do similar things but support different parts of the cycle and vary in hosting model (self-hosted or cloud-hosted), configuration method and extension support.
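To make the configuration-method point concrete, here is a sketch of a CircleCI pipeline that builds and tests a Node.js project on every push (the image tag and npm scripts are hypothetical):

```yaml
# Hypothetical .circleci/config.yml: run tests on every push
version: 2.1
jobs:
  test:
    docker:
      - image: cimg/node:18.17
    steps:
      - checkout
      - run: npm ci
      - run: npm test
workflows:
  build-and-test:
    jobs:
      - test
```

Jenkins expresses the same idea in a Jenkinsfile and Travis in .travis.yml, but the shape is similar: declare the environment, then declare the steps.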
Fundamentally, every other step in a DevOps workflow exists to deliver a new feature or fix to users. Before that code runs, language-specific tools build it (or package it to be more performant).
Infrastructure Management Tools
Containers are one of many infrastructure-as-code (IaC) related technologies that allow you to package a definition of environment, dependencies, application components and configuration as code, with all the benefits that it brings. Docker is the most well-known container technology, but it’s not the only option.
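As a sketch of that packaging idea, a Dockerfile for a hypothetical Node.js service might look like this (the file names and port are assumptions):

```dockerfile
# Hypothetical Dockerfile for a Node.js service: environment,
# dependencies and application code packaged as one image
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]
```

Because the environment is declared in code, every build of this image is reproducible and can run identically on a laptop or in production.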
Taking the IaC concept further is a suite of tools that abstract and manage the infrastructure around an application also as code. IaC tools fall loosely into two categories: those representing the configuration of services on already provisioned infrastructure and those that also provision new infrastructure. Newer tools generally fall into the latter category. These include:
- Terraform: describes the cloud-agnostic infrastructure and third-party services
- Chef and Puppet: describe the steps to take to provision environment configuration
- Ansible: describes how components and services relate to one another
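As one sketch of the provisioning category, here is a minimal Terraform configuration that declares a single cloud server as code (the region, AMI ID and instance type are hypothetical placeholders):

```hcl
# Hypothetical Terraform sketch: declare one AWS instance as code
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
}

provider "aws" {
  region = "us-east-1"
}

resource "aws_instance" "app_server" {
  ami           = "ami-0c55b159cbfafe1f0" # example AMI ID
  instance_type = "t3.micro"
}
```

Running terraform apply compares this declared state against what actually exists and creates or changes only what differs.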
Operate
In this stage, customers access the applications, and DevOps teams monitor the applications to identify any issues.
Tools for Running Code
Kubernetes has become the de facto tool for orchestrating container-based architectures. Kubernetes lets teams describe an application’s ideal infrastructure and then handles rollouts and rollbacks for you. Kubernetes also handles secrets and configuration management, service discovery and load balancing, and has an extensibility framework.
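That “describe the ideal state” approach looks like this in practice; below is a sketch of a Kubernetes Deployment (the application name, image and port are hypothetical):

```yaml
# Hypothetical Kubernetes Deployment: declare the desired state
# and let Kubernetes handle rollouts and rollbacks
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hat-store
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hat-store
  template:
    metadata:
      labels:
        app: hat-store
    spec:
      containers:
        - name: api
          image: example/hat-store:1.0.0
          ports:
            - containerPort: 3000
```

Changing the image tag and re-applying this manifest triggers a rolling update; Kubernetes replaces pods gradually and can roll back if the new version fails.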
Tools for Optimizing Running Applications
When managing a mixed infrastructure (for example, services that aren’t running in containers, or that run in containers elsewhere), more tools can enhance Kubernetes.
For example, a service mesh enhances some of the traffic and security management offered by Kubernetes. Open source offerings include Istio and Kuma, while Kong Mesh is an enterprise-grade service mesh built on top of Kuma. Service mesh features include routing traffic to different services to match regions or testing groups, rate limiting, access control and multi-cloud support. Often running alongside a service mesh are L7 proxies such as Kong proxy, which also handle routing and balancing of incoming traffic.
Application Monitoring Tools
There is little point in building an application if you don’t have insights into how it is running. When development teams built large monolithic applications, there was generally a single (or limited) source of information to monitor. The proliferation of microservices and containers has made it harder to monitor the performance of a single application, but a plethora of new tools have emerged to help.
Observability is a catch-all term that can now cover everything from more traditional application performance monitoring (APM) services to eBPF, time-series metrics data and tracing. The main difference between the approaches is perspective: looking down from the application to the underlying infrastructure, or up from the infrastructure to the application.
Early in the monitoring race was the “ELK” stack from Elastic. The stack consists of Elasticsearch for searching logs, Logstash for collecting and processing them and Kibana for visualizing log data. It’s a multi-purpose stack but one often used for monitoring application logs.
APM services that continuously monitor and track application performance and availability like New Relic, AppDynamics and Datadog were (and still are) the most popular observability solutions. Still, newer contenders are fast snapping at their heels.
While Prometheus started as a monitoring project in its own right, it has morphed into something of a standard for metrics data and influenced the development of OpenMetrics. Prometheus is often paired with Grafana, which has become a similar standard for graphing and visualizing metrics.
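A minimal Prometheus scrape configuration gives a feel for how metrics collection is wired up (the job name, target host and port here are hypothetical):

```yaml
# Hypothetical prometheus.yml fragment: scrape an application's
# /metrics endpoint every 15 seconds
global:
  scrape_interval: 15s
scrape_configs:
  - job_name: "hat-store"
    static_configs:
      - targets: ["hat-store:3000"]
```

The application exposes its metrics over HTTP, Prometheus pulls and stores them as time series, and Grafana queries Prometheus to draw the dashboards.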
For eBPF, which leverages features built into the Linux kernel to surface metrics as close to the source as efficiently as possible, ebpf.io is a great place to start.
Tracing takes observability metrics one step further. First, it tracks a request or process across many services or components. Then, it shows the time taken at each step and any connected metadata. Popular tools include Zipkin, OpenTracing and Jaeger.
Monitoring for issues and anomalies is a waste of time if no one knows about them. Some observability options provide alerting. Still, they don’t always provide an option to tell relevant team members about the alert. Services such as PagerDuty, Opsgenie and Splunk sit on top to alert relevant teams and individuals on the right channel. Those could include an internal messaging app for low-priority issues or SMS or phone calls for high-priority issues.
If you’re already using a service mesh, they often bundle observability tools with them. Kuma ships with Prometheus and Grafana, while Istio exports metrics and traces in standard formats.
Putting the Pieces Together
That was a lot of information! How do the stages and tools fit together to create a DevOps workflow?
Take the example of an eCommerce store that sells hats—lots of hats. They are selling so many hats that their development team adds new features every week to make the hat-buying experience as customizable and smooth as possible for customers.
The team started by describing the API behind the application and the other APIs it interacts with. This includes:
- API endpoints for listing products and individual products and endpoints for creating, updating and deleting a product
- API endpoints for creating, updating, deleting and listing orders
- External API endpoint calls to make and confirm payments with a provider
- Administration endpoints for viewing orders and products
After using Insomnia for API design, collaboration and testing, the team creates the API endpoints in a Node.js Express application with a React frontend. Different team members build the code in two different repositories, using Git for version control hosted with GitLab.
Whenever a developer wants to change the codebase, they make a merge request to the repository. This triggers the following test jobs run by CircleCI:
- Express unit testing with Mocha or Jest
- React E2E testing with Cypress
- API validation testing with Inso
- If tests fail, send a message to the #testing Slack channel
- If tests pass, use webpack to make an optimized production build
- Deploy the new build to production servers
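The steps above might map to a CircleCI configuration along these lines (the job names, npm scripts, test suite identifier and notification script are hypothetical):

```yaml
# Sketch of a .circleci/config.yml for the hat store's pipeline
version: 2.1
jobs:
  test:
    docker:
      - image: cimg/node:18.17-browsers
    steps:
      - checkout
      - run: npm ci
      - run: npm run test:unit        # Mocha/Jest unit tests
      - run: npm run test:e2e         # Cypress end-to-end tests
      - run: npx inso run test "API Smoke Tests"
      - run:
          name: Notify Slack on failure
          command: ./scripts/notify-slack.sh "#testing"
          when: on_fail
  build-and-deploy:
    docker:
      - image: cimg/node:18.17
    steps:
      - checkout
      - run: npm ci
      - run: npx webpack --mode production
      - run: ./scripts/deploy.sh      # push the build to production
workflows:
  test-build-deploy:
    jobs:
      - test
      - build-and-deploy:
          requires:
            - test
```

The requires key is what makes the pipeline gate deployment on passing tests: the build-and-deploy job only runs once the test job succeeds.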
The team has Kuma as a service mesh in front of the application, which helps route traffic to services. They could add authentication in the future, but right now, they use it to monitor endpoint traffic.
Through the years, the culture and DevOps practices have remained mostly unchanged. The available tools for DevOps, however, have exploded in number. As we’ve walked through the different stages of the DevOps cycle, looking at popular tools used along the way, perhaps you’ve found some new helpful tools to add to your toolbelt in the new year!
Let us know @thekonginc on Twitter which tools you love!