In the last blog post, we discussed the need for both speed and quality for your API delivery and how APIOps can help achieve both.
In this part of our blog post series, we’ll walk through what the API lifecycle looks like when following APIOps. We’re still following the best practices the industry has established over the years, but the processes we follow at each step of the API lifecycle – and between each step – have changed. We’re now incorporating key parts of DevOps and GitOps to bring modern automation and continuous testing to our API pipelines.
At design time, we use a design environment like Insomnia to easily create the API spec, which is typically a Swagger or OpenAPI (OAS) document. We also create a test suite for that spec. Here we should check for several things – do we get the responses we expect under certain conditions, does each response follow the expected format, and so on.
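As a rough illustration (the service name, path and schema below are entirely made up), a minimal OpenAPI 3 document for a hypothetical Orders service might look like the following; every response and schema in it is something the design-time test suite can assert against:

```yaml
# Minimal, illustrative OpenAPI 3 spec for a hypothetical Orders service
openapi: 3.0.3
info:
  title: Orders API
  version: 1.0.0
paths:
  /orders/{orderId}:
    get:
      summary: Fetch a single order
      operationId: getOrder
      parameters:
        - name: orderId
          in: path
          required: true
          schema:
            type: string
      responses:
        "200":
          description: The order was found
          content:
            application/json:
              schema:
                $ref: "#/components/schemas/Order"
        "404":
          description: No order exists with the given ID
components:
  schemas:
    Order:
      type: object
      required: [id, status]
      properties:
        id:
          type: string
        status:
          type: string
          enum: [pending, shipped, delivered]
```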
What’s critical here is that the tooling we use to create the spec and the tests gives us instant validation: linting the spec against best practices and the ability to run those tests locally to validate what you’re building. As the designer of the API, you need self-serve tooling that makes it easy to do the right thing from the beginning.
When you’ve created that spec and validated it locally, you then push it into Git – or whichever version control system you use – by raising a pull request for this new service.
This triggers a governance checkpoint embedded in our pipeline. Before any time is spent building the API, we need to be sure that what’s going to be built follows our company standards and is aligned with everything else in the ecosystem. Using a CLI like inso, we automatically invoke the API tests built for the spec and any other governance checks we want to include at this stage of the pipeline.
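As a hedged sketch of what that checkpoint could look like (assuming a GitHub Actions pipeline and an Insomnia export committed to the repo; the paths, suite name and flags are illustrative and will vary with your inso version), the pull request can trigger a job along these lines:

```yaml
# Illustrative governance checkpoint, triggered on every pull request that touches a spec.
# File paths, the test suite name and inso flags are assumptions; adjust for your repo.
name: api-governance-check
on:
  pull_request:
    paths:
      - "specs/**"

jobs:
  lint-and-test-spec:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: "20"

      # Install the Insomnia CLI (inso)
      - run: npm install -g insomnia-inso

      # Lint the OpenAPI spec against our governance rules / best practices
      - run: inso lint spec specs/orders-api.yaml

      # Run the design-time test suite created alongside the spec
      # (assumes the Insomnia export is committed and discoverable by inso)
      - run: inso run test "Orders API Smoke Tests" --ci
```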
This is an automated check, embedded in the pipeline and triggered by default when a spec is pushed into Git, which means there’s 100% coverage of these checks for every API that’s being designed, anywhere across the organization.
So we’re now consistently and instantly catching any errors or deviations from our standards as close to the beginning of the pipeline as possible. This means they’re much faster and cheaper to remediate, and we’re reducing the number of issues that end up getting deployed into production.
If the spec fails any of those tests, it gets automatically pushed back for more work in the design phase. If all the tests pass, then we have a validated spec and can now progress onto the build phase.
We build our API in the normal, best-practice way: we use the spec as the contract to tell us what the API needs to do and what the interface needs to look like, and we use the tests as we go to validate that the API we’re building meets the spec. Of course, you should be able to build this API in whatever tool or language is best for the use case, so long as it fulfills the spec.
As before, when the developer commits their code saying it’s ready for deployment, a series of tests are triggered.
We automatically execute the API tests that we built at design time again to make sure the API still meets our best practices and that the interface that’s been built is what the spec specified. These tests are effectively our unit tests and will also make sure the implementation of our API functions as it should. There may well be additional tests we want to carry out at this stage, still automatically.
If any of the tests fail, we know immediately. We do not deploy the API; we go back and make the necessary changes until our implementation is how we need it. And we can keep executing these tests for instant, continuous validation of what we’re doing.
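Continuing the same hypothetical pipeline sketch, a build-stage job might look roughly like this; the build and run commands are placeholders for whatever toolchain the API is actually implemented in, and the suite and environment names are carried over from the earlier example:

```yaml
# Illustrative build-stage job: the implementation's own unit tests plus the
# design-time spec tests, re-run against a locally started instance of the API.
# Build commands, ports and the "local" environment name are assumptions.
name: api-build-verify
on:
  push:
    branches: [main]

jobs:
  build-and-verify:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: "20"
      - run: npm install -g insomnia-inso

      # Build and unit-test the implementation (placeholder commands)
      - run: make build && make test

      # Start the API locally so the spec tests have something to hit
      - run: make run & sleep 5

      # Re-run the design-time test suite against the running implementation
      - run: inso run test "Orders API Smoke Tests" --env local --ci
```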
When those tests pass, we progress forward to deployment.
This is where we start to see more of a GitOps approach: once this round of automated tests has passed, we automatically generate a declarative configuration file for this API.
[Figure: Imperative vs. Declarative Programming]
The beauty here is that with APIOps, we don’t even need to write that simple declarative config file ourselves: it’s a YAML file, automatically generated from the API spec. So we can have it instantly, on demand, whenever we need it. And because it’s generated from the spec, it’ll be completely accurate and 100% consistent with that spec, so nothing in the spec gets forgotten and there’s no chance of human error in the deployment process.
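To make that concrete, inso’s generate config command can turn the spec into a decK-style declarative configuration. A trimmed, purely illustrative sketch of what such a file can look like is below; the exact output depends on your spec, plugins and tool versions, and the service name, upstream URL and plugin values here are made up:

```yaml
# Trimmed, illustrative decK-style declarative config generated from the API spec.
# Real output will differ; treat every value here as a placeholder.
_format_version: "3.0"
services:
  - name: orders-api
    url: http://orders.internal:8080
    routes:
      - name: orders-get-order
        methods: [GET]
        paths:
          - /orders
    plugins:
      - name: rate-limiting
        config:
          minute: 60
```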
The declarative configuration, automatically generated as part of the pipeline, tells the API platform what it needs to look like once that API has been deployed, and the platform goes off and configures itself. So we end up with our API registered in the platform, with the various security, governance and operational plugins for that API configured as well.
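In the same hypothetical pipeline, applying that desired state can be a single decK sync step. The addresses, file paths and flags below are placeholders, and newer decK releases phrase these commands as deck gateway diff and deck gateway sync:

```yaml
# Illustrative deploy job: apply the generated declarative config to the gateway.
# Assumes decK is already installed on the runner; secrets and paths are placeholders.
name: api-deploy
on:
  push:
    branches: [main]
    paths:
      - "kong/**"

jobs:
  deploy-gateway-config:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      # Preview the changes decK would make, then apply the desired state
      - run: deck diff --state kong/orders-api.yaml --kong-addr ${{ secrets.KONG_ADMIN_URL }}
      - run: deck sync --state kong/orders-api.yaml --kong-addr ${{ secrets.KONG_ADMIN_URL }}
```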
It’s also worth noting that we store this declarative config file in version control, along with the spec, tests and implementation of that API. This means we have a complete, searchable and auditable history of every deployment we’ve made. If there’s ever a problem once we’ve deployed the API, we can very easily roll back to a previous state – so we’ve made not just deployments easier but rollbacks as well.
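A sketch of what such a rollback can look like, under the same assumptions as the earlier pipeline snippets: check out a known-good commit and re-run the very same sync step.

```yaml
# Illustrative rollback: because the declarative config lives in Git, rolling back is
# just re-syncing from a known-good commit. The input and paths are placeholders.
name: api-rollback
on:
  workflow_dispatch:
    inputs:
      good_sha:
        description: "Known-good commit to roll back to"
        required: true

jobs:
  rollback-gateway-config:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          ref: ${{ github.event.inputs.good_sha }}

      - run: deck sync --state kong/orders-api.yaml --kong-addr ${{ secrets.KONG_ADMIN_URL }}
```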
Of course, once we’ve deployed the API, we need to validate that it performs as we expect and check that we haven’t caused any errors. Now that we’re in an environment where other APIs and code are deployed, we should do some integration testing, security testing, performance testing and so on, depending on where we are in the software development lifecycle.
So we’ll run that series of release checks before we actually publish this API and make it discoverable. These checks should also all be automated, although you may want a final sign-off as a manual step before you push that Publish button.
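One hedged way to express that, still in the illustrative GitHub Actions setup, is a release-check job that re-runs the tests against the freshly deployed environment and gates the publish step behind a protected environment whose rules can require a manual approval:

```yaml
# Illustrative release checks: re-run the test suite against the environment the API
# was just deployed to, then gate publication behind a protected environment.
# Environment names, the suite name and the publish step contents are assumptions.
name: api-release-checks
on:
  workflow_run:
    workflows: ["api-deploy"]
    types: [completed]

jobs:
  release-checks:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm install -g insomnia-inso
      # Integration / smoke tests against the freshly deployed environment
      - run: inso run test "Orders API Smoke Tests" --env staging --ci

  publish-to-portal:
    needs: release-checks
    runs-on: ubuntu-latest
    environment: production   # protection rules here give you the optional manual sign-off
    steps:
      # Placeholder for whatever automation registers the API and its spec in the Portal
      - run: echo "publish the Orders API spec to the developer portal"
```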
When you’re ready to publish that API, registering it in the Portal, enabling self-serve access and adding the spec for that API should be an automated process as well. Automation not only keeps you delivering at pace; it’s also the only way to guarantee that every API is discoverable and documented in the Portal.
What we’ve built up as we’ve gone through the API lifecycle is an inventory of assets that enable us to operate this API on an ongoing basis in an almost entirely self-sufficient way.
If we need to scale out the API to handle higher throughput, that can be completely automated using the declarative configuration. And because that configuration is version controlled, each deployment is completely repeatable and identical to the last.
The overall result when our API lifecycles follow APIOps is that continuous, automated testing and deployment lets us catch and resolve errors early, speeding up delivery and raising quality. This is the only way to operate continuously at scale without sacrificing productivity to manual checkpoints or lowering quality because of the need for speed.