The Soft Side of APIs:
Making Better Decisions for Building a Technology Stack for APIs and Microservices
At Kong, I get the chance to discuss with various organizations their plans and projects to adopt microservices and expose them through APIs. During these discussions, I’ve started to recognize patterns that appear with regularity – patterns that have less to do with technology than with people. Technologists and engineers like myself usually do not pay much attention to the “softer” aspects of technology implementations, yet considering these patterns leads to better decisions when building a technology stack for APIs and microservices.
Here are my observations:
Organizations Using X Heavily Lean Even More Into X Direction
Substitute anything you’d like for X: Java, .NET, Go, Node.js, or perhaps even LISP. You may also substitute a technology platform under a software vendor’s brand. Take your pick: Amazon, IBM, Google, Oracle, Microsoft, SAP, etc.
“We are an X shop.” The logic here is that using technology from a single vendor is the superior path for a number of potential reasons, including cost, speed, risk, ease of doing business, compatibility, consistency or standardization. By contrast, non-incumbent candidate solutions are assumed to cost more, be slower to implement, carry more risk, and so on.
While this may indeed have merit, it also has a few flaws:
- Big vendors acquire technology regularly, and it usually takes a while to integrate it. The quality of the integration is not always ideal. Sometimes these technologies stagnate or die altogether, becoming less useful over time and missing important capabilities.
- Relying heavily on a single vendor can put you on the path to writing a blank check.
- Single technology dominance reduces flexibility and innovation.
- Any single technology will have its limitations – a fact often neglected by its proponents. As the saying goes, “When all you have is a hammer, everything looks like a nail.”
One reason microservices are gaining traction is the freedom they offer different teams to deliver using different platforms concurrently. Ruby? “Sure.” Node.js? “Yes, in this use case.” Java? “Why not, the library is already available.” More choice means more flexibility, potentially faster delivery and more room for innovation.
The most impressive developers and engineers I have met were comfortable with a variety of technologies and did not let popularity dictate their approach. I would wager that successful managers and architects pay attention to their environment and current technology but are not afraid to use different tooling if the situation requires it.
The Antidote: “Do not let a single technology, vendor or approach limit your options.”
At Kong, there is no single “right” way to deploy the technology. We work with teams that use VMs, bare metal, containers or a combination thereof. We work with teams that run on-premises, in the cloud or both. We also work with teams that are on the path from one pattern to another. They appreciate the flexibility available to them and recognize its importance.
People, Process and Technology
People: Who. Process: How. Technology: What.
When I ask organizations I engage with about processes, I am occasionally encouraged by answers regarding their approach for building APIs and microservices. When I hear agile, CI/CD, DevOps, containers, etc., I know I am working with people who are working to stay ahead of technical debt by releasing often and making corrections on shorter time horizons.
Every once in a while, I am lucky to witness a change agent brought in to transform an organization. Without fail, their primary task is to change the mindset of how to do things. Change management is a challenge that cannot be taken lightly. Bless these brave souls.
Going to the cloud? Going to use containers? Going to Agile? You can get the tools, and you can hire or train. Pay extra attention to the process. Getting it right is critical. For those facing this challenge, the Managing Complex Change Model by Knoster is a good reference.
Because Kong is agnostic about how it is deployed and managed, and flexible enough to adapt to varied deployment scenarios, understanding a team’s environments and intended pipelines is never a boring discussion. Invariably, the lifecycle-management discussion extends beyond CI/CD details – whether declarative or imperative configuration is more suitable, for example – to API onboarding, mocking, authoring and testing. This highlights the scope of the change management an organization is undertaking and again stresses the importance of a sound process, including supporting SLAs in line with the priority of your project. These are all capabilities we cover at Kong that go beyond classic API management.
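To make the declarative-versus-imperative distinction concrete, here is a minimal sketch of what declarative configuration for Kong can look like, in the style of Kong’s DB-less/decK YAML format. The service name, upstream URL and route path are hypothetical examples, not anything from a real deployment:

```yaml
# Minimal declarative (DB-less) Kong configuration sketch.
# "orders-service" and its upstream URL are hypothetical.
_format_version: "3.0"
services:
  - name: orders-service
    url: http://orders.internal:8080   # upstream the gateway proxies to
    routes:
      - name: orders-route
        paths:
          - /orders                    # requests matching /orders hit this service
```

The same end state could be reached imperatively through a sequence of Admin API calls, but a declarative file like this can live in version control and flow through the same CI/CD pipeline as application code – which is precisely where the process discussion above comes in.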
Not Invented Here
This one comes up on occasion – the well-known DIY approach. It can work when the organization has the capacity to make it happen and the effort is warranted. Some organizations have large technology teams and the resources to build their own solutions, but being able to do something is not, by itself, a good reason to do it. In general, it is rarely a good idea to reinvent the wheel. If adequate solutions exist that meet the technical needs, come with reasonable commercial terms and have a good chance of success, then building from zero is rarely the better alternative.
I witnessed this first-hand: a home-grown solution could not keep up with the demands of the organization, and its internal customers “went rogue,” as we call it. I worked for the vendor that helped one of the rogue teams, which completed its projects with speed, quality and remarkable success. The resentment and “Game of Thrones” antics that followed were hardly ideal.
People tend to be strongly loyal to their work and will defend it fiercely. They will gladly invest significant effort and time – and accept more risk and cost – to keep their work, and their pride, alive.
Kong has a formidable armory of plugins that address the usual aspects of API and microservice policy enforcement. I rarely see a situation where it is more advantageous for an organization to build and maintain numerous policy enforcement mechanisms for the sake of ownership. Maintained long enough, these investments become increasingly brittle and difficult to sustain. I am not aware of any developer who looks forward to writing their own OIDC implementation, for example.
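As an illustration of the alternative to hand-rolling such policies, here is a hedged sketch of enabling two of Kong’s bundled plugins in declarative configuration. The rate limits shown are arbitrary example values, not recommendations:

```yaml
# Hypothetical sketch: enabling bundled Kong plugins globally in
# declarative configuration, instead of writing the same policies yourself.
_format_version: "3.0"
plugins:
  - name: key-auth          # API-key authentication enforced at the gateway
  - name: rate-limiting
    config:
      minute: 60            # example limit: 60 requests per minute
      policy: local         # counters kept in memory on each node
```

A few lines of configuration like this replace what would otherwise be authentication and throttling code that every team writes, tests and maintains on its own.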
Don’t get me wrong. I am well aware of Google, Amazon and Facebook making their own hardware. I am aware of countless organizations that build custom systems. It is not a question of whether DIY can be a good approach – it is a matter of when.
Surely, these observations are not exhaustive, but they share something in common: the soft, human element. Pay attention to it in your projects. It will not guarantee success, but avoiding common pitfalls will help you reduce risk.