In the current microservices DevOps environment, developers and teams face tough new and evolving challenges on top of the more traditional ones. From worsening versions of already common threats to entirely new generations of them, securing microservices requires new perspectives, and those perspectives may not be intuitive even for otherwise sophisticated DevOps and data teams.
As thoroughly detailed previously in Machine Learning & AI for Microservices Security, when dealing with microservices, we’re ultimately talking about more code – way more code. More lines of code mean a greater risk of introducing vulnerabilities, and microservices entail much more complexity when it comes to security. So how can companies, their IT teams, and their development teams stay on top of such an amorphous threat ecosystem, one with far more places for bugs to hide and more ways to conceal indicators of attack and compromise?
Consider an API Gateway, such as Kong Gateway.
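One way a gateway helps is by centralizing cross-cutting security concerns, such as authentication and rate limiting, so they don't have to be re-implemented in every service. A minimal sketch in Kong's declarative configuration format (the service name, upstream URL, route path, and limits below are hypothetical placeholders):

```yaml
_format_version: "3.0"
services:
  - name: orders-service            # hypothetical internal service
    url: http://orders.internal:8080
    routes:
      - name: orders-route
        paths:
          - /orders
    plugins:
      - name: key-auth              # require an API key on every call
      - name: rate-limiting
        config:
          minute: 60                # cap each consumer at 60 req/min
          policy: local
```

With a setup along these lines, every request to /orders passes through the same authentication and throttling checks before it reaches the service, regardless of which team wrote that service or in which language.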
As Chris Richardson has noted, microservices can be described simply as an architectural style. But whether or not that description makes for a simpler world, Richardson agrees with most of the industry that complexity is the distinguishing hallmark of the microservices architectural model. All that complexity exposes more of your DevOps environment in many ways. Let’s examine the challenges it presents, in each of their diverse forms.
Because microservices communicate with each other so extensively via APIs, independent of machine architecture and even programming language, they create more attack vectors. More interacting services also mean more possible points of critical failure, which requires DevOps and security teams to stay one step ahead of service interruptions. When a microservice breaks down, it is not operable, and when microservices are not operable, it is harder to see whether they are contributing to security threats, or whether the outage is part of an attack in progress.
Transitioning from a monolithic DevOps environment to a microservices environment is an unwieldy trade-off at best, but it is becoming a necessary one to stay competitive and support growth. As Jan Stenberg has observed, the need for varying API lifecycles was at the crux of the move from monolith to microservices for many organizations in 2016, and it remains so today.
A microservices setup is also much harder to maintain than a monolithic one, because each microservice may evolve from any of a wide variety of frameworks and coding languages. The branching complexity of stack support will influence decisions each time new microservices are added to the mix, and every additional language used to build a new microservice affects security through the work of keeping it stable alongside the existing setup.
The new DevOps microservices ecosystem is spread out – way out. Because microservices are distributed, stateless, and therefore necessarily independent, there will be more logs, and more logs threaten to camouflage issues as they pop up. With microservices running on multiple hosts, it becomes necessary to send logs from all of those hosts to a single, external, centralized location. For microservices security to be effective, logging needs to correlate events across multiple, potentially differing platforms, which requires a higher vantage point, independent of any single API or service.
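Correlating events across services usually comes down to two things: structured log records a central collector can parse, and a shared correlation ID carried with each request. A minimal sketch of both, assuming a hypothetical X-Correlation-ID header convention:

```python
import json
import logging
import sys
import uuid

# Structured JSON logs let a central collector parse and join events
# emitted by many services running on many hosts.
class JsonFormatter(logging.Formatter):
    def format(self, record):
        return json.dumps({
            "service": record.name,
            "level": record.levelname,
            "message": record.getMessage(),
            # The correlation id ties together every log line that
            # belongs to one request as it crosses service boundaries.
            "correlation_id": getattr(record, "correlation_id", None),
        })

def get_logger(service_name):
    """Build a logger that writes one JSON object per log line."""
    logger = logging.getLogger(service_name)
    handler = logging.StreamHandler(sys.stdout)
    handler.setFormatter(JsonFormatter())
    logger.addHandler(handler)
    logger.setLevel(logging.INFO)
    return logger

def correlation_id(headers):
    """Reuse the caller's id if present, otherwise mint a new one."""
    return headers.get("X-Correlation-ID", str(uuid.uuid4()))
```

Each service reuses the inbound ID on its own log lines and forwards it downstream, so the central platform can reconstruct a single request's path across every host it touched.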
Monitoring presents a new problem of degree with microservices. As new services are piled onto the system, maintaining and configuring monitoring for all of them becomes a challenge in its own right, and automation is required just to keep monitoring in step with changes at scale. Monitoring must also account for load balancing as part of security awareness, not just attacks and subtle intrusions.
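The automation point can be made concrete: rather than hand-editing a monitoring config for each new service, the probe list can be derived from a service registry. A minimal sketch, where the registry contents and the /healthz path are hypothetical:

```python
import urllib.request

# Hypothetical registry: in practice this would come from service
# discovery, not a hard-coded dict.
SERVICE_REGISTRY = {
    "orders":  "http://orders.internal:8080",
    "billing": "http://billing.internal:8080",
}

def health_targets(registry, path="/healthz"):
    """Derive one probe URL per registered service, automatically."""
    return {name: base + path for name, base in registry.items()}

def probe(url, timeout=2.0):
    """Return True if the service answers its health endpoint."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False
```

Adding a service to the registry automatically adds it to the monitored set, so coverage no longer depends on someone remembering to update the monitoring configuration.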
Applications are, one way or another, the bread and butter of microservices teams, and API security doesn’t simply go away once microservices security is in place. Each additional service brings the same old challenge of maintaining and configuring API monitoring from the API team’s perspective. If application monitoring is not end-to-end, as Jonah Kowall has argued, it becomes too taxing to isolate or address issues. Without automation, teams are unlikely to be able to monitor changes and threats at scale across every exposed service.
Using request headers to pass data between services is a common method, and it can reduce the number of requests made. But when many services rely on it, team coordination must increase, become more efficient, and itself be simplified. With larger numbers of requests, developers also need to understand the timeframe for processing them. The serialization and deserialization of requests is likely to build up and become unwieldy without adequate tools and methods in place to keep tabs on requests and tie them into an autonomous security apparatus that works at scale.
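The coordination burden shrinks when the set of context headers is agreed on once and enforced in code, rather than negotiated per service pair. A minimal sketch of an allow-list for header propagation; the header names are illustrative, not a standard:

```python
# One shared, agreed-upon list of context headers, instead of each
# team forwarding (or dropping) headers ad hoc. Names are examples.
PROPAGATED_HEADERS = ("X-Request-ID", "X-User-ID", "X-Tenant-ID")

def outgoing_headers(incoming):
    """Copy only the agreed context headers onto a downstream call.

    An allow-list also prevents sensitive inbound headers (cookies,
    internal tokens) from leaking to every downstream service.
    """
    return {k: v for k, v in incoming.items() if k in PROPAGATED_HEADERS}
```

Because the list lives in one shared module, adding or removing a context field is a single reviewed change rather than a round of cross-team coordination.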
Fault tolerance in the microservices model is more complex than in a legacy monolithic system. Services must be able to cope with service failures and timeouts that occur for obscure reasons, and when such failures pile up, they can affect other services and create clustered failures. Microservices require a new focus on interdependence and a new model for ensuring stability across services – easier said than done without a centralized microservices security platform in place.
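A common pattern for containing those clustered failures is a circuit breaker: after repeated failures, callers fail fast instead of piling more requests onto a struggling service. A minimal sketch (thresholds and the in-process design are illustrative; production systems typically use a battle-tested library):

```python
import time

class CircuitBreaker:
    """After `max_failures` consecutive failures, short-circuit calls
    for `reset_after` seconds so one failing service cannot drag its
    callers down with it."""

    def __init__(self, max_failures=3, reset_after=30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                # Open: fail fast without touching the sick service.
                raise RuntimeError("circuit open: failing fast")
            # Half-open: allow one trial call through.
            self.opened_at = None
            self.failures = 0
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0  # any success resets the count
        return result
```

The security angle is the same as the stability one: a burst of short-circuited calls is a clear, centrally observable signal that a dependency is misbehaving, whether from an outage or an attack.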
While caching helps to reduce the sheer frequency of requests, it is a double-edged sword. As caching boosts the capability of the DevOps environment, cached requests inevitably grow to serve a growing number of services. The extra reserve that caching provides increases complexity, as well as the need for inter-service team communication. Automating, ordering, and optimizing that communication becomes a new requirement that may not have existed in a monolithic DevOps environment.
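Both edges of that sword show up even in the simplest cache. A minimal TTL cache sketch: it cuts inter-service request volume, but a stale entry can also mask a change (or a revoked credential) until it expires, which is exactly the kind of trade-off teams must coordinate on:

```python
import time

class TTLCache:
    """Serve repeated lookups from memory for `ttl` seconds,
    trading freshness for fewer inter-service requests."""

    def __init__(self, ttl=30.0):
        self.ttl = ttl
        self._store = {}  # key -> (timestamp, value)

    def get_or_fetch(self, key, fetch):
        now = time.monotonic()
        hit = self._store.get(key)
        if hit is not None and now - hit[0] < self.ttl:
            return hit[1]          # fresh hit: no network call
        value = fetch()            # miss or expired: call the service
        self._store[key] = (now, value)
        return value
```

Choosing `ttl` is the inter-team conversation in miniature: too short and the cache stops relieving load, too long and every consumer acts on stale data.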
As responsibility for microservice integrity spreads across teams, DevOps is looped into security at a new level of intensity. Thinking with “one security brain” becomes a collective rather than a hierarchical endeavor. Gone are the days when a security officer could dictate requirements downward in any meaningful way. Collaboration and regular contact points at both the macro and micro level become a necessity and must be worked into the data and dev culture. Good security design is a matter of collaborating and reconciling needs and desires against the baseline of known and emerging threats to microservices and the virtualized structures they combine to create.
For all of the reasons unfurled above, forward-looking security design becomes a priority. Far from a “my way” approach handed down by a siloed security team, DevOps culture has reached a point where all possible factors feed into a stable process for forming security policy and protocol. Without knowing how security practices will affect everyone’s changing day-to-day team concerns, creating a good security design is virtually impossible.
Agile scrum may replace the authority silo in the microservices model, but how can teams devote more time to scrum on security issues without detracting from their other duties? With complexity reshaping the DevOps scene and forcing organizations to rethink their modus operandi, autonomy will need to replace the human factor, not just for service integrity but for the entire realm of security concerns, if DevOps teams are to stay focused on their core activities.
The security model must take all 10 of the factors above into account if it is to thrive and grow into a self-sufficient, fully adequate marriage of tools and security culture. Such a holistic system needs to act autonomously on known threats, to make a continuous machine-learned contribution to the known-threat baseline for all of those individual, unique APIs, services, and diverse containers, and to have some ability to predict where threats might appear next in this specific environment, rather than relying on the baseline alone and thereby creating a false “normal environment” that doesn’t match the actual one.
For more information about taking the logical next steps in securing a microservices DevOps workplace, read about the 5 Best Practices for Securing Microservices at Scale.