Engineering
September 11, 2019
6 min read

5 Best Practices for Securing Microservices at Scale

Kong

As outlined in a previous article on security challenges for microservices, DevOps teams are becoming more widely distributed, spread thin, and forced to plan for higher levels of interactivity as well as evolving national security "backdoor" measures.

Microservices, born from a still-emerging DevOps laboratory environment, can be deployed anywhere: on-prem, in the public cloud, or a hybrid implementation. Regardless of where the microservices are hosted, however, they require a scalable security strategy. Providing such security at scale becomes difficult for organizations with legacy computing environments that are transitioning to microservices. These new challenges require a plan that can unfold from infancy into maturity.

Greater organizational experience doesn't necessarily confer an advantage when grappling with the security and threat ecosystem of intertwined microservices. With this in view, we've prepared five generalized best practices to help any organization get up to speed on microservices security faster.

1. Use an API gateway

Many intrusions and more complex threats start at the access, authentication, or authorization level. Organizations that expose an API outside of their organization should deploy some form of API gateway for security purposes. Whether using a device-based approach or an intelligent reverse proxy, all API requests should be inspected and authenticated. API requests should also be appropriately vetted, so that only requests fitting the desired criteria are permitted. Further, to ensure auditability, every transaction should be logged.
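As a concrete illustration, here is a minimal sketch (in Python) of what that inspect-authenticate-vet-log sequence might look like inside a gateway. The key store, route allowlist, and handler name are hypothetical stand-ins for this sketch, not any real gateway's API:

```python
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("gateway.audit")

# Illustrative stand-ins: in practice these come from a secrets store
# and a route configuration, not in-memory constants.
VALID_API_KEYS = {"k-1234": "orders-service"}
ALLOWED_ROUTES = {("GET", "/orders"), ("POST", "/orders")}

def handle_request(method: str, path: str, headers: dict) -> tuple[int, str]:
    """Authenticate, vet, and log a single API request."""
    client = VALID_API_KEYS.get(headers.get("X-API-Key", ""))
    if client is None:
        audit_log.info("DENY unauthenticated %s %s", method, path)
        return 401, "invalid or missing API key"

    # Vet the request: only permit routes that fit the desired criteria.
    if (method, path) not in ALLOWED_ROUTES:
        audit_log.info("DENY %s: %s %s not allowed", client, method, path)
        return 403, "route not permitted for this client"

    # Every transaction is logged for auditability.
    audit_log.info("ALLOW %s: %s %s at %s", client, method, path,
                   datetime.now(timezone.utc).isoformat())
    return 200, "forwarded to upstream service"
```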

TLS should be used to encrypt API traffic to further secure data and communications. All API clients should be authenticated with an application identifier as well as a user identifier, for example via an SSL/TLS client certificate; an OAuth access token may also be used. Even otherwise anonymous API requests must carry a unique identifier, and other traffic control policies, such as rate and response limiting, should be applied in addition to logging.
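Rate and response limiting can be as simple as a sliding window keyed by the authenticated client. The sketch below is one illustrative way to implement it, with made-up limits; production gateways typically back this with a shared store such as Redis rather than process memory:

```python
import time
from collections import defaultdict, deque

# Illustrative limit: at most 100 requests per client per 60-second window.
WINDOW_SECONDS = 60
MAX_REQUESTS = 100

_request_times: dict[str, deque] = defaultdict(deque)

def within_rate_limit(client_id: str) -> bool:
    """Sliding-window rate limiter keyed by the authenticated client ID."""
    now = time.monotonic()
    window = _request_times[client_id]
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()  # drop timestamps that fell outside the window
    if len(window) >= MAX_REQUESTS:
        return False      # over the limit; the gateway should reject
    window.append(now)
    return True
```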

2. Scan and protect at the container and microservice level

The first rule of securing microservices is to look at the microservice itself. End-to-end microservice monitoring is critical in this regard, but the container is also an ongoing locus of attack and intrusion activity. In addition to conducting periodic vulnerability and security scanning, practitioners should keep in mind that the underlying container image can itself become corrupted once infiltrated. To combat this, the container image's signature should always be verified, and the image kept out of deployment until it can be. Running unverified images should be avoided at all costs, as it can lead to inappropriate access or breaches. Automating the regular testing of containers during CI and production becomes critical when running microservices at scale.
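One way to automate those checks is a CI gate that verifies the image signature and scans it before anything deploys. The sketch below assumes two widely used open-source tools are installed, cosign for signature verification and Trivy for vulnerability scanning; the image reference and key path are placeholders:

```python
import subprocess
import sys

IMAGE = "registry.example.com/payments:1.4.2"  # placeholder image reference

def run(cmd: list[str]) -> None:
    """Run a CI check and fail the pipeline if the check fails."""
    result = subprocess.run(cmd)
    if result.returncode != 0:
        sys.exit(f"gate failed: {' '.join(cmd)}")

# Verify the image signature before anything runs it.
run(["cosign", "verify", "--key", "cosign.pub", IMAGE])

# Scan the image and fail on HIGH or CRITICAL findings.
run(["trivy", "image", "--exit-code", "1",
     "--severity", "HIGH,CRITICAL", IMAGE])
```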

Orchestration management can also help mitigate this and related problems by providing abstractions that specify the number of containers required to run a given image and the resources needed to host them. Using an orchestration manager such as Kubernetes provides a critical layer in the ideal multi-layer security model.
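For example, a Kubernetes Deployment captures exactly those abstractions: a replica count, resource bounds, and a pinned image. The following sketch uses the official Kubernetes Python client; the names, digest, and values are illustrative:

```python
from kubernetes import client

# Resource-bounded, non-root container running a digest-pinned image.
container = client.V1Container(
    name="payments",
    image="registry.example.com/payments@sha256:<digest>",  # placeholder digest
    resources=client.V1ResourceRequirements(
        requests={"cpu": "250m", "memory": "256Mi"},
        limits={"cpu": "500m", "memory": "512Mi"},
    ),
    security_context=client.V1SecurityContext(
        run_as_non_root=True,
        read_only_root_filesystem=True,
    ),
)

# The Deployment abstraction: how many copies to run, and of what.
deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="payments"),
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels={"app": "payments"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "payments"}),
            spec=client.V1PodSpec(containers=[container]),
        ),
    ),
)
```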

3. Prioritize vulnerabilities with defense-in-depth

The original concept of "defense-in-depth" grew out of the National Security Agency's (NSA) comprehensive approach to information security. With defense-in-depth, more is finally more again: the model relies on multiple layers of security controls placed at key points throughout the system, which prevents a single attack from easily penetrating and proliferating within it.

Another aspect of defense-in-depth is treating a vulnerability buried under layers of protection as if it were exposed. Redundancy measures like this prevent panic down the road by forcing more rapid action to address the issue. Such preventive security vigilance is a risk-management must-have for organizations with sensitive data or strict anti-risk policies.

It is now generally accepted in the microservices and security communities that a multi-layered, agile approach provides the best security posture for microservices, given the highly distributed nature of the environment. The evolving nature of threats that employ some degree of machine learning and AI further underscores the need for a multi-layered approach. Even without such technology, malicious actors can stake out a system over time, learn it, and then plan a sequenced line of attack. Often, security teams defending the home environment from perceived "larger threats" become desensitized to more common "low-level" threats. This conditioning can create significant downstream issues, as many sophisticated attacks enter in stages, stealthily scouting and preparing the attack pathway through modest, low-level probes.

The multi-layer approach takes into account the complexity of human intentions and seeks to expose plans to infiltrate a system. The authentication layer of defense can knock out a great number of simpler attacks as well as some more complicated infiltration schemes that revolve around prolonged spying on the system. More powerful and unconventional attacks may break through initial security layers, but successive layers at levels closer to the heart of operations can still prevent them.

While the threat "hydra" has grown new heads, a multi-layered system retains the advantage of incorporating more layers of security than it will hopefully ever need, without adding excess overhead, and with the flexibility to react to new attack patterns. Machine learning can also help overburdened teams better deal with the volume and sophistication of coordinated attackers lurking beyond the security barrier.

4. Simplify monitoring, security, and baselining with a central tool

To ensure security and integrity for the microservice and the API, practitioners should incorporate a firewall distributed across microservices with granular control. As microservices proliferate, the ability to zero in on an event and untangle or associate it with other events and known actors becomes essential and requires a centralized monitoring tool that can help connect the dots.
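Connecting the dots typically starts with a correlation ID that every service propagates and logs in a structured form the central tool can index. Below is a minimal sketch of that pattern; the header name and event fields are assumptions for illustration:

```python
import json
import logging
import uuid

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("service.events")

CORRELATION_HEADER = "X-Correlation-ID"  # illustrative header name

def handle(headers: dict, service: str, event: str) -> dict:
    """Attach or propagate a correlation ID and emit a structured event."""
    corr_id = headers.get(CORRELATION_HEADER) or str(uuid.uuid4())
    record = {"service": service, "event": event, "correlation_id": corr_id}
    # One JSON line per event lets a central tool associate related
    # events across services later.
    log.info(json.dumps(record))
    # Forward the same ID on any downstream calls.
    return {CORRELATION_HEADER: corr_id}
```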

Automatic updates represent another way that centralized monitoring can protect the DevOps environment, production, and any operations currently underway. With automated updates as part of a centralized monitoring method, fewer cracks exist in the wall. A centralized tool can also remove the need to write custom crypto code, which quickly becomes unwieldy (and risky) for many development teams.
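Where teams do need encryption in their own code, the safer path is a vetted library recipe rather than anything hand-rolled. As one hedged example, the widely used Python cryptography package provides Fernet, an authenticated-encryption recipe that bundles encryption and integrity checking:

```python
from cryptography.fernet import Fernet

# A vetted authenticated-encryption recipe instead of custom crypto.
# In practice the key comes from a secrets manager, not generated inline.
key = Fernet.generate_key()
f = Fernet(key)

token = f.encrypt(b"sensitive payload")  # ciphertext with integrity built in
assert f.decrypt(token) == b"sensitive payload"  # raises InvalidToken if tampered
```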

Managing the many facets of security can quickly grow beyond the reach of smaller teams that are new to deploying microservices at scale. Underestimating this adaptation curve, and the management complexity that comes with it, can hobble IT departments trying to build teams and manage the transition from a monolithic model to microservices.

5. Ensure security compliance of the host operating platform

Since your deployment may be either on-prem or in the cloud, it's crucial to know your host's security protocols and track record. If operating with a PaaS hosting setup, you'll want to verify that the platform takes all due precautions to ensure that other platform users, rogue applications, or a lack of due diligence cannot compromise platform security. This can get tricky, since many organizations traditionally allow users to test their own security, but not the overall security of the platform itself.

DevOps teams can use the CIS Docker Benchmark to guide developers and security teams on elements such as establishing appropriate protocols for user authentication and authorization and the setting of access roles. It can also flag gaps, such as audit logging that has not yet been enabled. Combining crash reporting with logging can dramatically expand the overall usefulness of logging in triangulating issues. Specifying access permissions on binary files is another common recommendation.
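Many of the benchmark's file-permission recommendations can be checked mechanically. The sketch below audits a couple of Docker-related paths; the paths and modes reflect common CIS guidance but should be confirmed against the benchmark version you actually use:

```python
import os
import stat

# Paths and maximum (most permissive) modes per common CIS Docker
# Benchmark guidance; confirm against your benchmark version.
CHECKS = {
    "/usr/bin/dockerd": 0o755,
    "/etc/docker/daemon.json": 0o644,
}

for path, max_mode in CHECKS.items():
    if not os.path.exists(path):
        print(f"{path}: not present, skipping")
        continue
    st = os.stat(path)
    mode = stat.S_IMODE(st.st_mode)
    owner_ok = st.st_uid == 0           # should be owned by root
    extra_bits = mode & ~max_mode       # permission bits looser than recommended
    print(f"{path}: owner_ok={owner_ok} mode={oct(mode)} ok={extra_bits == 0}")
```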

If you don't plan to host your own operations, be sure to verify that the proposed cloud host can pass the comprehensive benchmark checklist. Selecting the right host operating platform is a critical decision in ensuring DevOps security and the integrity of production.

In conclusion

Examining each of the five topic areas above and exploring their implications for your own DevOps needs can yield some surprising revelations and inspire much-needed shifts in perception across your internal computing culture. It's crucial to involve all potential stakeholders to see how their individual and collective needs can best be met by the right choices and the right centralized monitoring tools.

For more information on the security problems that arise with microservices, read 10 Ways Microservices Create New Security Challenges.