In the age of surgical robots, smart refrigerators, self-driving vehicles and unmanned aerial vehicles, connectivity is undoubtedly a foundational building block of our modern world. Connectivity not only enables easy access to resources, but it also opens up opportunities to drive innovation by connecting isolated systems. Connectivity drives digital transformation.
With connectivity so central to how we live our lives, “connected everything” becomes a target for attackers. Attackers exploit connectivity for monetary gain by orchestrating sophisticated cyber attacks. Some recent cyber attacks resulted in exposure of confidential information and loss of data and productivity, along with incidents causing direct physical damage. Securing connected systems at scale is essential for the continuity and safety of everyone today.
With the proliferation of connected systems, the number of APIs and connections between systems has exploded in the last few years. Securing connectivity requires more than just securing the microservices exposed at the edge for external consumption as APIs (aka edge connectivity). Given the complexity and ubiquity of network perimeters, securing the connectivity between applications, and the connectivity between services within applications, becomes critical for modern architectures.
Seeing all connections within applications, inside the organization and at the perimeter is the first step to understanding the potential attack surface. Observability plays an important part in understanding the scope for securing connectivity in distributed and heterogeneous environments.
Securing Connectivity at Scale
Securing connectivity at scale requires conscious effort. While there is no single prescribed approach, we share a high-level model that can be applied in a cyclical manner:
- Gain comprehensive visibility into performance, security and configurations across services, teams and environments.
- Gain consistency across distributed services through fine-grained traffic and security policies.
- Encode governance into onboarding via role-based access control (RBAC).
Done completely and repeatedly, these steps help uncover and resolve security and compliance issues faster by tracking and auditing configuration changes, and then automating them. They may also help detect potential security vulnerabilities by autonomously monitoring traffic for anomalies.
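To make the RBAC step of the model above concrete, here is a minimal sketch of role-based access control over gateway resources. The role names and actions are hypothetical, not taken from any specific product:

```python
# Minimal RBAC sketch: each role maps to the set of actions it may
# perform on gateway resources. Role and action names are illustrative.
ROLE_PERMISSIONS = {
    "api-owner": {"create_route", "update_route", "read_config"},
    "auditor": {"read_config"},
}

def is_allowed(role: str, action: str) -> bool:
    """Return True if the given role may perform the action."""
    return action in ROLE_PERMISSIONS.get(role, set())
```

Encoding governance this way means onboarding a new team is a matter of assigning roles, not hand-crafting permissions per user.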
Let us take a look at five architectural patterns that organizations can implement to achieve secure connectivity at scale.
5 Secure Connectivity Patterns to Scale
We will move from simple to more complex, with the caveat that these are just five patterns; there are many more out there.
Pattern #1: Implement an API Gateway
Risk: APIs are exposed with inconsistent policy enforcement and implementation.
We begin with a simple pattern that can be easily appreciated. We have a variety of APIs: some are secured, some are not. Even if the APIs are secured, they may be secured differently. These APIs can become attack vectors due to insufficient protection and broad exposure.
The classic mitigation to this is to standardize the application of API protection policies.
An API gateway channeling all traffic addresses this risk. Now all APIs are secured in the same manner.
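The idea can be sketched as a single front door that applies one authentication policy before routing to any backend. The backends, paths and API keys below are hypothetical:

```python
# Sketch of an API gateway: every request passes the same authentication
# check before reaching any backend. Keys and services are illustrative.
VALID_KEYS = {"key-abc"}

BACKENDS = {
    "/orders": lambda req: {"status": 200, "body": "orders service"},
    "/users": lambda req: {"status": 200, "body": "users service"},
}

def gateway(path, api_key=None):
    # Uniform policy: unauthenticated traffic is rejected for every API,
    # not just the ones whose owners remembered to add a check.
    if api_key not in VALID_KEYS:
        return {"status": 401, "body": "unauthorized"}
    handler = BACKENDS.get(path)
    if handler is None:
        return {"status": 404, "body": "not found"}
    return handler({"path": path})
```

The key property is that the policy lives in one place: adding a new backend to the routing table automatically puts it behind the same protection.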
Pattern #2: Adopt APIOps
Risk: Multiple gateways for APIs, with inconsistent policy enforcement.
Scaling up with a few gateways, each managing a set of APIs, could lead to inconsistent configurations. How do we get to a position where we are sure all teams are protecting their APIs sufficiently?
This is similar to the first pattern, except at a larger scale. The mitigation is to use APIOps to ensure the same standards and controls are applied to the different gateways. APIOps benefits go beyond security: they help speed up the developer lifecycle, addressing issues including onboarding, documentation, quality and performance.
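An APIOps pipeline typically includes an automated check that every gateway configuration meets a security baseline before it can be deployed. Here is a minimal sketch of such a check; the config shape and plugin names are illustrative, not a real gateway schema:

```python
# APIOps-style lint, as might run in CI: every declared API must carry
# a required baseline of security plugins. Names are illustrative.
REQUIRED_PLUGINS = {"key-auth", "rate-limiting"}

def lint_gateway_config(config):
    """Return a list of (api_name, missing_plugins) violations.

    An empty list means the config passes the baseline.
    """
    violations = []
    for api in config.get("apis", []):
        missing = REQUIRED_PLUGINS - set(api.get("plugins", []))
        if missing:
            violations.append((api["name"], sorted(missing)))
    return violations
```

Running a check like this against every team's gateway config in CI is how the same standards end up enforced across many gateways without manual review.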
Pattern #3: Multilayer security
Risk: Internal APIs are exposed externally with insufficient protection.
This is a slightly more complex pattern. We have an API that we secured for internal use, but as time goes on, teams change and API owners change. The API then ends up used and exposed in ways we did not anticipate, leaving it insufficiently protected in those new use cases.
The mitigation is layering API policies and enforcing the right set of policies at each layer, internal or external. This approach is efficient because it reuses existing policies rather than reapplying them ad hoc, which could lead to inconsistencies.
As a general rule, we do not want to have external consumers and internal consumers using the same APIs over the same channel. By layering gateways, we are separating the external consumers from the internal consumers.
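The separation can be sketched as two gateways in series: an edge gateway applies external policies and then forwards admitted traffic to an internal gateway, which enforces its own layer for all callers. The checks and token values below are hypothetical:

```python
# Sketch of layered gateways. The edge layer screens external traffic;
# the internal layer is enforced for everyone, including traffic that
# already passed the edge. All credentials here are illustrative.

def internal_gateway(request):
    # Internal layer: applies to internal callers and to external
    # traffic forwarded from the edge alike.
    if request.get("service_token") != "svc-token":
        return {"status": 403, "layer": "internal"}
    return {"status": 200, "layer": "internal"}

def edge_gateway(request):
    # External layer: stricter checks for traffic crossing the perimeter.
    if not request.get("api_key"):
        return {"status": 401, "layer": "edge"}
    return internal_gateway({**request, "channel": "external"})
```

External consumers only ever see the edge gateway, while internal consumers talk to the internal gateway directly, so the two audiences never share a channel.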
Pattern #4: Improve Threat Detection and Prevention
Risk: With a large volume of API consumption, static rules will have a difficult time accurately detecting malicious activity without triggering false positives.
Assuming there are numerous APIs serving large volumes of traffic, some malicious activity is likely. Scanning, or probing, is a technique used by malicious actors looking for exploits or gaps that can be collected and leveraged. With large traffic volumes, identifying such activity is akin to searching for a needle in a haystack.
It is important to have sufficient logging and traceability to look for clues of this activity. Various network logging and monitoring tools exist for this purpose. Given the large volume of data, this is a good opportunity for using artificial intelligence.
The Immunity feature in Kong operates on this premise. It uses artificial intelligence to detect anomalies such as out-of-range parameter values, unusually long latency for an API, unexpected responses and status codes, or unusual traffic load. These clues may uncover threats. Because the API requests are not denied, this is not full protection in itself; further action will need to be taken.
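To illustrate the flavor of anomaly detection described above in miniature, here is a toy latency detector that flags observations more than three standard deviations from the historical mean. Real systems use far richer models; this is only a sketch of the principle:

```python
import statistics

# Toy anomaly detector for API latency (milliseconds). Observations far
# outside the historical distribution are flagged for further review,
# not blocked — mirroring detect-then-act rather than deny-inline.

def latency_anomalies(history, observations, threshold=3.0):
    """Return observations more than `threshold` stdevs from the mean."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return [x for x in observations if abs(x - mean) > threshold * stdev]
```

In practice the "history" would be a rolling window per API and the flagged requests would feed an alerting or review workflow rather than an automatic block.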
Pattern #5: Credentials as a Service
Risk: Inconsistent authorization caused by coupling authentication with authorization.
Our final pattern, credentials as a service or CaaS, is the most complex pattern on this list. Assume there is an application that uses a mechanism to authenticate and authorize API clients and it is working well. OpenID Connect is an example of a standard that addresses both authentication and authorization.
Replicating this capability with other security mechanisms, perhaps legacy ones or a different standard altogether, may not always be possible or even supported. Persisting with this approach results in a high degree of complexity, as potentially proprietary extensions must be introduced to graft authentication and authorization onto other mechanisms. Aside from being a maintenance drain, this constitutes a security risk.
The mitigation is to separate authentication from authorization. Various mechanisms can then authenticate a client, with the resulting identity kept in a session. From there, authorization can be enforced with a policy engine such as Open Policy Agent (OPA). By separating authentication from authorization, more authentication mechanisms may be supported while keeping authorization consistent.
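The separation can be sketched as pluggable authenticators that each produce a session identity, plus a single authorization function (standing in for a policy engine such as OPA) that consults only that identity. All names, credentials and policies below are hypothetical:

```python
# Sketch of authn/authz separation. Each authenticator turns credentials
# into a session identity; authorization never sees which mechanism was
# used, so policy stays consistent. All values are illustrative.

def authenticate_api_key(credentials):
    # One of potentially many authentication mechanisms.
    if credentials.get("api_key") == "key-abc":
        return {"subject": "svc-orders", "groups": ["internal"]}
    return None

def authenticate_legacy_token(credentials):
    # A second mechanism, e.g. for a legacy system.
    if credentials.get("token") == "legacy-42":
        return {"subject": "batch-job", "groups": ["batch"]}
    return None

def authorize(session, action):
    # Central policy keyed on the session identity alone.
    policy = {"internal": {"read", "write"}, "batch": {"read"}}
    return any(action in policy.get(g, set()) for g in session["groups"])
```

Adding a new authentication mechanism is now a matter of writing one more authenticator; the authorization policy does not change at all.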
To close, here are two trends to look out for when thinking about security at scale:
- Cloud native & multi-cloud environments (Read more in this interview with Reza Shafii.)
- Zero-trust security (Watch Kong’s Destination: Zero Trust digital event on demand).