Layered Security Strategy for Managing APIs
This post is part of a series on becoming a secure API-first company. For a deeper dive, check out the eBook Leading Digital Transformation: Best Practices for Becoming a Secure API-First Company.
As APIs have become mission-critical, securing them against threats is crucial. APIs are an attractive target for attackers, and a single vulnerability can expose an organization's most sensitive information assets.
To properly secure APIs, we have to move beyond basic perimeter defenses. We’ve previously discussed how important consistent API controls across teams are for developing secure API infrastructure.
In this post, we’ll explore the concept of layered API security, outlining the key layers organizations should have in place. Employing security controls at each of these layers establishes defense in depth against various vectors of attack. We’ll overview common API threats and explain how having the right layered model can help mitigate the risks posed by these threats.
What is layered security?
When it comes to API security, the strategies are multi-layered and affect different aspects of the API journey. We can identify at least five layers of security that we need to enforce:
[Image: The multi-layered approach to API security]
As the name suggests, layered security refers to the practice of implementing multiple levels of cybersecurity to handle a multitude of potential network attacks.
Think of a company like an onion. At the center are the company’s valuable information and stability. To get to the center of the onion, the external layers would need to be peeled back one by one. The more layers surrounding the center, the harder it is to reach. These onion layers represent the multiple levels of cybersecurity. With strong enough layered security in place, a company's information and inner workings become extremely difficult for attackers to reach.
Layered security for threat protection
What sort of threats does a layered security approach help protect against?
- DDoS attacks — A DDoS (distributed denial of service) attack aims to overwhelm APIs and make them unavailable. A layered defense can use tools like web application firewalls, rate limiting, and load balancing to mitigate these attacks.
- Data breaches — APIs can expose sensitive data that needs to be protected. A layered approach applies controls like authentication, authorization, and encryption to safeguard data.
- Malware infections — Malicious actors may try to exploit APIs to plant malware. A layered security strategy scans for vulnerabilities, monitors for anomalies, and isolates compromised components.
- Injections — Attackers may inject malicious code or commands into API calls. Input validation, sanitization, and sandboxing can help prevent successful injections.
- Broken authentication — Flaws in authentication logic can enable account takeover or credential stuffing. A layered model enforces strong, multi-factor authentication across all API access points.
- Excessive usage — Unmetered API usage can facilitate denial of service and brute force attacks. Applying rate limiting protects against excessive calls.
A layered API security strategy combines multiple controls at different levels to provide defense in depth against common API threat vectors. Continue on for a deeper dive into each of these.
Potential threats protected against with layered security
Now that we’ve established the solution, let’s go over what specifically we’re protecting against. Here’s a closer look at the major threats and how layered security helps mitigate them:
- DDoS attacks flood APIs with an overwhelming amount of internet traffic. Under that load, the APIs can no longer serve legitimate requests. To defend against this form of attack, a layered security system can combine multiple tools such as web application firewalls, rate limiting, and load balancing to mitigate the impact these attacks may have.
- APIs carry a great deal of responsibility for moving data between applications, and sometimes valuable information can slip through the cracks. This is known as a data breach: a person’s or company’s data is leaked or taken without consent. From a layered security perspective, the answer is to apply controls such as authentication, authorization, and encryption to safeguard data.
- Malware is one of the most common forms of malicious software and can infect a multitude of technologies. Malicious actors might try to exploit any weaknesses in APIs to plant malware. If not detected early on, it can be used to extract information or give access to people who aren’t supposed to be in the system. To prevent this, a layer of security can scan the API system for vulnerabilities, monitor for anomalies compared to normal usage traffic, and isolate compromised components.
- Related to malware infections are injection attacks, in which attackers inject malicious code or commands into API calls (SQL injection is a classic example). To prevent this form of attack, security measures such as input validation, sanitization, and sandboxing can be layered among other protocols (see the sketch after this list).
- Authentication itself can become an attack surface. Flaws in authentication logic, known as broken authentication, enable attackers to take over accounts or run credential-stuffing campaigns. Additional layers can monitor authentication flows and enforce strong multi-factor authentication to make sure they are working properly.
- Even excessive usage of your APIs can lead to potential attacks. Unmetered API usage can facilitate denial of service and brute force attacks. To solve this issue, apply rate limits to protect against excessive calls.
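To make the injection item above more concrete, here is a minimal sketch of allowlist-based input validation in Python. The parameter name and pattern are hypothetical, and real deployments typically combine schema validation, parameterized queries, and gateway-level checks rather than relying on a single check like this.

```python
import re

# Hypothetical allowlist for a single API parameter. Anything that doesn't
# match the strict pattern is rejected before it can reach downstream systems.
USERNAME_PATTERN = re.compile(r"^[A-Za-z0-9_.-]{3,32}$")

def validate_username(raw: str) -> str:
    """Accept only values matching the allowlist; reject everything else."""
    if not USERNAME_PATTERN.fullmatch(raw):
        raise ValueError("invalid username parameter")
    return raw

print(validate_username("alice_01"))  # passes validation

try:
    validate_username("alice'; DROP TABLE users;--")  # injection attempt
except ValueError:
    print("rejected before reaching any database query")
```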
These are a handful of the most common API security issues, but attackers are always finding ways to get the information they want. Security best practice is to implement a layered security system preemptively.
Low-level network security (L3/L4)
The first level of security is known as low-level network security at L3/L4.
This base level is especially important for edge APIs, which are meant to be consumed by third parties outside of the organization. The goal is to inspect active traffic flows using features such as inbound encrypted traffic inspection, stateful inspection, and protocol detection. We can also deploy outbound traffic filtering to prevent data loss, help meet compliance requirements, and block known malware communications. In short, we want to ensure that incoming traffic is legitimate before even thinking about moving to the next layer of security.
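Purely as a conceptual illustration (real L3/L4 controls live in firewalls, security groups, and dedicated network appliances rather than application code), here is a toy Python sketch of the kind of source-address filtering this layer performs; the networks and addresses are made up.

```python
import ipaddress

# Hypothetical allow/deny lists, for illustration only.
ALLOWED_NETWORKS = [
    ipaddress.ip_network("10.0.0.0/8"),       # internal services
    ipaddress.ip_network("203.0.113.0/24"),   # a trusted partner range
]
BLOCKED_ADDRESSES = {ipaddress.ip_address("203.0.113.66")}  # known-bad host

def is_connection_allowed(source_ip: str) -> bool:
    """Toy network-layer check: drop known-bad hosts, then require an allowed network."""
    addr = ipaddress.ip_address(source_ip)
    if addr in BLOCKED_ADDRESSES:
        return False
    return any(addr in net for net in ALLOWED_NETWORKS)

print(is_connection_allowed("10.1.2.3"))      # True: internal traffic
print(is_connection_allowed("203.0.113.66"))  # False: explicitly blocked
print(is_connection_allowed("198.51.100.9"))  # False: not on any allowed network
```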
While these strategies are required for external API traffic, forward-thinking organizations also apply tight security at this layer to internal traffic, to guard against internal malicious actors and bots that could affect the systems. For leaders in API security, this is an important distinction: we’re protecting ourselves from both external and internal threats. Skipping the latter creates a vulnerability, since an attacker may already be running malicious software internally via backdoors outside of the API perimeter, and that software can then affect the API infrastructure itself. Internal software is not to be trusted, which brings us to the next layer of security.
Zero trust security framework
The second layer of security is implementing the concept of zero trust: removing the concept of trust from our applications and services. We can’t trust that a client is who it claims to be, whether that client is internal or external.
Let’s give a practical example: when traveling to a foreign country, we typically need to carry a passport to prove our identity at the border. The passport is a document that validates that we are who we claim to be. Without passports, immigration agents would have no way of knowing whether our identity is legitimate; they would have to take our word for it. How long would it take for malicious actors to exploit this system and start impersonating other people? How many criminals would simply walk into the country with no way to stop them?
The concept of “trust” is fundamentally exploitable — we can’t rely on it. We need to have a practical way to determine the identity of people, hence we need a passport that removes the concept of trust from our immigration process. We need zero trust.
Our APIs are like borders with no immigration control: anyone and anything can make requests to them and easily spoof the identity of other clients to perform malicious operations. In order to secure our “API borders” we need to implement a zero-trust solution that validates the identity of every client. That passport can be an mTLS certificate issued to each instance of any service and validated upon every request.
Associating TLS identities with our requests allows us to implement zero trust and tight permission controls. Managing so many TLS certificates is challenging, but a service mesh makes it easy.
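To show what this looks like from a service’s point of view, here is a minimal sketch of a client making an mTLS call using Python’s requests library. The certificate file names and URL are hypothetical; in practice a service mesh issues and rotates these credentials automatically.

```python
import requests  # third-party 'requests' package

# Hypothetical credentials issued to this service instance. In a mesh,
# issuance, rotation, and revocation happen automatically.
CLIENT_CERT = ("service-a.crt", "service-a.key")  # this service's identity
INTERNAL_CA = "internal-ca.pem"                   # CA that signs workload certificates

# The client proves its identity with its certificate, and only trusts
# servers whose certificates chain back to the internal CA.
response = requests.get(
    "https://payments.internal.example/v1/charges",
    cert=CLIENT_CERT,
    verify=INTERNAL_CA,
    timeout=5,
)
response.raise_for_status()
print(response.status_code)
```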
Service mesh architecture
Implementing a zero-trust architecture across every environment that we use to deploy our applications and microservices could be a complex task. It would involve supporting enforcement across all runtimes and issuing, rotating, and revoking certificates for each service instance — potentially hundreds of thousands or even millions.
Thankfully, implementing a service mesh can help us with this endeavor: the teams become users of zero trust rather than its builders.
A service mesh allows us to manage the entire certificate lifecycle (issuance, rotation, revocation) in an automated way. During this process, the enforcement is delegated to a sidecar proxy transparently running next to our services.
Kong’s solution
Kong Mesh extends CNCF’s Kuma and Envoy and allows you to implement an enterprise zero-trust solution across the organization in days instead of years. It handles the whole certificate lifecycle across both containers and legacy VMs, across clouds, and even private data centers.
API authentication and authorization
After we’ve validated the identity of the services consuming our APIs, we need to further identify the user trying to perform the request via AuthN/Z strategies, such as validating an API key or integrating with a third-party OpenID Connect (OIDC) or OAuth provider to determine both the user’s identity and their permission level (entitlements), which governs what operations the user can perform on the API. We need this level of security across every API that our teams are building, so it’s smart to centralize how these policies are enforced rather than fragmenting enforcement across the board, which creates an enormous security risk.
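To make the AuthN side concrete, here is a minimal sketch of validating an OIDC access token with the PyJWT library. The issuer, audience, and key endpoint are hypothetical, and the sketch is independent of any particular gateway or plugin.

```python
import jwt                      # PyJWT (with its 'cryptography' extra installed)
from jwt import PyJWKClient

# Hypothetical OIDC provider details.
ISSUER = "https://idp.example.com"
AUDIENCE = "orders-api"
jwks_client = PyJWKClient(f"{ISSUER}/.well-known/jwks.json")

def authenticate(token: str) -> dict:
    """Validate the token's signature, issuer, audience, and expiry,
    then return its claims (the caller's identity and entitlements)."""
    signing_key = jwks_client.get_signing_key_from_jwt(token)
    return jwt.decode(
        token,
        signing_key.key,
        algorithms=["RS256"],
        audience=AUDIENCE,
        issuer=ISSUER,
    )
```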
Kong provides a range of AuthN/Z plugins to seamlessly provide authentication and authorization out of the box to every edge or internal API.
After validating the user, and therefore the legitimacy of the incoming request, we may still want to limit the traffic they’re sending to the API. In a way, this type of control resembles the first layer of security. We may want to implement rate-limiting or throttling strategies that restrict access to the APIs. Doing this helps prevent issues like cascading failures when too much traffic is sent, and it also lets us create tiers of consumption for our APIs that we can charge for as an additional revenue stream.
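As a rough sketch of the underlying idea (production rate limiting normally runs in the gateway against a shared store such as Redis rather than in application memory), here is a simple per-consumer token bucket in Python; the rate and burst values are arbitrary.

```python
import time
from collections import defaultdict

RATE = 10    # tokens replenished per second (arbitrary example values)
BURST = 20   # maximum bucket size, i.e., the largest allowed burst

# Each consumer gets a bucket: (available tokens, timestamp of last update).
_buckets: dict[str, tuple[float, float]] = defaultdict(
    lambda: (float(BURST), time.monotonic())
)

def allow_request(consumer_id: str) -> bool:
    """Return True if the consumer still has budget, False if it should get HTTP 429."""
    tokens, last = _buckets[consumer_id]
    now = time.monotonic()
    tokens = min(BURST, tokens + (now - last) * RATE)  # refill since last request
    if tokens < 1:
        _buckets[consumer_id] = (tokens, now)
        return False
    _buckets[consumer_id] = (tokens - 1, now)
    return True

print(sum(allow_request("partner-42") for _ in range(25)))  # roughly 20 allowed, 5 rejected
```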
Kong’s solution
Kong provides sophisticated traffic control capabilities both at the L4 mesh layer and at the L7 API gateway layer, enforcing them with high performance and low latency to preserve the quality of the end-user experience.
API monitoring and analytics
Finally, even when all traffic has been validated through each of the previous security layers, the most sophisticated enterprise organizations will still build API traffic models to determine whether known users are executing unexpected operations. This may happen when a known user turns malicious or when malicious actors come to possess a known user’s credentials.
Ultimately, our API users are also susceptible to attacks, and their credentials may be compromised without us knowing it. To mitigate this risk, we may want to adopt robust API monitoring and analytics combined with asynchronous ML capabilities that build a model of our traffic for each client and user. Again, these types of enforcement are hard to build on a per-team basis and must be provided by the platform team in charge of building the mission-critical (and security-approved) API infrastructure for the organization.
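A full traffic model usually involves ML pipelines running asynchronously over gateway analytics, but the core idea can be sketched with a simple per-client baseline check; the numbers below are made up.

```python
from statistics import mean, stdev

def is_anomalous(history: list[int], current: int, threshold: float = 3.0) -> bool:
    """Flag the latest per-minute request count if it deviates sharply
    from this client's own historical baseline (a simple z-score test)."""
    if len(history) < 10:   # not enough data to establish a baseline yet
        return False
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return current != mu
    return abs(current - mu) / sigma > threshold

# Example: a consumer that normally sends ~100 requests/minute suddenly sends 900.
baseline = [95, 102, 98, 110, 99, 101, 97, 105, 100, 103]
print(is_anomalous(baseline, 900))  # True -> raise an alert for the security team
print(is_anomalous(baseline, 104))  # False -> within the normal pattern
```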
Policy enforcement
Lastly, our API policies are ultimately configured and set up by employees working for our organization. No layer of security is immune to human misconfiguration, whether accidental or intentionally malicious. Every organization must enforce a policy approval workflow (a standardized approach to API governance) that requires the approval of at least one other person in the organization, working on a different team.
Layered security is only as strong as its policy enforcement
Policy enforcement also encompasses many layers:
- Validating that required policies are being applied
- Validating that policies aren’t misconfigured
- Validating that policies aren’t malicious or causing unexpected results
Like the other topics we’ve covered, policy enforcement is also a cross-cutting concern that affects every team in the organization. To make our teams more productive, we should provide them with an out-of-the-box solution that they can start using right away — one that is validated by both the platform and security teams of the organization.
Policy enforcement and control is compliance. As such, it’s mission-critical. It can’t be decomposed into a fragmented solution that individual teams are responsible for. Otherwise, we would have no visibility into the quality and validity of our API infrastructure policies, and we would be showing a lack of corporate responsibility.
We can further limit the margin of error by providing global policies that can’t be changed by the teams and are always applied, like security policies.
An example workflow could be:
- A team proposes a policy change, for example via a pull request in a GitOps repository.
- Automated checks validate that the required policies are present and not misconfigured.
- A reviewer from a different team, such as the platform or security team, approves the change.
- The approved policy is applied through the platform, with global security policies still enforced on top.
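To illustrate the automated-checks step, here is a small Python sketch that lints a declarative API configuration for required policies. The configuration shape and policy names are invented for the example and are not Kong’s schema.

```python
# Policies that global governance says every API must carry (example values).
REQUIRED_POLICIES = {"authentication", "rate-limiting"}

def lint_api_config(config: dict) -> list[str]:
    """Return human-readable findings for missing or misconfigured policies."""
    findings = []
    applied = {policy["name"] for policy in config.get("policies", [])}
    for missing in sorted(REQUIRED_POLICIES - applied):
        findings.append(f"missing required policy: {missing}")
    for policy in config.get("policies", []):
        if policy["name"] == "rate-limiting" and policy.get("limit", 0) <= 0:
            findings.append("rate-limiting is applied but effectively disabled")
    return findings

api = {"name": "orders", "policies": [{"name": "rate-limiting", "limit": 0}]}
print(lint_api_config(api))
# ['missing required policy: authentication', 'rate-limiting is applied but effectively disabled']
```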
Kong’s solution
Kong provides API controls for policy enforcement and linting throughout Kong Konnect that can be integrated via APIs and GitOps to any existing solution your organization may be adopting.
Conclusion
APIs have become crucial gateways to data and services — and they have a massive economic impact, with APIs projected to contribute $14.2 trillion to the global economy by 2027. The proliferation of APIs demands more advanced, multi-layered security strategies.
Adopting a layered security model is the most effective way to safeguard APIs. It provides overlapping security that protects against common attack vectors like DDoS, injections, data theft, and more.
Between 2021 and 2030, the number of API attacks is projected to surge by 996%. And attacks are increasing not just in frequency but in cost. As threats escalate, security standards must follow suit. A layered strategy delivers the intelligent, adaptive defense today's APIs demand.
To learn more about security in an API-first world, check out the Kong eBook Leading Digital Transformation: Best Practices for Becoming a Secure API-First Company.