APIs are pivotal in the information economy, enabling millions of applications to communicate with one another seamlessly. Thus came the need for the API gateway, middleware that mediates requests between API consumers and upstream services.
An API gateway provides routing, traffic control, and security capabilities that would otherwise be the responsibility of API consumers and upstream APIs. Additionally, the gateway becomes a rich source of operational metrics useful for analytics, usage statistics, and alerting.
However, because it accepts all incoming requests, an API gateway is also a prime target for malicious attacks. That’s why organizations need an API security architecture that can dynamically differentiate between legitimate requests and security threats.
This article will introduce the fundamental concepts and technologies necessary to secure an API gateway. It will also introduce the concept of zero-trust and discuss how an API gateway can help.
Why API Security?
Today’s distributed applications often involve interactions between thousands of microservices deployed across on-premise and cloud environments. Generally speaking, an API gateway authenticates requests, checks their access level and quality of service, and routes them to the appropriate service. Although the network and infrastructure underlying individual services may have their own security mechanisms, the API gateway is the first line of defense.
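That authenticate-authorize-route flow can be sketched as a simple request pipeline. This is a minimal illustration only, not how any particular gateway is implemented; the token store, permission map, and upstream table below are all hypothetical:

```python
# Minimal sketch of a gateway request pipeline: authenticate, authorize, route.
# The token store, permission map, and upstream table are hypothetical examples.

VALID_TOKENS = {"token-abc": "billing-client"}            # token -> client id
PERMISSIONS = {"billing-client": {"/billing"}}            # client -> allowed path prefixes
UPSTREAMS = {"/billing": "http://billing.internal:8080"}  # path prefix -> upstream service

def handle(request):
    # 1. Authenticate: who is making this request?
    client = VALID_TOKENS.get(request.get("token"))
    if client is None:
        return {"status": 401, "body": "unauthenticated"}
    # 2. Authorize: is this client allowed to reach this path?
    prefix = "/" + request["path"].lstrip("/").split("/", 1)[0]
    if prefix not in PERMISSIONS.get(client, set()):
        return {"status": 403, "body": "forbidden"}
    # 3. Route: which upstream service should receive it?
    upstream = UPSTREAMS.get(prefix)
    if upstream is None:
        return {"status": 404, "body": "no route"}
    # A real gateway would now proxy the request to `upstream`.
    return {"status": 200, "routed_to": upstream}
```

The point of the sketch is the ordering: a request is rejected as early as possible, before it ever touches an upstream service.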
Conventionally, security was handled by static access control lists (ACLs) enforced by firewalls. Users were granted access through on-premise networks or a VPN. That approach is too simplistic for today’s applications, which can span geographical and organizational boundaries. In addition, vulnerability exploits have become more sophisticated, remote workforces are growing, and attacks like DDoS or SQL injection don’t depend on port-based access.
While APIs are now recognized as major attack vectors, older single-point controls like network ACLs are also insufficient, because malicious actors, whether “script kiddies” or state-sponsored Advanced Persistent Threats, can move laterally between backend systems once they’re through the gateway. This is not just theory; numerous large-scale incidents have been caused by insecure APIs, including:
- The breach of T-Mobile’s 2 million customers’ personal data in 2018
- The exposure of 50 million Facebook users’ data in 2018
- The Microsoft Exchange Server hacks disclosed in early 2021
The security of an API-centric architecture is paramount. So, how do you secure your API gateway?
Components of an API Security Model
At a high level, an API security model is made up of three components:
- Authentication: validates the identity of the API requestor
- Authorization: validates and enforces the client’s permissions to access the API
- Threat Prevention: takes necessary measures to defend against DDoS attacks, injections, or other external threats
A secure API gateway architecture handles these requirements through several interlocking technologies, and the specific choices depend on the integration requirements of a given scenario. Authentication and authorization controls are applied per service entity and mapped to the upstream services they represent, so credentials are validated only for the specific upstream services a client is entitled to reach. This enables very fine-grained permission controls.
Best Practices for Securing API Gateways
Let’s consider several best practices for securing our APIs and API gateways.
Secure communication over HTTPS
One of the first measures you can take to secure your APIs is to encrypt all client communications with HTTPS. Additionally, rotate your SSL certificates regularly and use separate SSL certificates for different environments of the same application.
Rate limiting
API rate limiting defends against floods of API requests that overwhelm upstream services, a typical pattern of DDoS attacks. With rate limiting, the API gateway accepts only a set number of client requests within a given time interval. Throttling, a form of rate limiting, reduces bandwidth or terminates client sessions in the event of overload. Size limiting is another option, in which the API gateway blocks client request payloads larger than a specified size.
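As a rough illustration of the idea (not the algorithm any particular gateway uses), here is a fixed-window rate limiter: each client gets a counter that resets every window, and requests beyond the limit are rejected:

```python
import time
from collections import defaultdict

class FixedWindowRateLimiter:
    """Allow at most `limit` requests per client per `window` seconds."""

    def __init__(self, limit, window=60.0):
        self.limit = limit
        self.window = window
        # client id -> [request count, window start time]
        self.counters = defaultdict(lambda: [0, 0.0])

    def allow(self, client_id, now=None):
        now = time.monotonic() if now is None else now
        count, start = self.counters[client_id]
        if now - start >= self.window:
            count, start = 0, now            # a new window begins: reset the counter
        if count >= self.limit:
            return False                     # over the limit: reject (e.g. HTTP 429)
        self.counters[client_id] = [count + 1, start]
        return True
```

Production gateways usually refine this with sliding windows or token buckets to avoid bursts at window boundaries, but the accept/reject decision is the same shape.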
Authentication and authorization
Privileged content should always be protected by secure API authentication and authorization. Because different APIs accept different types of credentials for granting access, an API gateway should support the majority of these types, including:
- Basic Auth
- API key authentication
- … and more.
In addition, because of the prevalence of third-party identity providers, an API gateway should also support the wide variety of standard protocols used with these providers, including:
- OpenID Connect
- OAuth 2.0
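To make the first two credential types concrete, here is a minimal sketch of how a gateway might parse a Basic Auth header (per RFC 7617) and check an API key. The key store is a hypothetical example, and a real gateway would hash stored keys rather than keep them in plaintext:

```python
import base64
import hmac

API_KEYS = {"k-123": "analytics-client"}  # hypothetical key -> client mapping

def parse_basic_auth(header):
    """Extract (user, password) from an 'Authorization: Basic <base64>' header."""
    scheme, _, encoded = header.partition(" ")
    if scheme.lower() != "basic" or not encoded:
        return None
    try:
        user, _, password = base64.b64decode(encoded).decode().partition(":")
    except Exception:
        return None  # malformed base64 or non-UTF-8 payload
    return user, password

def check_api_key(candidate):
    """Compare in constant time (hmac.compare_digest) to avoid timing leaks."""
    for key, client in API_KEYS.items():
        if hmac.compare_digest(key, candidate):
            return client
    return None
```

The `hmac.compare_digest` detail matters even in a sketch: a naive `==` comparison can leak key prefixes through response timing.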
Another security measure is to validate inputs, using regular-expression checks to flag suspicious entries in client requests. Of course, upstream APIs should also perform their own security checks and input validation. As a best practice, the development team should regularly audit and monitor API code, ensuring APIs use up-to-date libraries and follow secure coding practices.
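A minimal sketch of such a check follows. The three patterns are illustrative only; real gateways and WAFs rely on much larger, curated rule sets, and pattern blocklists complement, never replace, proper parameterized queries and output encoding:

```python
import re

# Illustrative patterns only; production rule sets are far more extensive.
SUSPICIOUS = [
    re.compile(r"(?i)\b(union\s+select|drop\s+table)\b"),  # common SQL injection probes
    re.compile(r"(?i)<\s*script\b"),                       # naive XSS probe
    re.compile(r"\.\./"),                                  # path traversal attempt
]

def looks_suspicious(value):
    """Return True if any known-bad pattern appears in the input string."""
    return any(p.search(value) for p in SUSPICIOUS)
```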
Monitoring and analytics
Monitoring your APIs provides you with a constant pulse on the health of each service, and it provides visibility into the potential threats or issues your services are currently facing. An API gateway centralizes the task of aggregating metrics and logs.
Metrics related to requests and traffic can be captured centrally by the API gateway. Logging also helps keep an audit trail of all client access requests. Together, this aggregated and centralized data can be exported to Security Information and Event Management (SIEM) tools for analysis, visualization, and alerting.
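Most SIEM tools ingest structured logs, typically one JSON object per line. A sketch of the kind of access-log record a gateway might emit is shown below; the field names are illustrative, not any vendor’s schema:

```python
import json
from datetime import datetime, timezone

def access_log_record(client_ip, method, path, status, latency_ms, consumer=None):
    """Build one JSON-lines access-log entry for SIEM ingestion (illustrative fields)."""
    return json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),  # when the request was served
        "client_ip": client_ip,                        # source address for correlation
        "method": method,
        "path": path,
        "status": status,                              # HTTP status returned
        "latency_ms": latency_ms,                      # useful for anomaly detection
        "consumer": consumer,                          # authenticated client id, if any
    })
```

Keeping records structured like this is what lets a SIEM correlate, say, a burst of 401s from one IP across every service behind the gateway.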
An API gateway armed with monitoring tools can identify if and when an attack happens, what IPs are involved, and if internal IPs were used for launching attacks.
Leverage serverless functions
Serverless functions, such as AWS Lambda, let you run code snippets in a cloud vendor’s managed, secured computing environment. Serverless functions run code in response to events or HTTP requests.
Once the function finishes, its ephemeral computing infrastructure is destroyed. From a security perspective, this removes long-lived backend servers as potential attack targets. The client only ever has access to the API gateway that sits in front of the functions.
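For a sense of scale, an entire serverless endpoint can be a single handler function. The sketch below loosely follows the request/response shape of an HTTP-triggered function (a `statusCode` and `body` in the return value, as in AWS API Gateway’s proxy integration), simplified for illustration:

```python
import json

def handler(event, context=None):
    """Minimal HTTP-triggered serverless handler (simplified event shape)."""
    # Query parameters may be absent entirely, so default to an empty dict.
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```

With the gateway in front, this handler is the only application code exposed, and it exists only for the duration of each invocation.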
Monitoring API Security Using SIEM
We have already touched on the necessity of keeping API access logs. SIEM software aggregates logs from multiple sources, such as WAFs, anti-virus tools, networks, servers and API gateways, into one place. It correlates and analyzes those logs, providing a holistic view of your overall security posture. SIEM tools can surface anomalies, threats and attack trends from your API logs, making them an essential part of a secure API gateway architecture.
Additionally, Security Orchestration, Automation and Response (SOAR) is an emerging security technology that goes one step further by automatically applying remediation steps to the anomalies and threats detected. SOAR makes extensive use of playbooks that orchestrate and automate security event detection and response.
The zero-trust model
The principle behind zero-trust is simple: As trust can be exploited, it should never be assumed. Accordingly, this model operates on the service request level, not on the personal ID or account level. It assigns an identity to every service instance for each request.
So, rather than granting access on a system or server level, it’s negotiated one object or service instance at a time. Services are closed by default and can be accessed only by the provision of appropriate credentials.
Zero-trust makes use of mutual TLS (mTLS) for authentication. In an mTLS handshake, each party presents an X.509 certificate and proves possession of the corresponding private key, so the identities at both ends of a connection are verified every time a connection is made. It is like a virtual passport with checkpoints at the individual service level instead of only at the network entry point.
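In practice, the difference between plain TLS and mTLS on the server side is often a single setting: requiring a client certificate. A sketch using Python’s standard `ssl` module shows the shape (the certificate file paths are placeholders you would supply):

```python
import ssl

def build_mtls_server_context(ca_file=None, cert_file=None, key_file=None):
    """Server-side TLS context that refuses clients without a valid certificate."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.verify_mode = ssl.CERT_REQUIRED  # this one line turns plain TLS into mTLS
    if cert_file:
        ctx.load_cert_chain(cert_file, key_file)  # the server's own identity
    if ca_file:
        ctx.load_verify_locations(ca_file)        # CA that signed the client certs
    return ctx
```

In a service mesh, sidecar proxies build the equivalent of this context for every service automatically, along with issuing and rotating the certificates themselves.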
Implementing the zero-trust model is where the service mesh comes into play, and it greatly streamlines administration alongside an API gateway. A service mesh is a dedicated infrastructure layer that handles communication between services or microservices through sidecar proxies, governed by a control plane.
Sidecar proxies operate on the data plane and handle the exchanges between microservices. Because the data plane sits on the execution path of service traffic, sidecar proxies can provide observability, health checks, routing, security and load balancing capabilities. They are managed from the control plane, which aggregates configuration (grouped by service or other properties) and pushes it down to the data plane as policy; the control plane itself is not on the execution path.
To learn more about how service meshes can be used for implementing zero-trust, you can refer to this eBook.
APIs will remain a central feature of the digital economy for the foreseeable future. They are essential for today’s complex, distributed applications, and with so many moving parts, they are an obvious target for attack.
Securing your API gateway depends on an active implementation of best practices while leveraging tools like SIEM and SOAR and security models like zero-trust.
Kong’s API gateway can handle the most demanding needs of today’s microservice-based applications, and Kuma is an enterprise-grade service mesh built on top of Envoy. The security plugins built into the Kong ecosystem let you apply the best practices discussed above to secure both. To learn more, contact us today for a personalized demo.