How to Protect APIs with Consumer-Based Rate Limiting
In this API Summit 2024 session, SeatGeek walks through their journey implementing consumer-based rate limiting with Kong Gateway and Kong Ingress Controller to protect APIs at scale in Kubernetes.
Scaling a high-demand ticketing platform
SeatGeek is a leading online ticketing platform that serves a wide range of customers including fans, brokers, and rights holders across the live entertainment industry. As the company continues to grow and expand its business partnerships, maintaining the performance and reliability of its backend services has become a top priority. Josh Woodward, Senior Platform Engineer at SeatGeek, has been spearheading efforts to strengthen platform resilience. With Kong Gateway at the center of their API management infrastructure, Josh’s team sought to address a persistent issue tied to broker-integrated services: unpredictable traffic surges that threatened system stability.
Preventing “noisy neighbors” from disrupting API performance
SeatGeek provides brokers with a data service for ticket listings, and many partners consume this service using high-frequency, automated systems. Even well-intentioned partners can generate massive traffic spikes that overwhelm backend services. This “noisy neighbor” problem risked degrading the platform’s overall performance, not just for the partner causing the spike but for everyone else as well. SeatGeek needed a rate limiting strategy that could protect backend services from such overloads while accommodating the unique needs of thousands of integration partners. At the same time, the solution needed to support granular control, allow exceptions for high-priority partners, and integrate cleanly with SeatGeek’s Kubernetes-native infrastructure.
Consumer-based rate limiting built for Kubernetes scale
SeatGeek implemented Kong Gateway with the Rate Limiting Advanced plugin, deploying it via Kong Ingress Controller (KIC) within Kubernetes. This setup allowed the team to enforce default request limits—such as one request per second—while still offering flexibility to grant higher limits to specific partners. Rather than rely on IP addresses or headers to identify callers, the team defined known partners as Kong Consumers, each with key-based credentials. This approach gave them better traceability and control over traffic sources.
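A minimal sketch of what this setup can look like with KIC's custom resources. The resource names, namespace-less metadata, and limit values here are illustrative assumptions, not SeatGeek's actual manifests:

```yaml
# Default policy: 1 request per second, counted per Kong consumer
# rather than per IP address or header.
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: default-rate-limit
plugin: rate-limiting-advanced
config:
  identifier: consumer
  limit:
    - 1
  window_size:
    - 1
  strategy: local   # local counters; no shared datastore sync
  sync_rate: -1
---
# A known partner, modeled as a Kong Consumer with a key-auth credential.
apiVersion: configuration.konghq.com/v1
kind: KongConsumer
metadata:
  name: example-partner
  annotations:
    kubernetes.io/ingress.class: kong
username: example-partner
credentials:
  - example-partner-key
---
apiVersion: v1
kind: Secret
metadata:
  name: example-partner-key
  labels:
    konghq.com/credential: key-auth
stringData:
  key: example-partner-api-key   # placeholder value
```

The default plugin would typically be attached to a Service or Ingress via the `konghq.com/plugins` annotation; a trusted partner can then be given a second, higher-limit KongPlugin scoped to their consumer, which Kong applies in preference to the default.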
To manage the scale of thousands of partners, SeatGeek built a custom controller—a Kubernetes pod that continuously syncs external data sources with the Kubernetes state. This eliminated the need to manually update manifests or store sensitive tokens in plain text. The team also used this system to associate traffic with the correct Kong Consumer, enabling consistent rate limiting enforcement.
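The heart of such a controller is a reconcile loop that diffs the external source of truth against what currently exists in the cluster. The sketch below is a hypothetical simplification, since SeatGeek's controller is not public; the `Partner` shape and function names are illustrative:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Partner:
    """One entry from the external partner registry (illustrative shape)."""
    username: str
    api_key: str


def reconcile(desired: list[Partner], existing: set[str]) -> tuple[list[Partner], set[str]]:
    """Diff the external registry against cluster state.

    Returns (to_create, to_delete): partners needing a KongConsumer and
    credential Secret created, and usernames whose objects should be
    removed because the partner no longer exists upstream.
    """
    desired_names = {p.username for p in desired}
    to_create = [p for p in desired if p.username not in existing]
    to_delete = existing - desired_names
    return to_create, to_delete


# On each loop iteration the controller would fetch both sides, then apply
# the diff via the Kubernetes API: one Secret (key-auth credential) plus one
# KongConsumer per new partner, so tokens never sit in hand-edited manifests.
```

Because the loop converges on whatever the registry says, credential rotation and partner offboarding become registry updates rather than manual manifest changes.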
Before rolling out configuration changes, the team tested their setup in staging environments using load testing tools. This helped them validate the behavior of rate limiting policies and catch unintended issues before they affected production systems. For new services, SeatGeek implemented low default rate limits from the start, allowing budgets to grow over time. For existing services, they began by profiling traffic patterns and applying guardrails gradually to avoid disrupting long-standing integrations.
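One way to validate a policy like the one-request-per-second default before rollout is to model the window arithmetic directly. The fixed-window simulation below is an assumption made for illustration (the Rate Limiting Advanced plugin also supports sliding windows), but it shows the behavior a load test should confirm:

```python
import math
from collections import defaultdict


class FixedWindowLimiter:
    """Simulate a per-consumer fixed-window limit, e.g. 1 request/second."""

    def __init__(self, limit: int, window_size_s: float) -> None:
        self.limit = limit
        self.window_size_s = window_size_s
        # (consumer, window index) -> request count in that window
        self.counters: dict[tuple[str, int], int] = defaultdict(int)

    def allow(self, consumer: str, now_s: float) -> bool:
        window = math.floor(now_s / self.window_size_s)
        self.counters[(consumer, window)] += 1
        return self.counters[(consumer, window)] <= self.limit


limiter = FixedWindowLimiter(limit=1, window_size_s=1.0)

# Five requests from one consumer inside the same second: only the first passes.
results = [limiter.allow("partner-a", 0.1 * i) for i in range(5)]

# A request in the next one-second window is allowed again.
next_window_ok = limiter.allow("partner-a", 1.2)
```

Comparing a load tool's observed 429 counts in staging against a model like this makes it easy to spot a misconfigured limit or window size before it reaches production.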
Smoother traffic, stronger partnerships
This new approach significantly improved platform stability by flattening traffic spikes and reducing the load on backend services. The team no longer had to scale infrastructure just to accommodate unpredictable surges. Trusted partners who needed higher throughput could be granted custom rate limits without compromising the experience for other users. The custom controller allowed SeatGeek to manage thousands of consumers automatically, reducing operational overhead while maintaining accuracy and security. Testing configurations before rollout prevented service disruptions, and the use of Kubernetes manifests enabled developers to manage their own rate limiting policies safely and autonomously.
Josh emphasized that rate limiting at the gateway layer brought major benefits. It offered centralized control without burdening application code, aligned with platform engineering best practices, and empowered application teams to manage their own traffic policies.
“Rate limiting in the gateway layer is a really nice win. It’s centralized, Kubernetes-native, and not an application concern—but still gives teams the autonomy they need.”
Thanks to Kong Gateway and a thoughtfully designed Kubernetes integration, SeatGeek now delivers a more reliable and scalable API experience for its partners, without sacrificing agility or customer trust.