The plugin can be applied at different levels: service, route, specific consumer, or even globally. This flexibility lets you set a generic global limit and then override it with a more specific rate limit at a lower level.
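As a quick sketch of that precedence, you might set a loose global limit and a tighter one on a specific route. The commands below use the bundled rate-limiting plugin with HTTPie; the route name `prod-route` is an assumption for illustration:

```shell
# Global plugin: applies to every service and route
http post localhost:8001/plugins name=rate-limiting config:='{"minute":100,"policy":"local"}'

# Route-level plugin: overrides the global limit for this route only
http post localhost:8001/routes/prod-route/plugins name=rate-limiting config:='{"minute":20,"policy":"local"}'
```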
However, after working with many Kong customers, we found one use case that couldn't be met with the specificity highlighted above: how can Kong help when customers want different rate limits based on an organization, partner, or tenant? The answer is Kong's "Consumer Groups" feature, which we'll expand on below (and which is documented [here](https://docs.konghq.com/gateway/latest/admin-api/consumer-groups/examples/)).
Released in version 2.7, Kong Gateway allows you to define limits per consumer group. This means you can still use the general rate limiting functionality mentioned above while also applying specific limits to certain groups. Let's see how to make it work.
**Add a service**
```
➜ demo-environment git:(main) ✗ http post localhost:8001/services name=prod-backend url=http://httpbin.org/anything

HTTP/1.1 201 Created
{
  "ca_certificates": null,
  "client_certificate": null,
  "connect_timeout": 60000,
  "created_at": 1655216213,
  "enabled": true,
  "host": "httpbin.org",
  "id": "9c77f727-42d2-4f75-b194-9b6dd9fcfe5b",
  "name": "prod-backend",
  "path": "/anything",
  "port": 80,
  "protocol": "http",
  "read_timeout": 60000,
  "retries": 5,
  "tags": null,
  "tls_verify": null,
  "tls_verify_depth": null,
  "updated_at": 1655216213,
  "write_timeout": 60000
}
```
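Requests can only reach the service through the proxy if a route is attached to it, so we need one before testing. A minimal sketch — the route name `prod-route` and the `/prod` path are assumptions:

```shell
# Attach a route to the prod-backend service so it is reachable on the proxy port
http post localhost:8001/services/prod-backend/routes name=prod-route paths:='["/prod"]'
```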
**Add the Rate Limiting Advanced plugin**
```
➜ demo-environment git:(main) ✗ http post localhost:8001/services/prod-backend/plugins name=rate-limiting-advanced config:='{"sync_rate":0,"window_size":[60],"limit":[10],"enforce_consumer_groups":true,"consumer_groups":["hr","marketing"]}'

HTTP/1.1 201 Created
{
  "config": {
    "consumer_groups": ["hr", "marketing"],
    "dictionary_name": "kong_rate_limiting_counters",
    "enforce_consumer_groups": true,
    "header_name": null,
    "hide_client_headers": false,
    "identifier": "consumer",
    "limit": [10],
    "namespace": "vCcg7OpfB1rGxRrCaJ3vEytelRyqUfyZ",
    "path": null,
    "redis": {
      "cluster_addresses": null,
      "connect_timeout": null,
      "database": 0,
      "host": null,
      "keepalive_backlog": null,
      "keepalive_pool_size": 30,
      "password": null,
      "port": null,
      "read_timeout": null,
      "send_timeout": null,
      "sentinel_addresses": null,
      "sentinel_master": null,
      "sentinel_password": null,
      "sentinel_role": null,
      "sentinel_username": null,
      "server_name": null,
      "ssl": false,
      "ssl_verify": false,
      "timeout": 2000,
      "username": null
    },
    "retry_after_jitter_max": 0,
    "strategy": "cluster",
    "sync_rate": 0,
    "window_size": [60],
    "window_type": "sliding"
  },
  "consumer": null,
  "created_at": 1655216600,
  "enabled": true,
  "id": "cf9ade32-7be9-49c4-9414-1e2e25a4c74e",
  "name": "rate-limiting-advanced",
  "protocols": ["grpc", "grpcs", "http", "https"],
  "route": null,
  "service": {"id": "9c77f727-42d2-4f75-b194-9b6dd9fcfe5b"},
  "tags": null
}
```
**Add 2 consumer groups — we will assign different users to different groups later on to test our functionality**
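The Admin API calls for these steps would look roughly like the following. The usernames (John, Sarah), the 1000-requests-per-minute override, the use of key-auth to identify consumers, and the `/prod` route path in the final request are all assumptions chosen to match the result described below:

```shell
# Create the two consumer groups referenced by the plugin config
http post localhost:8001/consumer_groups name=hr
http post localhost:8001/consumer_groups name=marketing

# Create two consumers and place each in a group
http post localhost:8001/consumers username=John
http post localhost:8001/consumer_groups/hr/consumers consumer=John
http post localhost:8001/consumers username=Sarah
http post localhost:8001/consumer_groups/marketing/consumers consumer=Sarah

# Override the default 10-requests-per-minute limit for each group
http put localhost:8001/consumer_groups/hr/overrides/plugins/rate-limiting-advanced config:='{"limit":[1000],"window_size":[60]}'
http put localhost:8001/consumer_groups/marketing/overrides/plugins/rate-limiting-advanced config:='{"limit":[1000],"window_size":[60]}'

# Identify callers as consumers via key-auth so the group limits apply
http post localhost:8001/services/prod-backend/plugins name=key-auth
http post localhost:8001/consumers/John/key-auth key=john-key
http post localhost:8001/consumers/Sarah/key-auth key=sarah-key

# Verify: the rate limit headers in the response should report the group limit
http get localhost:8000/prod apikey:john-key
```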
**As seen above, both John and Sarah have a limit of 1000 RPM as we wanted.**
**Summary**
As we can see, it is very easy to configure Kong to rate limit your traffic according to your requirements, whether they are driven by security, performance, or business needs.