# Dynamic Kafka ACLs: Implementing Identity-Aware Policies with Kong Event Gateway
Hugo Guerrero
Principal Tech PMM, Kong
Modern Kafka deployments struggle with a familiar tension. You want fine-grained access control per client, per team, and even per request. However, traditional ACLs force you into static, cluster-level configurations that are brittle, hard to scale, and painful to maintain.
Administrators are often forced to manage massive, hardcoded lists of topics and users. But what if you could dynamically craft these ACLs using identity context?
By combining **Kong Event Gateway** with the newly integrated **Kong Identity** (an out-of-the-box OIDC server), you can leverage OAuth or JWT token claims to dynamically control Kafka topic access. In this post, we will walk through how to configure identity-aware policies that completely remove the need to hardcode topic permissions.
## The Problem with Traditional Kafka ACLs
Kafka ACLs are powerful, but they come with significant tradeoffs:
- **Static definition:** They are defined at the broker level and lack context awareness (e.g., who the caller is, their role, or the current environment).
- **Central bottlenecks:** They require central coordination for every single topic change.
- **Scaling friction:** They don't scale well across multi-team or multi-tenant environments.
In practice, this leads to over-permissive access policies (the dreaded "just give them topic-*"), operational bottlenecks, and configuration drift across environments.
## The Shift: Identity-Aware Policies
Instead of defining access in Kafka, you define it in your identity provider.
Using Kong Identity, you can embed authorization data directly into a token and let Kong Event Gateway enforce it. The token becomes the single source of truth for what a Kafka client is allowed to access, including specific topic names, topic patterns, and contextual scopes (team, environment, application).
At a high level, the architecture looks like this:
1. A Kafka client authenticates using OAuth (Bearer token).
2. The token is issued by Kong Identity (OIDC) and contains a custom claim (e.g., `topics`).
3. Kong Event Gateway intercepts the request, validates the token via JWKS, extracts the claim, and applies it dynamically to ACL policies.
4. The client only sees and accesses the allowed topics; zero Kafka ACL updates are required.
## Step 1: Configuring Token Claims in Kong Identity
The first step is setting up the identity provider to attach specific claims to user scopes.
For this implementation, we configure a custom JSON array claim called `topics`. This array contains the exact names or expressions of the topics the client is allowed to access (e.g., `["clicks", "transactions"]`). This claim is attached to a specific scope (like "Team A" within a "payments" context).
When the Kafka client requests a token, Kong Identity generates a JWT with this array embedded directly in the payload:
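The decoded payload of such a token might look like the following. The issuer, subject, and scope values here are illustrative; the `topics` claim is the part the gateway will read:

```json
{
  "iss": "https://identity.example.com",
  "sub": "team-a-client",
  "scope": "payments.team-a",
  "topics": ["clicks", "transactions"],
  "exp": 1735689600
}
```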
## Step 2: Configuring the Virtual Cluster in Kong Event Gateway
Next, we configure our Kafka virtual cluster within Kong Event Gateway to enforce authentication and read these claims.
We enable OAuth Bearer authentication, point the configuration to the Kong Identity JWKS endpoint, and enable token claim extraction so the gateway can access the payload data at request time:
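As a sketch, the virtual cluster configuration could look something like this. The exact schema depends on your Kong Event Gateway version, so treat the field names and endpoint URL as placeholders rather than a copy-paste reference:

```yaml
virtual_clusters:
  - name: payments
    authentication:
      - type: oauth_bearer
        # Validate incoming Bearer tokens against Kong Identity's key set
        jwks_endpoint: https://identity.example.com/.well-known/jwks.json
        # Expose validated token claims (e.g., `topics`) to ACL expressions
        extract_claims: true
```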
## Step 3: Crafting the Dynamic Identity-Aware ACL Policy
With the token validated and claims extracted, we apply an ACL policy to the virtual cluster. Instead of hardcoding topics, we define an expression-based ACL policy that reads directly from the token:
```yaml
acl:
  topics: expression(auth.claims.topics)
```
Using Kong's expression language, `auth.claims.topics` dynamically resolves into a list of allowed topics per request. You can still mix static lists, expressions, and patterns, but the real power comes from letting identity drive the authorization layer.
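Conceptually, the evaluation of that expression behaves like the following Python sketch. This is a simplification of what the gateway does per request, and the function names are ours, not Kong's; in particular, real validation happens against the JWKS endpoint, which is omitted here:

```python
import base64
import json


def decode_claims(jwt: str) -> dict:
    """Decode the payload segment of a JWT.

    Signature verification is omitted; in the gateway, the token is
    validated against the JWKS endpoint before claims are trusted.
    """
    payload_b64 = jwt.split(".")[1]
    # Restore the base64 padding that JWT encoding strips
    payload_b64 += "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))


def allowed_topics(claims: dict, all_topics: list[str]) -> list[str]:
    """Resolve the equivalent of `expression(auth.claims.topics)`:
    the token's `topics` claim becomes the per-request ACL."""
    granted = set(claims.get("topics", []))
    return [t for t in all_topics if t in granted]


# Build a token whose payload carries {"topics": ["clicks", "transactions"]}
payload = base64.urlsafe_b64encode(
    json.dumps({"topics": ["clicks", "transactions"]}).encode()
).decode().rstrip("=")
token = f"eyJhbGciOiJub25lIn0.{payload}."

claims = decode_claims(token)
print(allowed_topics(claims, ["clicks", "transactions", "user_actions"]))
# ['clicks', 'transactions']
```

The key point: the allowed-topic list is computed from the token on every request, so nothing about the policy itself has to change when a client's permissions do.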
## Seeing It in Action
Let's look at how this behaves from the client's perspective using a tool like kafkactl.
- **Direct backend access:** If an admin queries the raw backend Kafka cluster, they see a vast list of internal topics (e.g., `payments.clicks`, `payments.transactions`, `payments.user_actions`, plus system topics).
- **Gateway access (without the identity ACL):** Querying the virtual cluster with standard routing might strip prefixes and return all localized topics (`clicks`, `transactions`, `user_actions`).
- **Identity-aware access:** When the client connects using their generated JWT, Kong Event Gateway intercepts the request. Because their token specifically contains the claim `["clicks", "transactions"]`, running `kafkactl context payments-user list topics` will only return `clicks` and `transactions`.
## The Power of Dynamic ACLs
The true power of this architecture shines when permissions need to change.
If a team suddenly needs access to `user_actions`, you do not touch the Kafka cluster or update the gateway's ACL policies. You simply update the scope claims in Kong Identity to `["clicks", "user_actions"]`.
The next time the Kafka client fetches a token, the access is immediately updated. The system adapts instantly when identity changes.
## Advanced Patterns
Once you have this foundation, you can extend it further using Kong's expression language, mixing static topic lists, claim-driven expressions, and wildcard patterns within a single policy.
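For example, a single policy might combine a static allow-list entry, a claim-driven expression, and a per-team prefix pattern. The YAML shape and the `team` claim below are illustrative, not a verbatim schema:

```yaml
acl:
  topics:
    - audit-log                            # static: always allowed
    - expression(auth.claims.topics)       # dynamic: driven by the token
    - expression(auth.claims.team + ".*")  # pattern: hypothetical per-team prefix
```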
## Conclusion
By shifting Kafka access control to Kong Event Gateway and driving it with JWT token claims, you move from static, infrastructure-defined ACLs to dynamic, identity-driven policies.
This approach reduces operational overhead, eliminates the risks of misconfigured static ACLs, and centralizes your access management. It is a foundational shift in how event systems are secured and operated.
Instead of asking, "What topics should this client access?" you start asking, "Who is this client, and what should they be allowed to do right now?" And the answer lives entirely in the token.