The AI gateway as scalable PII leak-proofing
Just as an API gateway manages, secures, and transforms API traffic (abstracting the logic required for this away from the backend API layer), an AI gateway gives you control, visibility, and policy enforcement for LLM traffic, ideally including built-in PII sanitization, and abstracts the PII sanitization logic away from the LLM and/or application layers.
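To make that abstraction concrete, here's a minimal sketch of the idea, not Kong's actual implementation: a gateway-side function that redacts obvious PII patterns before a prompt ever reaches the model. The regexes and placeholder tokens are illustrative assumptions only; real PII detection is far more sophisticated.

```python
import re

# Illustrative only: a toy gateway-side sanitizer, NOT Kong's implementation.
# Real PII detection uses far more robust techniques than these two regexes.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def sanitize_prompt(prompt: str) -> str:
    """Replace detected PII with typed placeholders before the prompt
    is forwarded to the upstream LLM."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED_{label}]", prompt)
    return prompt

print(sanitize_prompt("Reach Jane at jane.doe@example.com or 555-867-5309."))
# -> "Reach Jane at [REDACTED_EMAIL] or [REDACTED_PHONE]."
```

The point of the gateway is that neither the client app nor the LLM integration code has to carry logic like this; it lives in one enforceable place.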
At Kong, we just released a brand-new PII sanitization policy that does exactly this.
Here’s how it works in practice:
1. Policy config: The producer configures the sanitization plugin to automatically strip certain types of PII from any inbound request (see the sketch after this list).

2. Inbound: A client app sends a user request to the AI gateway. The gateway detects and redacts PII (names, emails, etc.) before forwarding it to the LLM.

3. LLM interaction: The prompt is processed with sanitized data, ensuring no sensitive information reaches the model.
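As a rough illustration of step 1, a plugin can be enabled on a gateway service through Kong's Admin API. The sketch below assumes a local Admin API on port 8001 and a service named llm-service; the plugin slug and config fields are assumptions, so check the AI Sanitizer plugin docs for the actual schema.

```python
import requests

ADMIN_API = "http://localhost:8001"  # assumption: local Kong Admin API

# Enable a PII sanitization plugin on an existing gateway service.
# The service name and config fields below are illustrative assumptions;
# consult the AI Sanitizer plugin documentation for the real schema.
resp = requests.post(
    f"{ADMIN_API}/services/llm-service/plugins",
    json={
        "name": "ai-sanitizer",
        "config": {
            # hypothetical field: which PII categories to redact
            "categories": ["name", "email", "phone"],
        },
    },
)
resp.raise_for_status()
print(resp.json()["id"])  # id of the newly created plugin instance
```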
This makes the AI gateway a trusted policy enforcement point between applications and models. But is it enough?
Learn more about how to start sanitizing PII using the AI Sanitizer plugin.
Building in PII sanitization and AI security at scale with global policies, control plane groups, and APIOps
The reality is that an AI gateway with this functionality alone isn't enough to enforce proper AI security and PII sanitization at scale.
You must build a platform practice around the other layers of AI security as well. That means combining the AI gateway's PII sanitization with additional layers of protection: content safety, prompt guarding, rate limiting, and so on. And to drive AI governance and security at scale, you'll need to pair that multi-layer protection with a federated platform approach to provisioning and governing AI gateway infrastructure.
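In practice, layering protections on a service amounts to enabling several plugins side by side. A hedged sketch, reusing the Admin API pattern from above; the plugin names and config fields are best-effort assumptions to verify against Kong's plugin documentation.

```python
import requests

ADMIN_API = "http://localhost:8001"  # assumption: local Kong Admin API

# Each security layer is just another plugin on the same service.
# Plugin slugs and config fields below are best-effort assumptions;
# verify them against Kong's plugin documentation before relying on them.
layers = [
    {"name": "ai-sanitizer",     # PII sanitization (hypothetical config)
     "config": {"categories": ["name", "email", "phone"]}},
    {"name": "ai-prompt-guard",  # block prompts matching deny patterns
     "config": {"deny_patterns": ["(?i)ignore previous instructions"]}},
    {"name": "rate-limiting",    # cap request volume
     "config": {"minute": 60}},
]

for plugin in layers:
    requests.post(
        f"{ADMIN_API}/services/llm-service/plugins", json=plugin
    ).raise_for_status()
```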
Kong enables all of this by unifying the industry's most robust AI gateway with the platform power of Kong Konnect control plane groups, global policies, and APIOps. How does this work?
We cover the concept of control plane groups in this video, but here’s a quick summary:
1. Platform owners can create control plane groups within Konnect, typically mapping onto lines of business and/or different development environments.

2. Once the control plane group is created, the platform owner can then configure global policies for that group. In this instance, the PII sanitization policy could be enforced as a non-negotiable policy for any AI gateway infrastructure that falls under the group.

3. Now, any time somebody from this team spins up gateway infrastructure for their LLM exposure use cases, that PII sanitization policy is automatically configured and enforced (a hedged sketch follows this list).
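Here's a rough sketch of what steps 1 and 2 might look like against the Konnect API. The base URL and cluster_type value are drawn from Konnect's control plane API, but the global-policy endpoint and payload are hypothetical; treat this as pseudocode for the workflow rather than a reference, and prefer the Konnect UI or docs for the real calls.

```python
import requests

KONNECT_API = "https://us.api.konghq.com/v2"  # region-specific base URL
HEADERS = {"Authorization": "Bearer <personal-access-token>"}

# Step 1: create a control plane group for a line of business.
# cluster_type value is an assumption based on Konnect's control plane API.
group = requests.post(
    f"{KONNECT_API}/control-planes",
    headers=HEADERS,
    json={
        "name": "payments-prod",
        "cluster_type": "CLUSTER_TYPE_CONTROL_PLANE_GROUP",
    },
).json()

# Step 2: attach the PII sanitization policy as a non-negotiable global
# policy for the group. This endpoint and payload are HYPOTHETICAL; in
# practice this is configured in the Konnect UI or via declarative tooling.
requests.post(
    f"{KONNECT_API}/control-planes/{group['id']}/global-policies",
    headers=HEADERS,
    json={"name": "ai-sanitizer", "config": {"categories": ["email"]}},
)
```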
Notice what this approach does. Yes, the AI gateway abstracts the PII sanitization logic away from the LLM and client app layers, as already mentioned. But with the larger platform in place, platform owners can also abstract the actual configuration of the PII sanitization policy away from the developer. This both lowers the possibility of human error during policy configuration and removes yet another task from the developer's workflow, letting them focus on building core AI functionality instead of security logic on top of that functionality.
One thing to note: the process above was manual and “click-ops” oriented. But, like everything we do here at Kong, we believe practices like these are best enforced via automation and APIOps, ultimately enabling an “AI governance as code” program that leaves as little room for human error as humanly (or machine-ly?) possible.
Kong makes APIOps simple, with support for:
- Imperative configuration via our fully documented Admin API
- Declarative configuration (for non-Kubernetes teams) via our decK CLI tool and/or Terraform provider
- Declarative configuration for Kubernetes teams with Gateway Operator
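As a small taste of the declarative path, a CI job can apply a version-controlled gateway config with decK. A minimal sketch, assuming decK is installed on the runner and kong.yaml is the declarative config checked into the repo (recent decK versions use deck gateway sync; older ones use deck sync).

```python
import subprocess

# A minimal "AI governance as code" step for CI: apply the version-controlled
# declarative config to the gateway. Assumes decK is installed and that
# kong.yaml is the declarative config file checked into the repo.
subprocess.run(
    ["deck", "gateway", "sync", "kong.yaml"],  # older decK: ["deck", "sync"]
    check=True,  # fail the pipeline if the sync fails
)
```

Run on every merge, a step like this keeps the enforced policies, PII sanitization included, in lockstep with what's reviewed in source control.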