[Enterprise](/blog/enterprise)
April 2, 2025
7 min read

# PII Sanitization Needed for LLMs and Agentic AI is Now Easier to Build

Alex Drag
Head of Product Marketing

## PII sanitization is critical for LLMs and agentic AI use cases. And now there's a more efficient route to build it.

The excitement around large language models (LLMs) and agentic AI is justified. These systems can summarize, generate, reason, and even take actions across APIs — all with minimal human input. However, as enterprises race to integrate LLMs into real-world workflows — especially when those enterprises operate in regulated environments and/or deal in sensitive data — one fundamental question looms large:

**How do you protect personally identifiable information (PII) from being leaked, exposed, or misused by these systems?**


## LLMs are powerful, but not inherently privacy-aware

LLMs operate as highly capable, non-deterministic pattern matchers. But they come with two significant privacy challenges:

  • **They don’t automatically distinguish between sensitive and non-sensitive data**
  • **They're fundamentally non-forgetful and non-auditable**

If you pass raw user input, internal logs, or structured data directly into an LLM without safeguards, you’re risking the exposure of names, emails, credit cards, health info, and more.

Even more concerning: LLMs can *memorize* and *regurgitate* this data in unrelated contexts, especially if that data appears frequently in your prompts or agent memory.

Imagine a customer’s social security number showing up in a completely different query weeks later. It happens. Or at least it could happen. This potential often acts as an immediate no-go and blocker for any organization that actually wants to roll out production-grade AI services as either consumer-facing products or internal productivity engines.

## This problem must be solved before organizations can fully leverage agentic AI

[Agentic AI](https://konghq.com/blog/learning-center/agentic-ai) — systems that combine LLMs with memory, APIs, and decision-making — introduces even more exposure vectors:

  • **Tool use**: Agents might query APIs with sensitive parameters.
  • **Multi-turn interactions**: PII might persist across long sessions.
  • **Autonomy**: Agents might write logs, store messages, or share info downstream — all without a clear boundary or data contract.

The net result? **You lose control of where PII goes, and you can’t easily trace what the model saw or said.** That’s a compliance and security nightmare for enterprise environments.
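To make the multi-turn exposure concrete, here is a deliberately toy sketch in plain Python. `ToyAgent` and its echo-style answer are invented for illustration — no real LLM is involved — but the failure mode is the one described above: nothing marks the SSN as sensitive, so unscrubbed agent memory lets it resurface in a later, unrelated turn.

```python
# Illustrative only: a toy agent whose memory persists across turns.
class ToyAgent:
    def __init__(self):
        self.memory: list[str] = []  # multi-turn context, never scrubbed

    def ask(self, prompt: str) -> str:
        self.memory.append(prompt)
        # Stand-in for an LLM call: answers using everything in memory.
        return " | ".join(self.memory)

agent = ToyAgent()
agent.ask("My SSN is 123-45-6789, open my account")
later = agent.ask("What's the weather?")
print("123-45-6789" in later)  # True — the SSN leaked into a new turn
```

Without a boundary or data contract, every turn inherits everything earlier turns saw.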

## Sanitization is the first line of defense

To safely build with LLMs and agents, **PII sanitization needs to be built into the flow** — not bolted on as an afterthought.

This means intercepting and managing data at the points of entry (requests), generation (responses), and interaction (prompts/memories). You want to ensure that:

  • **Only safe, redacted data reaches the LLM**
  • **No sensitive tokens or context leak during generation**
  • **Downstream consumers and logs are free of raw PII**
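As a minimal sketch of the entry and generation checkpoints above — the regex patterns and the `fake_llm` stand-in are assumptions for demonstration, not Kong's implementation — the flow looks like this:

```python
import re

# Toy interception points: redact on the way in, and again on the way out,
# so neither the model nor downstream consumers see raw PII.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def redact(text: str) -> str:
    text = EMAIL.sub("<EMAIL>", text)
    return SSN.sub("<SSN>", text)

def fake_llm(prompt: str) -> str:
    # Stand-in for a real model call; just echoes the prompt.
    return f"You said: {prompt}"

def handle_request(user_input: str) -> str:
    safe_prompt = redact(user_input)    # point of entry (request)
    completion = fake_llm(safe_prompt)  # model sees redacted data only
    return redact(completion)           # point of generation (response)

print(handle_request("Contact jane@example.com, SSN 123-45-6789"))
# → You said: Contact <EMAIL>, SSN <SSN>
```

The model never receives the raw email or SSN, and the response is checked again before it reaches logs or consumers.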

PII sanitization isn’t just about masking names. It's about contextual data control across the entire AI interaction surface — especially when that surface is drastically expanded by agentic workflows and interactions.

## How is this being done today?

Today, many teams attempt to manage this risk by building ad hoc solutions — embedding regex-based redaction libraries, relying on prompt engineering best practices, or adding pre- and post-processing layers to scrub sensitive data from inputs and outputs. In some cases, developers hardcode filters or use external data loss prevention (DLP) tools to flag potential leaks. 

While these approaches can be effective in controlled environments, they often lack consistency, observability, and scalability, making it difficult to ensure compliance and maintain trust across dynamic, multi-model architectures. 
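A toy example of why ad hoc scrubbers lack consistency: suppose two teams each hand-roll an email pattern (both regexes below are invented for demonstration). The narrower one silently leaks part of a dotted, plus-addressed local part.

```python
import re

# Illustrative only: two teams' "equivalent" email scrubbers drift apart.
team_a = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
team_b = re.compile(r"\w+@\w+\.\w+")  # misses dots and plus signs

msg = "Reach me at jane.doe+ai@example.com"
print(team_a.sub("<EMAIL>", msg))  # → Reach me at <EMAIL>
print(team_b.sub("<EMAIL>", msg))  # → Reach me at jane.doe+<EMAIL>
```

Multiply this drift across every model, client, and consumer, and auditing what actually got redacted becomes guesswork.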

This is especially true when organizations use many different models and have many different clients and consumers who want access to the data in those models. As new use cases arise, developers build and implement yet another ad hoc sanitization mechanism. As with one-off, ad hoc API authorization, this [results in governance and security nightmares](https://www.linkedin.com/posts/alexdrag_apiplatform-multicloud-activity-7300933030234206208-ctLs?utm_source=share&utm_medium=member_desktop&rcm=ACoAACYKmYsBZxivSum9b2npyzxl_5MWRq_nRhc).

If an organization is interested in implementing a consistent PII sanitization practice that scales, the best way forward is to abstract the actual PII sanitization away from the developers as much as possible. Platform teams should invest in AI infrastructure that enables consistent PII sanitization as a standard policy that can be enforced across any (or potentially every) LLM exposure use case within the organization.

This is where the AI gateway — and Kong’s AI Gateway PII sanitization policy — comes in.

*Want to learn more about moving past the AI experimentation phase and into production-ready AI systems? Check out the upcoming webinar on how to [drive real AI value with state-of-the-art AI infrastructure](https://konghq.com/events/webinars/state-of-the-art-ai-infrastructure).*

## The AI gateway as *scalable* PII leak-proofing

Just as an API gateway manages, secures, and transforms API traffic (and abstracts the logic required for this away from the backend API layer), an AI gateway gives you **control, visibility, and policy enforcement** for LLM traffic — ideally including built-in PII sanitization — and abstracts the PII sanitization logic away from the LLM and/or application layers.

At Kong, we just released a brand-new PII sanitization policy that enables exactly this.

Here’s how it works in practice:

**1. Policy config**: The producer configures the sanitization plugin to automatically strip certain types of PII from any inbound request.

**2. Inbound**: A client app sends a user request to the AI gateway. The gateway detects and redacts PII (names, emails, etc.) before forwarding it to the LLM.

**3. LLM interaction**: The prompt is processed with sanitized data, ensuring no sensitive info reaches the model.
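For illustration, the producer-side configuration in step 1 might look something like the following decK-style declarative sketch. The `ai-sanitizer` plugin is real, but the service name, route, and `config` field names below are assumptions for demonstration — check the AI Sanitizer plugin documentation for the actual schema.

```yaml
# Illustrative decK-style declarative config (field names under config are
# assumed, not verified against the plugin schema).
_format_version: "3.0"
services:
  - name: llm-service              # hypothetical upstream LLM service
    url: https://llm.internal.example
    routes:
      - name: chat
        paths:
          - /chat
    plugins:
      - name: ai-sanitizer         # Kong's PII sanitization plugin
        config:
          anonymize:               # assumed: categories of PII to redact
            - email
            - phone
```

With the plugin attached at the gateway, the client app and the model itself need no sanitization logic of their own.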

This makes the AI gateway a trusted policy enforcement point between applications and models. But is it enough?

**Learn more about how to [start sanitizing PII](https://docs.konghq.com/hub/kong-inc/ai-sanitizer/how-to/) using the AI Sanitizer plugin.**

## Building in PII sanitization and AI security at scale with global policies, control plane groups, and APIOps

The reality is that just having an AI gateway with this functionality isn’t enough to enforce proper AI security and PII sanitization at scale.

You must build a platform practice around the other layers of AI security as well. And that means you must combine the power of the AI gateway’s PII sanitization functionality with other layers of protection around content safety, prompt guarding, rate limiting, etc. And then, to drive AI governance and security at scale, you’ll need to combine the power of multi-layer protection with the power that comes from a federated platform approach to provisioning and governing AI gateway infrastructure. 

Kong enables all of this through the unification of the industry’s most robust AI gateway with the platform power of Kong Konnect control plane groups, global policies, and [APIOps](https://konghq.com/blog/enterprise/what-is-apiops). How does this work?

We cover the concept of control plane groups in [this video](https://www.linkedin.com/posts/alexdrag_apiplatform-governeverything-activity-7311409676502323202-0Fz7?utm_source=share&utm_medium=member_desktop&rcm=ACoAACYKmYsBZxivSum9b2npyzxl_5MWRq_nRhc), but here’s a quick summary:

1. Platform owners can create control plane groups within Konnect — typically mapping onto lines of business and/or different development environments

2. Once the control plane group is created, the platform owner can then configure global policies for that group. In this instance, the PII sanitization policy could be enforced as a non-negotiable policy for any AI Gateway infrastructure that falls under this group.

3. Now, any time somebody from this specific team spins up Gateway infrastructure for their LLM exposure use cases, that PII sanitization policy is automatically configured and enforced.

Notice what this approach does. Yes, the AI Gateway is abstracting away the PII sanitization logic from the LLM or client app layers, as already mentioned. But, with the larger platform in place, platform owners can also abstract away the actual configuration of the PII sanitization policy from the developer — which both lowers the possibility of human error upon policy config and removes yet another task from the developer’s workflow, enabling them to focus on building core AI functionality instead of security logic on top of that functionality.

**One thing to note**: The process above was manual and “click-ops” oriented. But, like everything we do here at Kong, we believe such best practices should be enforced via automation and APIOps, ultimately enabling an “AI governance as code” program that leaves as little room for human error as humanly (or machine-ly?) possible.

Kong makes APIOps simple, with support for:

  • Imperative configuration via our fully documented Admin API
  • Declarative configuration (for non-Kubernetes teams) via our decK CLI tool and/or Terraform provider
  • Declarative configuration for Kubernetes teams via Kong Gateway Operator
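For the Kubernetes path, the same policy can be expressed as a `KongPlugin` resource that teams keep in version control and apply through their normal GitOps flow. The CRD kind and `plugin` field below follow Kong's Kubernetes conventions; the `config` fields are assumptions to verify against the AI Sanitizer plugin schema.

```yaml
# Illustrative KongPlugin sketch for Kubernetes teams (config fields assumed).
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: pii-sanitizer
plugin: ai-sanitizer      # the gateway plugin this resource configures
config:
  anonymize:              # assumed: categories of PII to redact
    - email
    - credit_card
```

Because the resource lives in Git, policy changes go through review like any other code, which is the essence of "AI governance as code."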

## Final thoughts and how to sell this to your boss: The business value and impact

Oftentimes, when we talk about APIs, gateways, gateway policies, etc., conversations can end up getting into the technical weeds. 

However, as necessary as these technical weeds are, the conversations shouldn’t start or end there. 

The organizations that are finding the most AI and API success are the ones that start and end from a place of thinking about AI and API platform strategy from a business value point of view. And this makes sense, as the API really is the hero of the AI story. And AI is the hero of many organizations' innovation and disruption stories.

PII sanitization as a practice belongs in the realm of the business impact discussion. So, if you find yourself in a room with business leadership, trying to make sure they understand the business value of implementing the Kong API Platform for AI, make sure they're aware of just how critical AI governance and PII sanitization are for:

  • **Compliance by default**: automatically bake GDPR, HIPAA, and other regulatory compliance into every AI, LLM, and agentic workflow
  • **Trust and brand reputation**: from users, customers, and internal stakeholders
  • **Cleaner data and innovation**: the easier it is to ensure clean data, the safer it is to use that data for training, auditing, and reuse in larger AI innovation practices
  • **Faster time to market**: by abstracting and automating away today's manual approaches to AI compliance and PII sanitization, it's easier to drive greater AI adoption and, ultimately, start shipping innovation to market faster

LLMs and agentic AI aren't inherently safe for sensitive data — but they can be. With the right infrastructure patterns, like an AI Gateway with built-in PII sanitization as a part of a larger API Platform practice, you can unlock the power of these systems **without compromising trust, compliance, or safety**.

You're building the future of AI. Just make sure you're building it responsibly. If you want help, [just let us know](https://konghq.com/contact-sales).

**Topics**: [AI](/blog/tag/ai), [AI Gateway](/blog/tag/ai-gateway), [Governance](/blog/tag/governance), [LLM](/blog/tag/llm), [AI Security](/blog/tag/ai-security)

Interaction mode