[MCP](/blog/mcp)
May 8, 2026
8 min read

# What Is an MCP Server? Guide to the Model Context Protocol for Enterprise AI

Architecture, Use Cases, and How to Get Started

Kong

## Key takeaways

  • An MCP server is a lightweight process that exposes tools, resources, and prompts to AI applications over a standardized protocol.
  • It acts as the bridge between an LLM-powered client and an external system — a database, API, file system, or SaaS product.
  • The protocol standardizes how AI apps discover and invoke capabilities, replacing per-integration custom code.
  • MCP servers communicate with MCP clients over a defined transport layer (stdio or HTTP+SSE), not directly with the LLM.
  • A single MCP server works well for development; production environments with many servers introduce governance, auth, and routing challenges that require additional infrastructure.

## What is an MCP server?

You have an AI-powered application that needs to query a database, create a GitHub issue, or pull data from a SaaS tool. Before November 2024, you wrote custom glue code for each integration — one-off adapters with their own schemas, auth flows, and error handling. Every new external system meant another bespoke connector.

An MCP server eliminates that pattern. It is a process that exposes capabilities — tools, resources, and prompts — to AI applications using the [Model Context Protocol (MCP)](https://konghq.com/blog/learning-center/what-is-mcp), an [open standard](https://www.infoq.com/news/2024/12/anthropic-model-context-protocol/) [introduced by Anthropic in November 2024](https://www.anthropic.com/news/model-context-protocol).

A critical distinction: MCP and an MCP server are not synonyms. MCP is the protocol specification — the set of rules governing how AI applications discover and invoke external capabilities. An MCP server is a process that implements the server side of that specification. The relationship is the same as that between HTTP and a web server: HTTP defines the protocol; Apache or Nginx implements it.

What MCP standardizes is discovery and invocation. Instead of reading API docs, writing integration code, and maintaining per-service adapters, an AI application connects to an MCP server and learns what it can do through capability negotiation. The server declares its tools, resources, and prompts. The client understands how to call them. The protocol handles the rest.

This matters because AI applications need to interact with dozens or hundreds of external systems. Without a standard, each integration is a maintenance burden. With MCP, the integration surface is consistent: one protocol, one discovery mechanism, one invocation pattern.
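The capability negotiation mentioned above happens during an `initialize` handshake when the client first connects. Here is a minimal sketch of the two JSON-RPC messages, hand-built with the Python standard library. Field values are illustrative; the MCP specification is the authoritative schema.

```python
import json

# Client -> server: initialize request (illustrative values).
initialize_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2025-06-18",
        "capabilities": {"tools": {}},  # features this client supports
        "clientInfo": {"name": "example-host", "version": "0.1.0"},
    },
}

# Server -> client: the server declares which capability types it exposes.
initialize_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "protocolVersion": "2025-06-18",
        "capabilities": {"tools": {}, "resources": {}, "prompts": {}},
        "serverInfo": {"name": "example-postgres-server", "version": "0.1.0"},
    },
}

wire = json.dumps(initialize_request)  # what actually crosses the transport
print(wire)
```

After this exchange, the client knows whether it can ask for tools, resources, or prompts without any out-of-band documentation.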


## MCP architecture: how servers fit in

MCP defines four components, each with a distinct role:

  • Host: The AI application the user interacts with — Claude Desktop, an IDE with AI features, or a custom agent application.
  • Client: A protocol connector that lives inside the host. It manages the connection to a specific MCP server.
  • Server: The process that exposes capabilities. It runs locally or remotely and responds to client requests.
  • Transport: The communication layer between client and server — stdio for local processes, HTTP with Server-Sent Events (SSE) for remote connections.

Here is how a request flows through the system:

  1. The host starts one or more MCP clients, each configured to connect to a specific server.
  2. Each client establishes a connection to its assigned server over the chosen transport.
  3. Capability negotiation occurs: the server declares what tools, resources, and prompts it exposes, and the client and server agree on supported protocol features.
  4. The user interacts with the host. The LLM determines it needs to invoke an external tool — for example, querying a database.
  5. The client sends a [JSON-RPC request](https://www.jsonrpc.org/specification) to the server, specifying the tool name and parameters.
  6. The server executes the operation and returns the result to the client, which passes it back to the host and the LLM.

One detail engineers often miss: the server never talks directly to the LLM. The client mediates all communication. The LLM decides it needs a tool, the host tells the client, and the client talks to the server. This separation keeps the protocol clean and the server implementation simple — it does not need to understand LLM internals.

Each client-server connection is a 1:1 pairing. If a host needs to interact with five external systems, it runs five clients, each connected to one server. The host orchestrates across all of them.

For a detailed reference, the MCP specification and architecture documentation are available at [modelcontextprotocol.io](https://modelcontextprotocol.io/specification/2025-06-18).
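Steps 5 and 6 of the flow above can be sketched as concrete JSON-RPC messages. This is a hand-built illustration using only the Python standard library; the tool name, arguments, and result values are hypothetical.

```python
import json

# Step 5: client -> server, invoking a tool by name with arguments.
call_request = {
    "jsonrpc": "2.0",
    "id": 42,
    "method": "tools/call",
    "params": {
        "name": "query_database",  # hypothetical tool declared by the server
        "arguments": {"sql": "SELECT count(*) FROM orders"},
    },
}

# Step 6: server -> client, returning the result as content blocks.
call_result = {
    "jsonrpc": "2.0",
    "id": 42,  # matches the request id, per JSON-RPC
    "result": {
        "content": [{"type": "text", "text": "1289"}],
        "isError": False,
    },
}

# The client hands the content back to the host, which feeds it to the LLM.
print(json.dumps(call_result["result"]["content"]))
```

Note that the LLM never sees these messages directly; it only sees the tool description before the call and the returned content after it.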


## What an MCP server exposes

An MCP server declares three types of capabilities, each with a different control boundary. If you want to go deeper and [build an MCP server](https://konghq.com/blog/engineering/mcp-servers-guide) yourself, Kong's developer guide walks through the full implementation.

### Tools (model-controlled)

Tools are functions the LLM can invoke autonomously during a conversation. The server describes each tool — its name, parameters, and schema — and the LLM decides when to call it based on context.

Examples:

  • query_database(sql) — Execute a SQL query against a connected database.
  • create_github_issue(title, body) — Open a new issue in a GitHub repository.
  • send_slack_message(channel, text) — Post a message to a Slack channel.

### Resources (application-controlled)

Resources are data the host application can fetch to provide context to the LLM. Unlike tools, the LLM does not decide when to retrieve resources — the host does.

Examples:

  • File contents from a local or remote file system.
  • Database schemas describing table structures and relationships.
  • API documentation for a connected service.

### Prompts (user-controlled)

Prompts are predefined templates that users explicitly select. They provide structured ways to interact with the server's capabilities.

Examples:

  • "Summarize this table" — A prompt template that takes a table name and generates a summary.
  • "Explain this table's relationships" — A prompt that describes foreign keys and joins.

### Why the three types matter

The distinction is not arbitrary. Each type maps to a different control boundary, which directly affects authorization policy.

Tools let the LLM trigger actions autonomously — creating records, sending messages, modifying state. These require the strictest authorization controls because a misconfigured tool could let an LLM execute operations the user never intended.

Resources are read-only context, fetched by the application on behalf of the user. The risk profile is lower, but access still needs governance — not every user should see every database schema.

Prompts are user-initiated and explicit. The user chooses to run them, so the control model is straightforward.

A concrete example ties this together. A PostgreSQL MCP server might expose:

  • Tools: query(sql), insert(table, data), update(table, conditions, data)
  • Resources: Table schemas, column types, index definitions
  • Prompts: "Explain this table's relationships," "Generate a migration for this schema change"

One server, three capability types, three control boundaries.
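The PostgreSQL example above could declare its tools in a `tools/list` response like the sketch below. The names and schemas are illustrative, not taken from a real server implementation.

```python
import json

# Hypothetical tools/list result for a PostgreSQL MCP server.
tools_list_result = {
    "tools": [
        {
            "name": "query",
            "description": "Execute a read-only SQL query.",
            "inputSchema": {  # JSON Schema describing the tool's parameters
                "type": "object",
                "properties": {"sql": {"type": "string"}},
                "required": ["sql"],
            },
        },
        {
            "name": "insert",
            "description": "Insert a row into a table.",
            "inputSchema": {
                "type": "object",
                "properties": {
                    "table": {"type": "string"},
                    "data": {"type": "object"},
                },
                "required": ["table", "data"],
            },
        },
    ]
}

# A client can render these declarations directly into the LLM's
# tool-calling context; no hand-written adapter code is involved.
print(json.dumps([t["name"] for t in tools_list_result["tools"]]))
```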

## MCP server vs. REST API

MCP servers do not replace REST APIs. Many MCP servers wrap existing REST APIs, acting as a bridge that makes them consumable by AI applications.

The differences are structural:

**REST API vs. MCP server**

| | REST API | MCP server |
|---|---|---|
| Consumer | Applications and services | AI applications and agents |
| Interface | Custom, per-service schemas | Standardized, self-describing protocol |
| Discovery | Developers read API documentation | Capability negotiation at connection time |
| Invocation | Hand-written integration code | Protocol-level tool calls chosen by the LLM |

The practical insight: if you already have REST APIs, you do not rebuild them as MCP servers. You put an MCP server in front of them. The MCP server translates capability negotiation and tool invocation into REST calls against your existing services.

This bridging pattern is how most production MCP deployments work. Your REST APIs remain the system of record. The MCP server is the layer that makes them accessible to AI applications through a standard protocol.
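A minimal sketch of that bridging pattern: a dispatcher that translates an MCP tool invocation into the equivalent REST request. The route table, endpoints, and tool names are hypothetical, and no network call is made here.

```python
# Map MCP tool names to REST endpoints (hypothetical service).
ROUTES = {
    "create_github_issue": ("POST", "https://api.example.com/repos/{repo}/issues"),
    "get_order":           ("GET",  "https://api.example.com/orders/{order_id}"),
}

def to_rest_request(tool_name: str, arguments: dict) -> tuple[str, str, dict]:
    """Translate an MCP tools/call invocation into (method, url, body)."""
    method, template = ROUTES[tool_name]
    # Arguments that fill path placeholders become part of the URL;
    # everything else becomes the request body.
    url = template.format(**arguments)
    body = {k: v for k, v in arguments.items() if "{" + k + "}" not in template}
    return method, url, body

method, url, body = to_rest_request(
    "create_github_issue",
    {"repo": "kong/docs", "title": "Broken link", "body": "Page 404s"},
)
print(method, url, body)
```

In a real deployment the dispatcher would also inject credentials and handle errors, but the core idea is the same: the REST API stays untouched, and the MCP layer is pure translation.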

## When you need more than one MCP server

Running a single MCP server during development is straightforward. You connect Claude Desktop or Cursor to a local server, test your tools, and iterate. The experience is simple and self-contained.

Production is a different problem. A real deployment might involve dozens of MCP servers — one for each database, SaaS platform, internal service, and third-party API your AI applications need to reach. Each server has its own authentication requirements, rate limits, and access policies.

At this scale, several problems compound:

  • Authentication sprawl: Each MCP server manages its own credentials. There is no centralized identity layer, so every server-client connection requires separate auth configuration. Credential rotation becomes a per-server operational task.
  • No fleet-wide observability: You cannot see which agents are calling which tools, how often, or with what error rates — unless you instrument each server individually.
  • Governance gaps: Who authorized this agent to access production databases? Which teams have access to which MCP servers? Without centralized policy, these questions have no consistent answer.
  • Discovery does not scale manually: With five servers, developers can hardcode connection details. With fifty, you need a registry. With hundreds, you need automated discovery.

A pattern emerges here — the same pattern the API industry identified and solved over the past decade. When you have many services behind many endpoints with different auth and different policies, you put a [gateway](https://konghq.com/blog/learning-center/what-is-a-mcp-gateway) in front of them: centralized authentication, rate limiting, observability, and routing through a single control plane.

Kong builds infrastructure for exactly this problem. [Kong AI Gateway](https://konghq.com/blog/product-releases/enterprise-mcp-gateway) provides centralized authentication, rate limiting, [observability, and governance](https://konghq.com/blog/product-releases/securing-observing-governing-mcp-servers-with-ai-gateway) across MCP servers — the same capabilities Kong has delivered for REST APIs at scale. [Kong MCP Registry](https://developer.konghq.com/konnect-platform/konnect-mcp/) gives agents a centralized catalog to discover approved MCP servers at runtime, replacing hardcoded connections with dynamic, policy-governed discovery. And Kong's MCP Client in Insomnia lets teams test and validate MCP servers before agents consume them in production. For a deeper look at how these components work together, see the [AI Gateway, MCP Gateway, and MCP Server](https://konghq.com/blog/engineering/ai-gateway-mcp-gateway-mcp-server-breakdown) technical breakdown.

This is not a new category of problem. It is the API gateway pattern applied to AI infrastructure. The difference is the consumer: instead of applications calling APIs, agents call MCP servers. The governance requirements — auth, rate limiting, observability, access control — are the same.


If you are evaluating MCP for production, explore how Kong governs MCP servers at scale. See the [MCP Gateway report](https://konghq.com/resources/reports/mcp-gateway) or learn how [Kong Konnect](https://konghq.com/products/kong-konnect/agents) provides centralized control for agentic AI infrastructure. Ready to try it? [Get started with Kong MCP](https://developer.konghq.com/mcp/kong-mcp/get-started/) or [explore the available tools](https://developer.konghq.com/mcp/kong-mcp/tools/). [Request a demo](https://konghq.com/contact-sales) to see how it works in your environment.


## MCP server FAQs

**What exactly is an MCP server?** An MCP server is a process that exposes tools, resources, and prompts to AI applications using the Model Context Protocol. It acts as the interface between an LLM-powered client and an external system like a database, API, or SaaS product.

**Why would I need an MCP server?** When you want AI applications to interact with external systems through a standard interface instead of writing custom integration code for each connection. MCP servers replace per-service glue code with a single, consistent protocol.

**What is the difference between a REST API and an MCP server?** A REST API uses custom schemas for application-to-application communication. An MCP server provides a standardized, self-describing interface designed for AI applications, with built-in capability negotiation so clients automatically discover available tools and resources.

**Does ChatGPT use MCP?** Yes. [OpenAI announced MCP support for ChatGPT in March 2025](https://openai.com/index/new-tools-and-features-in-the-responses-api/). Other platforms including Claude, Cursor, and Windsurf also support MCP as a standard protocol for connecting AI applications to external tools.

**How is MCP different from tool calling?** Tool calling is an LLM feature that lets the model request execution of a specific function. MCP is the standardized protocol that defines how those function calls reach external systems, how capabilities are discovered, and how results are returned. Tool calling is what the LLM does; MCP is how the call gets to the server and back.
