[Product Releases](/blog/product-releases)
July 9, 2025
5 min read

# Kong AI Gateway 3.11: Reduce Token Spend, Unlock Multimodal Innovation

Marco Palladino
CTO and Co-Founder of Kong

## New Multimodal Capabilities, New AI Prompt Compression, Integration with AWS Bedrock Guardrails, and More

Today, I'm excited to announce one of our largest Kong AI Gateway releases yet, 3.11, which ships with several new features that are critical for building modern, reliable AI agents in production. We strongly recommend updating to this version to get access to the latest and greatest in AI infrastructure.

The full changelog can be found [here](https://developer.konghq.com/gateway/changelog/).

## Introducing 10+ GenAI capabilities, including multimodal endpoints

This release significantly expands the range of GenAI capabilities that Kong AI Gateway supports out of the box.

## Batch, Assistants, and Files

_Azure | OpenAI_

- **Batch**: Enables efficient parallel execution of multiple LLM calls, reducing latency and cost at scale.
- **Assistants**: Simplify orchestration of multistep AI workflows, enabling developers to build stateful, tool-augmented agents with memory.
- **Files**: Provide persistent storage for documents and context, allowing richer, more informed interactions with LLMs across sessions.
## Audio Transcription, Translation, and Speech API

_Azure | OpenAI_

- **Speech-to-text**: Transcribe audio input to text for call summarization, voice agents, and meeting analysis.
- **Real-time translation**: Convert spoken input across languages, enabling multilingual voice interfaces.
- **Text-to-speech**: Synthesize natural-sounding audio from LLM responses to power voice-based agents.
## Image Generation and Edits API

_Azure | OpenAI | Gemini | AWS Bedrock_

- **Image generation**: Generate images from text prompts for creative, marketing, and design applications.
- **Image editing**: Modify existing images using instructions and masks, useful for dynamic content workflows.
- **Multimodal agents**: Equip agents with visual input/output capabilities to enhance UX and task range.
## Realtime API

_Azure | OpenAI_

- **Streaming completions**: Stream token-by-token output for fast, interactive user experiences.
- **Low latency**: Reduce time-to-first-token and improve perceived responsiveness in chat UIs.
- **Analytics**: Monitor streaming behavior and performance metrics.
## Responses API: Enhanced Response Introspection

_Azure | OpenAI_

- **Response metadata**: Access logprobs, function calls, and tool usage for each LLM output.
- **Debugging and evaluation**: Enable advanced observability and response-level quality checks.
- **Control and tuning**: Use metadata to build reranking, retries, or hybrid generation strategies.
## Rerank API

_AWS Bedrock | Cohere_

- **Contextual reranking**: Improve the relevance of retrieved documents and results in RAG pipelines.
- **Flexible inputs**: Send any list of candidates to be re-ordered based on prompt context.
- **Improved accuracy**: Boost final LLM response quality through better grounding.
## AWS Bedrock Agent APIs

_AWS Bedrock_

- **Converse / ConverseStream**: Execute step-by-step agent plans with or without streaming for advanced orchestration.
- **RetrieveAndGenerate**: Combine retrieval with generation in one API call for simplified RAG.
- **RetrieveAndGenerateStream**: Stream RAG results for real-time agent experiences.
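As a sketch of what calling Converse through the gateway might look like, the payload shape follows AWS's Converse request format, while the route URL is hypothetical and depends on your deployment:

```python
import json
import urllib.request

# Hypothetical Kong route fronting Bedrock's Converse API; adjust to your deployment.
GATEWAY_URL = "http://localhost:8000/bedrock/converse"

def build_converse_payload(user_text: str, max_tokens: int = 256) -> dict:
    """Build a Bedrock Converse-style request body."""
    return {
        "messages": [{"role": "user", "content": [{"text": user_text}]}],
        "inferenceConfig": {"maxTokens": max_tokens},
    }

def converse(user_text: str) -> dict:
    """Send one conversational turn through the gateway and return the parsed response."""
    req = urllib.request.Request(
        GATEWAY_URL,
        data=json.dumps(build_converse_payload(user_text)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Because the gateway sits in front of Bedrock, the same policies (auth, rate limits, analytics) apply to agent traffic as to any other API traffic.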
## Generate and Generate_Stream API

_Hugging Face_

- **Generate**: Use open-source models for text generation across tasks and industries.
- **Generate Stream**: Stream text outputs in real time for chat and live inference use cases.
- **Open model ecosystem**: Leverage the flexibility of Hugging Face's vast library of models.
## Embeddings API

_Azure | OpenAI | Gemini | AWS Bedrock | Mistral | Cohere_

- **Text-to-embedding conversion**: Transform text into vector representations for semantic search, clustering, recommendations, and RAG.
- **Multivendor support**: Use OpenAI, Azure, Cohere, Mistral, Gemini, and Bedrock embeddings through a unified interface, including all OpenAI-compatible models.
- **Analytics**: Track token usage, similarity scoring, and latency metrics for observability.
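Because the gateway exposes these providers behind an OpenAI-compatible interface, a client only needs standard HTTP. A minimal sketch, assuming a hypothetical Kong route at `http://localhost:8000/embeddings`:

```python
import json
import urllib.request

# Hypothetical gateway route; the model name is an example and is resolved
# to whichever provider the gateway is configured to front.
GATEWAY_URL = "http://localhost:8000/embeddings"

def build_payload(texts: list[str], model: str = "text-embedding-3-small") -> dict:
    """Build an OpenAI-compatible embeddings request body."""
    return {"model": model, "input": texts}

def embed(texts: list[str]) -> list[list[float]]:
    """Request embeddings through the gateway and return one vector per input text."""
    req = urllib.request.Request(
        GATEWAY_URL,
        data=json.dumps(build_payload(texts)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return [d["embedding"] for d in json.load(resp)["data"]]
```

Switching between embedding vendors then becomes a gateway configuration change rather than a client code change.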

## Introducing a new prompt compression plugin

With generative AI applications becoming more pervasive, the volume of requests to LLMs increases, and costs rise in proportion. As with any cost to our business, we must look for efficiency savings. LLM costs are typically based on token usage — the longer the prompt, the more tokens are consumed per request. Prompts will often contain padding or redundant words or phrases that can be removed or shortened while retaining the semantic intent of the request.
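As a purely illustrative sketch of the idea (not Kong's actual algorithm, which is far more sophisticated), even naively dropping filler words shrinks a prompt:

```python
import re

# Filler words that rarely change a prompt's semantic intent.
# Purely illustrative; a real compressor uses semantic analysis, not a stopword list.
FILLERS = {"please", "kindly", "basically", "actually", "really",
           "very", "just", "that", "a", "an", "the"}

def compress(prompt: str) -> str:
    """Naively compress a prompt by dropping filler words."""
    words = re.findall(r"\S+", prompt)
    kept = [w for w in words if w.lower().strip(".,!?") not in FILLERS]
    return " ".join(kept)

prompt = "Could you please just give me a really short summary of the following report?"
short = compress(prompt)
print(len(prompt.split()), "->", len(short.split()))  # 14 -> 9 words
```

Fewer words means fewer tokens, and since LLM pricing is per token, the saving compounds across every request.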

The new prompt compression plugin removes this redundancy before the request reaches the LLM. In our demo, compression effectively halved the token count, and you can control the level of compression or target a specific token count. Our testing has shown that this approach can achieve up to a 5x cost reduction while preserving 80% of the intended semantic meaning of the original prompt.

Take a look at the [docs for more examples](https://developer.konghq.com/how-to/compress-llm-prompts).

In real-world usage, prompts are much larger and are made even more so by automatic context injection, whether that be system prompts or [injecting Retrieval-Augmented Generation (RAG) context](https://konghq.com/blog/engineering/build-your-own-internal-rag-agent). This additional context can also be compressed. In fact, our testing has shown that compressing the context while retaining the original prompt's fidelity can provide an optimal balance between cost reduction and intent retention.

This complements other cost-saving measures already available in Kong, such as Semantic Caching, which avoids hitting the LLM service when a similar request has already been answered, and AI Rate Limiting, which can set time-based token or cost limits per application, team, or user.

## Introducing AWS Bedrock Guardrails support

It is well understood that generative AI applications can sometimes produce unpredictable outputs – confidence in applications can quickly be eroded by a few missteps. You need to be able to keep your AI-driven applications “on topic”, block profanity or other undesirable language, redact personally identifiable information, and reduce hallucinations. You need guardrails.

Today, with Kong AI Gateway, you can already implement policies that redact PII with our built-in [PII Sanitizer](https://docs.konghq.com/hub/kong-inc/ai-sanitizer/) and [Semantic Prompt Guard](https://docs.konghq.com/hub/kong-inc/ai-semantic-prompt-guard/) plugins. We also support policies that let you use [Azure AI Content Safety](https://docs.konghq.com/hub/kong-inc/ai-azure-content-safety/), Azure's managed guardrails service.

Today, we're announcing support for [AWS Bedrock Guardrails](https://developer.konghq.com/plugins/ai-aws-guardrails/) to help safeguard your AI applications from a wide range of both malicious and unintended consequences. You can find [more examples in the docs](https://developer.konghq.com/how-to/use-ai-aws-guardrails-plugin).
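As a rough sketch, enabling the plugin on a service via the Kong Admin API might look like the following. The service name and config field names here are illustrative assumptions; consult the plugin docs linked above for the exact schema:

```python
import json
import urllib.request

# Kong Admin API (default port 8001); "llm-service" is a placeholder service name.
ADMIN_URL = "http://localhost:8001/services/llm-service/plugins"

def guardrails_plugin_config(guardrail_id: str, version: str, region: str) -> dict:
    """Assemble an ai-aws-guardrails plugin body (field names are illustrative)."""
    return {
        "name": "ai-aws-guardrails",
        "config": {
            "guardrails_id": guardrail_id,    # illustrative field name
            "guardrails_version": version,    # illustrative field name
            "aws_region": region,
        },
    }

def enable_guardrails(guardrail_id: str, version: str = "DRAFT",
                      region: str = "us-east-1") -> dict:
    """POST the plugin config to the Admin API and return Kong's response."""
    body = json.dumps(guardrails_plugin_config(guardrail_id, version, region)).encode()
    req = urllib.request.Request(
        ADMIN_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Because the policy lives at the gateway, the guardrail applies to every consumer of the service without touching application code.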

As a product owner, you can continue to monitor applications, deliver incremental quality improvements, and react immediately by adjusting policies, all without any changes to your application code. Kong AI Gateway helps you keep risks in check and increases confidence in the rollout of AI-driven applications and innovation.

## Visualize your AI traffic with the new AI Manager

We also recently introduced a new AI Manager in Konnect, enabling you to easily expose LLMs for consumption by your AI agents and to govern, secure, and observe LLM traffic through a brand-new user interface, straight from your browser.

With AI Manager you can:

- **Manage AI policies via Konnect**: Govern, secure, accelerate, and observe AI traffic in a self-managed or fully managed AI infrastructure that's easy to deploy.
- **Curate your LLM catalog**: See which LLMs are available for consumption by AI agents and applications, with custom tiers of access and governance controls.
- **Visualize the agentic map**: Observe, at any given time, which agents are consuming the LLMs you've decided to expose to the organization.
- **Observe LLM analytics**: Measure token, cost, and request consumption with custom dashboards and insights for a fine-grained understanding of your AI traffic.

Read more about the new AI Manager [here](https://konghq.com/blog/product-releases/kong-ai-manager).

## Get started with Kong AI Gateway today

Ready to try the new release of Kong AI Gateway? You can [get started for free with Konnect Plus](https://konghq.com/products/kong-konnect/register). If you already have a Konnect account, visit the [official product page](https://konghq.com/products/kong-ai-gateway) or dive straight into the [demos and tutorials](https://developer.konghq.com/ai-gateway/).

Want to learn more about moving past the AI experimentation phase and into production-ready AI systems? Check out this webinar on how to [drive real AI value with state-of-the-art AI infrastructure](https://konghq.com/events/webinars/state-of-the-art-ai-infrastructure).

Tags: [AI Gateway](/blog/tag/ai-gateway) | [AWS](/blog/tag/aws) | [LLM](/blog/tag/llm)
