# Move More Agentic Workloads to Production with AI Gateway 3.13
**MCP ACLs, Claude Code Support, and New Guardrails**
Greg Peranich
Staff Product Manager, Kong
Kong AI Gateway 3.13 moves enterprises from AI experimentation to shipping production-grade agents by unlocking new capabilities focused on agentic security, developer productivity, and resilience, including MCP tool-level access control, expanded provider support, and smarter load balancing.
## New providers, smarter routing, stronger guardrails — because AI infrastructure should be as robust as APIs
Today, we're proud to announce one of the most significant updates to Kong AI Gateway yet. This release is all about helping you move from “we’re experimenting with LLM- and MCP-powered workflows” to “let’s ship our first production-grade agent to market,” with a special focus on the hard problems of agentic security, developer productivity, and resilience.
Kong AI Gateway 3.13 unlocks a wave of new capabilities that power enterprise-grade AI infrastructures — from MCP tool-level access control and circuit breakers to expanded provider support, advanced generation workflows, and load balancing improvements. If you’re building multi-model, agentic, real-time, or mission-critical AI systems, you’ll want to upgrade.
## 🔐 MCP ACLs: Fine-grained tool and access control
With the rise of agent architectures and multi-tool workflows, controlling what tools an agent (or user) can call — and when — is essential. Since the rise of MCP, organizations (and often different teams within organizations) have approached this problem in different, often non-standardized ways. Kong AI Gateway solves for this, giving AI and Infra teams the ability to standardize how their org:
- **Filters MCP tools**: Restrict which tools or capabilities are exposed to an agent or consumer by default.
- **Enforces access control at the MCP layer**: Use ACLs to enforce least-privilege access across MCP servers.
- **Dynamically manages ACL configuration**: ACLs can be managed via declarative config (e.g., decK / Terraform) or via the control plane.
Let’s consider exposing an MCP interface for fictitious airline KongAir's API. In the example configuration below, we can ensure that only users in the `booking-agents` consumer group will be allowed to consume KongAir MCP tools by default. Users that have been mapped to the `developers` consumer group will only be permitted access to the tools that allow flight details to be fetched.
```yaml
plugins:
  - name: ai-mcp-proxy
    config:
      mode: conversion-listener
      logging:
        log_payloads: true
        log_statistics: true
      consumer_identifier: username
      default_acl:
        - allow:
            - booking-agents
          deny:
            - developers
          scope: tools
          include_consumer_groups: true
      tools:
        - description: Get KongAir planned flights
          annotations:
            title: Get KongAir planned flights
          method: GET
          path: /flights
          parameters:
            - name: date
              in: query
              description: Filter by date (defaults to current day)
              required: false
              schema:
                type: string
          acl:
            allow:
              - developers
        - description: Get a specific flight by flight number
          annotations:
            title: Get a specific flight by flight number
          method: GET
          path: /flights/{flightNumber}
          parameters:
            - name: flightNumber
              in: path
              description: The flight number
              required: true
              schema:
                type: string
          acl:
            allow:
              - developers
        - description: Fetch more details about a flight
          annotations:
            title: Fetch more details about a flight
          method: GET
          path: /flights/{flightNumber}/details
          parameters:
            - name: flightNumber
              in: path
              description: The flight number
              required: true
              schema:
                type: string
          acl:
            allow:
              - developers
        - description: Book a flight
          annotations:
            title: Book a flight
          method: POST
          path: /flights/{flightNumber}/bookings
          parameters:
            - name: flightNumber
              in: path
              description: The flight number to book
              required: true
              schema:
                type: string
          request_body:
            required: true
            content:
              application/json:
                schema:
                  type: object
                  properties:
                    passenger_name:
                      type: string
                    passenger_email:
                      type: string
                      format: email
                    seat_preference:
                      type: string
                      enum: [window, aisle, middle]
                  required:
                    - passenger_name
                    - passenger_email
        - description: Delete a flight booking
          annotations:
            title: Delete a flight booking
          method: DELETE
          path: /bookings/{bookingId}
          parameters:
            - name: bookingId
              in: path
              description: The booking ID to delete
              required: true
              schema:
                type: string
      server:
        timeout: 60000
```
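To make the resolution semantics concrete, here is a minimal Python sketch of how a per-tool `allow` can override the default ACL. This is not Kong's implementation — the tool names and the exact precedence rule are illustrative assumptions based on the configuration above:

```python
# Illustrative sketch of MCP ACL resolution -- NOT Kong's implementation.
# Assumption: a per-tool `allow` grants access even when the default ACL
# denies the consumer group; otherwise the default ACL decides.
DEFAULT_ACL = {"allow": {"booking-agents"}, "deny": {"developers"}}

# Hypothetical tool names mirroring the KongAir config above.
TOOLS = {
    "get_planned_flights": {"allow": {"developers"}},
    "get_flight":          {"allow": {"developers"}},
    "get_flight_details":  {"allow": {"developers"}},
    "book_flight":         None,  # no per-tool ACL: default applies
    "delete_booking":      None,
}

def visible_tools(group: str) -> list[str]:
    """Return the tools a consumer group may discover and call."""
    visible = []
    for name, acl in TOOLS.items():
        tool_allows = acl is not None and group in acl["allow"]
        default_allows = (group in DEFAULT_ACL["allow"]
                          and group not in DEFAULT_ACL["deny"])
        if tool_allows or default_allows:
            visible.append(name)
    return visible
```

Under these assumptions, a `booking-agents` consumer sees all five tools, while a `developers` consumer sees only the three read-only flight-lookup tools — the least-privilege split described above.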
These features help you ensure that AI traffic through the Gateway remains secure, governed, and compliant — even in highly dynamic, multi-agent environments.
## 🌍 Expanded provider ecosystem
Say goodbye to lock-in. This release adds first-class support for several new providers:
- **Anthropic (native SDK support)** — optimized for code generation, developer tooling, and agent-driven code workflows.
- **xAI / Grok** — enabling reasoning workloads and inference use cases at scale.
- **Aliyun (Alibaba Cloud / Qwen)** — ideal for deployments in Asia-Pacific with compliance and low-latency needs.
With these additions, Kong AI Gateway becomes a truly multi-cloud, multi-model AI platform — giving teams flexibility to choose providers based on cost, compliance, performance, or capability.
## 🔨 Claude Code Support
We're excited to empower organizations to govern a tool that nearly every developer uses: Anthropic’s Claude Code. With one line of configuration, access to Claude can be secured, governed, and observed end-to-end, finally adding a governance layer to Claude-powered developer productivity.
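As a sketch of what that one line looks like in practice: Claude Code honors the `ANTHROPIC_BASE_URL` environment variable, so pointing it at a gateway route (the URL below is a hypothetical example, not a real endpoint) sends every request through the gateway's policies:

```python
# Hypothetical gateway route URL -- substitute your own AI Gateway endpoint.
# Claude Code reads ANTHROPIC_BASE_URL from the environment, so setting it
# before launching the CLI routes all Anthropic traffic through the gateway.
import os

os.environ["ANTHROPIC_BASE_URL"] = "https://ai-gateway.example.com/anthropic"
```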
## 🛡️ New guardrails with Lakera.ai
AI infrastructure isn’t just about performance and scale — safety and compliance matter. That’s why this release brings built-in integration with Lakera.ai guardrails. With AI Gateway 3.13, you can:
- Enforce content safety, block unsafe or toxic generations, and prevent leakage of PII or sensitive data.
- Apply guardrails uniformly, regardless of the underlying model provider.
- Integrate seamlessly: no code changes required — policies are enforced at the gateway layer.
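To illustrate the idea — this is a conceptual sketch of gateway-layer guardrail semantics, not the Lakera.ai API — the same screening applies to every completion, regardless of which provider generated it:

```python
# Conceptual sketch of a gateway-layer guardrail -- NOT the Lakera.ai API.
# The patterns and blocking behavior here are illustrative assumptions.
import re

PII_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),        # US SSN-style number
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),  # email address
]

def enforce_guardrail(completion: str) -> str:
    """Block a generation that leaks PII, independent of the upstream model."""
    for pattern in PII_PATTERNS:
        if pattern.search(completion):
            return "[blocked: response contained PII]"
    return completion
```

Because the check runs at the gateway, swapping the upstream model from one provider to another changes nothing about the policy.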
## ⚙️ Advanced generation workflows
Modern AI workflows are complex. Think: batch inference, file-based generation, multi-modal inputs (text, image, audio), streaming outputs, and long-lived agent loops. This release expands support for:
- **Batch operations** — efficient parallel inference for large workloads
- **File-based generation / file uploads** — supporting document processing, ingestion, and rich context workflows across every provider that offers file-based operations.
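As a rough sketch of the batch pattern — the `generate` helper below is a placeholder for a call through a gateway chat route, not a real API — parallel fan-out over a prompt list looks like this:

```python
# Illustrative batch fan-out; `generate` is a stand-in for a real call
# through a gateway chat route, not an actual Kong or provider API.
from concurrent.futures import ThreadPoolExecutor

def generate(prompt: str) -> str:
    # Placeholder: in practice this would POST to the gateway's chat endpoint.
    return f"completion for: {prompt}"

def batch_generate(prompts: list[str]) -> list[str]:
    """Run many inference requests in parallel, preserving input order."""
    with ThreadPoolExecutor(max_workers=8) as pool:
        return list(pool.map(generate, prompts))
```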
All providers — old and new — that support these advanced generation patterns can now benefit from a unified, consistent interface, reducing complexity and speeding up development.
With these enhancements, Kong AI Gateway strengthens its role as a purpose-built load-balancing and traffic-management layer for GenAI and agent workloads.
## ⚡ Why this matters
AI systems have moved beyond single-model, one-off requests. What organizations need now is infrastructure that handles:
- Multi-model strategies (vendor diversification, specialization per provider)
- Agentic workflows with complex tool interactions
- Real-time, streaming, multi-modal generation at scale
- Safety, compliance, and governance — without slowing down innovation
- Resilience, failover, and performance under load
This release empowers you to build systems that meet — and exceed — those needs.