Engineering
January 27, 2026
5 min read

Building Secure AI Agents with Kong's MCP Proxy and Volcano SDK

Eugene Tan
Solutions Engineer (APAC), Kong

Modern AI applications are no longer just about sending prompts to an LLM and returning text. As soon as AI systems need to interact with real business data, internal APIs, or operational workflows, the problem becomes one of orchestration, security, and control. The challenge is to build secure AI agents without embedding fragile logic or exposing sensitive systems directly to a model.

This is where a layered architecture using Volcano SDK, DataKit, and Kong MCP Proxy becomes compelling.

In this architecture, the Model Context Protocol (MCP) serves as the universal language for AI tools, decoupling the model from the backend implementation. Unlike direct API calls, MCP provides a standardized way to expose data and functionality, making it easier to secure and audit.

A simple agent, by design

The example below shows how an AI agent can be built using Volcano SDK with minimal code, while still interacting with backend services in a controlled way.

The agent is created by first configuring an LLM, then defining an MCP endpoint that represents a promotions service. The agent executes in clear, sequential steps: context is automatically preserved between steps, and tool selection is handled by the framework rather than by manual wiring. From the developer's point of view, the code describes what the agent is trying to do, not how to manage every interaction.
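The original code snippet does not survive in this version of the post, so here is a minimal sketch in Python of the chained-step pattern described above. The names (`Agent`, `step`, `run`) are illustrative only and are not the actual Volcano SDK API, which is configured with an LLM and an MCP endpoint for the promotions service.

```python
# Hypothetical sketch of declarative, chained agent steps.
# Names are illustrative; this is NOT the Volcano SDK API.

class Agent:
    def __init__(self):
        self.steps = []

    def step(self, prompt):
        # Each call registers an intent; the framework decides
        # which MCP tool (if any) satisfies it at run time.
        self.steps.append(prompt)
        return self  # chaining keeps the definition declarative

    def run(self):
        context = []  # context is carried between steps automatically
        for prompt in self.steps:
            # Stand-in for an LLM/tool invocation that sees prior context.
            context.append(f"step: {prompt}")
        return context

result = (
    Agent()
    .step("Fetch current promotions for the customer")
    .step("Summarise the best offer in one sentence")
    .run()
)
```

The point of the pattern is that the application code lists intents in order; state passing and tool selection live in the runtime, not in the caller.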

Volcano SDK vs. traditional frameworks

Unlike heavier frameworks such as LangChain, which often require verbose chain definitions, Volcano SDK focuses on intent-based definitions. For example, where other frameworks might require manual state management for a multi-turn conversation, Volcano SDK automatically handles the context window, allowing developers to focus purely on business logic.


Why building secure agents with Volcano SDK is simple

Volcano SDK is designed to remove friction from agent development. Instead of writing custom function-calling logic, maintaining large prompt templates, or manually passing context between calls, developers define intent through simple chained steps. The SDK manages state, tool invocation, and execution flow automatically.

This makes agents easier to reason about and safer to evolve over time. As requirements change, steps can be added or adjusted without rewriting the core logic. Most importantly, the SDK keeps application code clean by pushing complexity into a well-defined agent runtime rather than scattering it across services.

Orchestrating business APIs with DataKit and MCP

While Volcano SDK focuses on agent behavior, DataKit focuses on data and API orchestration. MCP endpoints exposed to agents are powered by DataKit, which acts as a controlled aggregation layer in front of backend systems.

MCP vs. direct API calls

A common question developers ask is: "Why use MCP instead of allowing the LLM to call APIs directly?" The answer lies in security and decoupling. Direct API access (like OpenAI function calling against raw endpoints) exposes your internal schema to the model. By using DataKit to expose resources via MCP, you create a sanitized "view" of your data specifically for the AI, preventing it from hallucinating parameters that could crash internal services.

Instead of allowing an LLM to interact directly with databases or internal APIs, DataKit exposes task-specific, curated interfaces. These interfaces can combine multiple services, apply business rules, enforce schemas, and return only the data that is appropriate for an agent to see.

This separation is critical. It ensures that business logic remains deterministic and auditable, while agents remain consumers of governed capabilities rather than free-form explorers of internal systems.
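The "sanitized view" idea can be sketched in a few lines. In this illustration, the agent-facing interface validates its input and returns only curated fields, so the model never sees (or reaches) the raw backend record. The record shape and field names are assumptions for the example, not part of DataKit itself.

```python
# Sketch of a curated, agent-facing view over a backend record.
# Field names and the record shape are illustrative assumptions.

RAW_ORDER = {
    "order_id": "A-1001",
    "status": "shipped",
    "card_number": "4111-1111-1111-1111",  # must never reach the model
    "internal_margin": 0.37,               # must never reach the model
}

AGENT_VISIBLE_FIELDS = {"order_id", "status"}

def get_order_for_agent(order_id: str) -> dict:
    # Enforce a simple input schema before touching the backend.
    if not order_id.startswith("A-"):
        raise ValueError("order_id must look like 'A-...'")
    record = RAW_ORDER  # stand-in for the real backend lookup
    # Return only the fields the agent is allowed to see.
    return {k: v for k, v in record.items() if k in AGENT_VISIBLE_FIELDS}

view = get_order_for_agent("A-1001")
```

In the real architecture this filtering and validation is declared in the DataKit orchestration layer rather than hand-written per endpoint, but the contract is the same: the agent consumes a governed view, not the raw API.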

Securing MCP traffic with Kong AI Gateway

Sitting in front of DataKit is the Kong AI Gateway with MCP Proxy, which brings enterprise-grade governance to agent workflows. Kong can encapsulate any existing API (in this case, the DataKit workflow) as an MCP server without additional code; the MCP Proxy can also simply proxy to existing MCP servers. Kong then enforces authentication, authorization, rate limits, and observability for MCP traffic in the same way it does for traditional APIs.

Specifically, this gateway layer solves several critical security challenges:

  1. Rate Limiting for Agents: Prevent an agent from running up costs or overwhelming services by enforcing token-based or request-based limits.
  2. Authentication: Ensure that only authorized agents can access specific MCP tools, regardless of what the LLM "wants" to do.
  3. Observability: Log every tool call and data access attempt for compliance auditing.
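To make the first point concrete, here is a toy request-based limiter showing the budget-per-window idea behind agent rate limiting. In a real deployment this is gateway configuration (Kong's rate-limiting policies), not application code; the class below only illustrates the mechanism.

```python
import time

# Minimal sliding-window, request-based limiter of the kind a
# gateway applies to agent traffic. Illustrative only; Kong
# enforces this as policy, not in application code.

class RequestLimiter:
    def __init__(self, limit: int, window_seconds: float):
        self.limit = limit
        self.window = window_seconds
        self.calls = []  # timestamps of accepted requests

    def allow(self) -> bool:
        now = time.monotonic()
        # Drop timestamps that have fallen out of the window.
        self.calls = [t for t in self.calls if now - t < self.window]
        if len(self.calls) >= self.limit:
            return False  # agent has exhausted its budget
        self.calls.append(now)
        return True

limiter = RequestLimiter(limit=3, window_seconds=60)
decisions = [limiter.allow() for _ in range(5)]
```

Token-based limits work the same way, except the budget is decremented by the token count of each LLM call rather than by one per request.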

This lets organisations scale their agentic use cases without a matching increase in development effort or total cost of ownership, since the AI Gateway encapsulates existing APIs as MCP servers without additional code.

Crucially, Kong AI Gateway controls which MCP tools an agent can invoke and under what conditions. Even if an LLM attempts prompt injection or tries to escalate its privileges, it cannot access capabilities that are not explicitly exposed through MCP endpoints. Security is enforced at the protocol and gateway level, not by trusting the model to behave correctly.

This turns MCP into a safe execution boundary rather than a soft guideline.

The end-to-end secure AI agent architecture

Together, these components form a clean and composable architecture. Volcano SDK defines agent intent and execution flow. DataKit orchestrates and governs backend APIs. Kong MCP Proxy enforces security, policy, and visibility. The LLM operates within tightly controlled boundaries, consuming only the capabilities it has been explicitly granted.

Each layer has a single responsibility, and none of them leak unnecessary complexity into application code.

Building Secure AI Agents: Key Takeaways 

This architecture allows teams to move quickly without sacrificing safety. Developers can build and iterate on agents rapidly. Platform teams retain control over data access and policy enforcement. Security teams gain visibility and confidence that AI systems cannot bypass controls through clever prompting.

A quickstart is available on GitHub that walks through the entire flow: exposing an API orchestration workflow as an MCP server via Kong without additional code, then securing the MCP and LLM interactions with semantic guardrails.

Rather than embedding AI logic everywhere, this approach creates a platform for AI—one that is scalable, auditable, and ready for real-world production use.

Ready to see this architecture in action? Book a demo today to discover how Kong’s MCP Proxy and Volcano SDK can help you deploy secure, production-grade AI agents in minutes.

Frequently Asked Questions (FAQ)

How does Kong AI Gateway secure LLM tool calls?

Kong AI Gateway secures tool calls by acting as a proxy between the LLM and your backend services (exposed via MCP). It enforces authentication, authorization, and rate limits on every request. Additionally, it can apply semantic guardrails to detect and block prompt injection attacks before they reach your internal APIs.

What is the difference between MCP and OpenAI function calling?

OpenAI function calling is a mechanism for the model to request data, but it often requires direct connectivity to your APIs. Model Context Protocol (MCP) is a standardized interface that decouples the model from the backend. When combined with DataKit and Kong, MCP ensures the model only interacts with a sanitized, governed "view" of your data, rather than raw API endpoints.
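The decoupling is visible on the wire. MCP tool invocations are JSON-RPC 2.0 messages using the `tools/call` method, so the model addresses a named tool rather than a raw API endpoint; the backend URL and schema stay hidden behind the protocol. The tool name and arguments below are illustrative.

```python
import json

# Shape of an MCP tool invocation (JSON-RPC 2.0, method "tools/call",
# per the Model Context Protocol specification). Tool name and
# arguments are illustrative assumptions.

request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_promotions",            # curated tool exposed via DataKit
        "arguments": {"customer_id": "C-42"},
    },
}

wire = json.dumps(request)
```

Because every call flows through this uniform envelope, a gateway can authenticate, rate-limit, and log tool invocations generically, without knowing anything about the backend behind each tool.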

Can I use existing APIs with the Model Context Protocol?

Yes. A key benefit of using Kong AI Gateway with MCP Proxy is the ability to encapsulate existing REST or GraphQL APIs as MCP servers without writing additional code. This allows you to scale AI agent workflows using your current infrastructure while maintaining security policies.

How do I prevent prompt injection in enterprise AI agents?

Preventing prompt injection requires a layered approach. By using the Kong AI Gateway, you can enforce semantic validation on both incoming prompts and outgoing tool calls. If an LLM attempts to output a command that violates your security policy (e.g., deleting data), the Gateway blocks the request at the protocol level.
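As a rough illustration of that outbound check, the function below rejects tool calls whose tool name or argument values violate a policy before they reach the backend. Kong AI Gateway's semantic guardrails are configured as gateway policy and use semantic analysis rather than string matching; the names and rules here are illustrative assumptions only.

```python
# Toy policy check on outgoing tool calls, standing in for a
# gateway guardrail. Illustrative only; real guardrails are
# declared as gateway policy, not hand-written per service.

BLOCKED_TOOLS = {"delete_records", "drop_table"}

def allow_tool_call(tool_name: str, arguments: dict) -> bool:
    if tool_name in BLOCKED_TOOLS:
        return False  # destructive capability is never exposed
    # Reject obviously injected instructions inside argument values.
    for value in arguments.values():
        if isinstance(value, str) and "ignore previous instructions" in value.lower():
            return False
    return True

ok = allow_tool_call("get_promotions", {"customer_id": "C-42"})
blocked = allow_tool_call("delete_records", {})
```

The important property is where the check runs: at the gateway, on every call, regardless of what the model was persuaded to emit.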

Does Volcano SDK replace LangChain?

Volcano SDK is an alternative to frameworks like LangChain, focusing specifically on simplicity and secure agent orchestration. While LangChain offers a broad ecosystem for experimentation, Volcano SDK is designed for building deterministic, production-grade agents with built-in state management and clear separation of concerns.

Eugene Tan
Solutions Engineer (APAC), Kong

Eugene is passionate about all things cloud connectivity and platform engineering. He works closely with customers to build next-generation platforms powered by Kong. Prior to Kong, Eugene held Solution Architect roles at companies such as MongoDB and Accenture, and has been a practitioner at the Bank of Singapore.
