Kong AI Gateway secures tool calls by acting as a proxy between the LLM and your backend services (exposed via MCP). It enforces authentication, authorization, and rate limits on every request. Additionally, it can apply semantic guardrails to detect and block prompt injection attacks before they reach your internal APIs.
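As a concrete sketch, the client below talks to the model through the gateway instead of the provider, using the OpenAI-compatible endpoint that Kong AI Gateway exposes. The route URL, the `apikey` header (the default for Kong's key-auth plugin), and the model name are assumptions to adapt to your own configuration:

```typescript
import OpenAI from "openai";

// All names here are assumptions: substitute your own gateway route and
// whichever auth plugin (key-auth, OIDC, mTLS) your deployment enforces.
const client = new OpenAI({
  // The provider credential can live in the gateway, so the client-side
  // key is often a placeholder.
  apiKey: "unused",
  // Kong AI Gateway's OpenAI-compatible route, not api.openai.com.
  baseURL: "https://gateway.example.com/llm",
  // Consumer credential checked by Kong's key-auth plugin ("apikey" is
  // its default header name).
  defaultHeaders: { apikey: process.env.KONG_CONSUMER_KEY ?? "" },
});

async function main() {
  // This request is authenticated, rate-limited, and scanned by the
  // gateway's guardrails before being forwarded to the model provider.
  const completion = await client.chat.completions.create({
    model: "gpt-4o-mini",
    messages: [{ role: "user", content: "Summarize today's open tickets." }],
  });
  console.log(completion.choices[0].message.content);
}

main().catch(console.error);
```

Because the provider credential can be held by the gateway in this pattern, rotating keys or swapping model providers requires no client changes.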
What is the difference between MCP and OpenAI function calling?
OpenAI function calling is a mechanism for the model to emit structured requests that your application then executes, which typically means wiring each function directly to your APIs. Model Context Protocol (MCP) is a standardized interface that decouples the model from the backend. When combined with DataKit and Kong, MCP ensures the model only interacts with a sanitized, governed "view" of your data rather than raw API endpoints.
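To make the contrast concrete, here is a minimal MCP server in TypeScript using the official @modelcontextprotocol/sdk. The model discovers and calls the tool over the protocol without ever touching the backend directly; the tool name and the fetchTicket helper are hypothetical:

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// Hypothetical backend lookup standing in for your real data source.
async function fetchTicket(id: string): Promise<{ id: string; status: string }> {
  return { id, status: "open" };
}

const server = new McpServer({ name: "tickets", version: "1.0.0" });

// The model sees only the "get_ticket" tool and its schema; the REST
// endpoint or database behind fetchTicket stays hidden from it.
server.tool("get_ticket", { id: z.string() }, async ({ id }) => ({
  content: [{ type: "text", text: JSON.stringify(await fetchTicket(id)) }],
}));

await server.connect(new StdioServerTransport());
```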
Can I use existing APIs with the Model Context Protocol?
Yes. A key benefit of using Kong AI Gateway with MCP Proxy is the ability to expose existing REST or GraphQL APIs as MCP servers without writing additional code. This lets you scale AI agent workflows on your current infrastructure while keeping your existing security policies in force.
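For example, once the gateway exposes such an API as an MCP endpoint, any MCP client can discover and invoke it like a native tool. A minimal sketch, assuming a gateway MCP route at https://gateway.example.com/mcp and a generated list_orders tool:

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

// Assumed gateway MCP route; auth headers depend on the plugins you enable.
const transport = new StreamableHTTPClientTransport(
  new URL("https://gateway.example.com/mcp"),
  { requestInit: { headers: { apikey: process.env.KONG_CONSUMER_KEY ?? "" } } }
);

const client = new Client({ name: "orders-agent", version: "1.0.0" });
await client.connect(transport);

// These tools are generated by the gateway from your existing API
// definitions; no new server code was written for them.
const { tools } = await client.listTools();
console.log(tools.map((t) => t.name));

// "list_orders" is a hypothetical tool derived from a REST endpoint.
const result = await client.callTool({
  name: "list_orders",
  arguments: { status: "open" },
});
console.log(result);
```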
How do I prevent prompt injection in enterprise AI agents?
Preventing prompt injection requires a layered approach. With Kong AI Gateway, you can enforce semantic validation on both incoming prompts and outgoing tool calls. If the LLM attempts to emit a command that violates your security policy (e.g., deleting data), the Gateway blocks the request at the protocol level before it reaches your backend.
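The snippet below illustrates the pattern rather than Kong's implementation: a static deny-list layered with a stubbed semantic risk score, applied to every outgoing tool call before it would be forwarded. All names and the threshold are hypothetical:

```typescript
// Simplified illustration of the kind of check a gateway applies to each
// outgoing tool call; Kong evaluates its actual policies at the protocol
// level, and the names and threshold here are hypothetical.
type ToolCall = { name: string; arguments: Record<string, unknown> };

const DENIED_TOOLS = new Set(["delete_record", "drop_table"]);

// Stub standing in for a real semantic classifier or guardrail model.
function scoreInjectionRisk(payload: string): number {
  return /delete|drop|truncate|ignore previous/i.test(payload) ? 0.9 : 0.1;
}

function enforceToolPolicy(call: ToolCall): void {
  // Layer 1: static allow/deny rules block destructive operations outright.
  if (DENIED_TOOLS.has(call.name)) {
    throw new Error(`Blocked: tool "${call.name}" is not permitted`);
  }
  // Layer 2: semantic scoring catches injections that static rules miss.
  if (scoreInjectionRisk(JSON.stringify(call)) > 0.8) {
    throw new Error("Blocked: tool call flagged by semantic guardrail");
  }
}

// A benign call passes; a model-emitted delete_record call would throw.
enforceToolPolicy({ name: "get_ticket", arguments: { id: "42" } });
```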
Does Volcano SDK replace LangChain?
Volcano SDK is an alternative to frameworks like LangChain rather than a drop-in replacement, focused on simplicity and secure agent orchestration. While LangChain offers a broad ecosystem for experimentation, Volcano SDK is designed for building deterministic, production-grade agents with built-in state management and a clear separation of concerns.