For organizations managing dozens or hundreds of MCP servers across development and production environments, this level of visibility transforms MCP from an opaque black box into a fully instrumented, governable component of the AI infrastructure stack. That visibility is essential for maintaining service-level objectives and optimizing both performance and cost at scale.
With this new level of MCP observability, organizations can further centralize and standardize how they capture monitoring and observability metrics for their APIs, event streaming, and AI-powered applications. You can see these metrics and dashboards natively in Konnect as well as in your enterprise-wide SIEM and observability tooling.
What’s next for Kong’s enterprise MCP Gateway?
Kong AI Gateway 3.12 marks just the beginning of our MCP Gateway journey. While we won’t reveal too much just yet, we’d like to share a glimpse into the product areas we’re actively developing for upcoming releases.
- Optimize MCP context and LLM costs: You’ll be able to leverage Kong AI Gateway’s underlying semantic intelligence to automate the selection and injection of tools based on specific prompts and agent needs. This will drastically increase LLM performance and reduce overall LLM costs associated with MCP usage.
- Curate tool collections for specific use cases: Group related MCP servers into domain-specific bundles (like "DevOps" with GitHub, Jira, and Jenkins) that can be exposed through dedicated gateway endpoints, enabling agents to access contextually relevant toolsets without manual server discovery.
- Centralize policy management at the server bundle level: Apply authentication policies and tool selection rules once per bundle rather than per server, dramatically simplifying governance while ensuring consistent security and access controls across all MCP servers within each collection.
Rest assured, there's much more coming, but we wanted to give you a little insight into where we're going in the near future. If you’ve got any questions or requests for our MCP Gateway, please reach out to your Kong CSM or point of contact.
How can I get started with the Kong MCP Gateway?
Kong’s MCP Gateway is a part of our larger AI Gateway offering and is an enterprise-only solution that leverages paid plugins. You can use the MCP Gateway functionality in both Kong Gateway Enterprise for fully self-hosted deployments and in Kong Konnect for hybrid and cloud deployments where you also get the value of Konnect’s Developer Portal, Service Catalog, Advanced Analytics, and more.
If you want to try the new MCP Gateway, either reach out to your CSM or a known point of contact at Kong — or book a demo to explore an enterprise POC.
Not just the MCP Gateway: Kong introduces advanced new LLM Gateway functionality
Kong AI Gateway isn’t just an MCP Gateway. It started as and continues to be the most advanced and feature-rich LLM Gateway on the market.
This is crucial because MCP and LLM use cases rely on each other: both are critical parts of the AI data path that need to be built, run, discovered, and governed.
We’ve already discussed our MCP updates. Now it’s time to turn our attention to the breakthrough new LLM Gateway value added in 3.12, including the new LLM as a Judge policy/plugin, the GCP Model Armor integration, and more. Let’s dig in!
Bolster LLM output quality with the LLM as a Judge policy
As organizations deploy AI applications and agents at scale, ensuring output quality and safety becomes paramount. However, traditional rule-based validation struggles to evaluate the nuanced, natural language responses that LLMs generate. The "LLM as a Judge" approach addresses this by leveraging a separate LLM instance to assess the quality, accuracy, relevance, and safety of primary LLM outputs before they reach end users or trigger downstream actions.
This approach enables sophisticated evaluation criteria that would be impractical to encode as static rules: detecting hallucinations, verifying logical consistency, assessing tone appropriateness, and identifying potential policy violations. The trick here is governance. Like any best practice, LLM as a Judge must be implemented consistently across the organization to drive the most value and deliver maximum confidence in LLM outputs, something that will only become more important in a world of MCP, AI agents, and AI coding assistants.
Your team now has the power of this consistency in your Kong AI Gateway with the new LLM as a Judge policy/plugin. When enabled, Kong AI Gateway leverages a third-party LLM to evaluate responses from proxied LLMs and determine their quality. From there, the AI Gateway can:
- Filter problematic outputs
- Route responses for human review
- Continuously improve MCP and agentic workflow reliability
And your team can do all of this consistently, and without sacrificing velocity, by designing and enforcing LLM as a Judge policies as automated guardrails using Kong’s industry-leading automation solutions.
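To make the pattern concrete, here’s a minimal, conceptual sketch of LLM as a Judge in Python. This is not Kong’s plugin configuration or API; `call_llm`, the judge prompt, and the `judge-model` name are hypothetical stand-ins for whatever evaluation model and client you use.

```python
import json

JUDGE_PROMPT = (
    "You are a strict reviewer. Given a user prompt and a model response, "
    "rate the response for accuracy, relevance, and safety on a 1-10 scale "
    "and list any policy violations. Reply as JSON: "
    '{"score": <int>, "violations": [<strings>]}'
)

def call_llm(model: str, system: str, user: str) -> str:
    """Hypothetical helper: send a chat request to `model` and return its text."""
    raise NotImplementedError("wire this to your LLM provider of choice")

def judge_response(prompt: str, response: str, threshold: int = 7) -> dict:
    """Ask a separate judge model to score the primary model's output, then decide
    whether to pass it through, block it, or route it for human review."""
    verdict = json.loads(call_llm(
        model="judge-model",  # assumption: any capable evaluation model
        system=JUDGE_PROMPT,
        user=f"PROMPT:\n{prompt}\n\nRESPONSE:\n{response}",
    ))
    if verdict["violations"]:
        return {"action": "block", "verdict": verdict}         # filter problematic outputs
    if verdict["score"] < threshold:
        return {"action": "human_review", "verdict": verdict}  # route for review
    return {"action": "pass", "verdict": verdict}               # forward to the caller
```

In Kong AI Gateway, this evaluate-then-act flow runs as a policy at the gateway layer, so every proxied LLM response is judged the same way without any application code changes.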
Strengthen your AI security posture with the new GCP Model Armor integration
Google Cloud's Model Armor represents a significant advancement in enterprise AI safety, providing sophisticated content filtering, PII detection, and adversarial attack prevention specifically designed for production LLM deployments.
Kong AI Gateway's new native integration with Model Armor enables organizations to leverage Google's enterprise-grade safety controls without building custom middleware or introducing additional latency into their AI workflows. This integration is particularly valuable for enterprises already invested in the Google Cloud ecosystem, as it allows security policies configured in Model Armor to be enforced consistently across all LLM traffic flowing through Kong — whether targeting Google's Vertex AI models or third-party providers.
By centralizing Model Armor enforcement at the gateway layer, organizations gain unified protection across their entire multi-model AI infrastructure while maintaining the flexibility to apply differentiated safety policies based on use case, user context, or compliance requirements.
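To illustrate what gateway-layer enforcement means, the sketch below applies the same safety screening to the request and to the response around any upstream model call. The `screen` function is a hypothetical placeholder, not Model Armor’s or Kong’s actual API; with the integration, this screening is configured as gateway policy rather than written by hand.

```python
def screen(text: str, policy: str) -> bool:
    """Hypothetical placeholder: return True if `text` passes the named safety policy."""
    raise NotImplementedError("replace with your content-safety client")

def proxied_llm_call(prompt: str, policy: str, upstream_call) -> str:
    """Apply the same safety policy to the request and the response, regardless of
    which upstream model the route targets (Vertex AI or a third-party provider)."""
    if not screen(prompt, policy):
        raise PermissionError("prompt rejected by safety policy")
    response = upstream_call(prompt)  # any model proxied behind the gateway
    if not screen(response, policy):
        raise PermissionError("response rejected by safety policy")
    return response
```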
More improvements to the LLM Gateway
While the LLM as a Judge policy and Model Armor integration top the list of LLM Gateway highlights, we introduced more important functionality in AI Gateway 3.12:
- PII sanitization now works on the response as well as on the request: You can sanitize both incoming requests and outgoing responses to and from an LLM, ensuring that no PII makes it into a model or out of an already-compromised model (see the sketch after this list).
- You can now use AWS MemoryDB as a vector storage option alongside Redis.
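To give a sense of what bidirectional PII sanitization involves, here’s a minimal sketch assuming a simple regex-based redactor. The patterns and placeholder names are illustrative only and don’t reflect Kong’s actual detectors or configuration.

```python
import re

# Illustrative patterns only; real deployments use far richer PII detection.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def sanitize(text: str) -> str:
    """Replace matched PII with typed placeholders."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED_{label.upper()}]", text)
    return text

def guarded_llm_call(prompt: str, upstream_call) -> str:
    clean_prompt = sanitize(prompt)         # nothing sensitive reaches the model
    response = upstream_call(clean_prompt)
    return sanitize(response)               # nothing sensitive leaks back out
```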
Where can you learn more?
If you aren’t already investing in an AI Gateway strategy, you’re likely behind others in your space. Luckily, Kong offers what you need to get started and rapidly ship AI workflows into production. If you want to learn more, check out the AI Gateway docs or reach out to your main Kong point of contact. We look forward to working with you and seeing what AI applications and agents you build on top of Kong’s advanced LLM and MCP infrastructure.
We aren’t just building MCP support into the AI Gateway. We’ve also announced brand-new MCP support across other areas of Konnect that we highly recommend you check out.
- MCP consumption and production flows with new MCP-enabled AI coding tool access to your Developer Portals
- MCP integration and composition in Konnect
- The new Konnect MCP server webpage
Check out the blog and webpage, or reach out to your Kong point of contact to learn more.