When a Kong customer, a large financial services firm, decided to scale Claude Code access from 1,000 to 5,000 developers, the initiative could easily have stalled without the right infrastructure. What worked for a controlled pilot cohort would buckle under the weight of five times the users, dozens of internal data sources, and the regulatory scrutiny that follows every technology decision in financial services.
The answer was to route all MCP server access through Kong AI Gateway — a single, governed point that gave the platform team visibility and control.
Rather than allowing developers to connect directly to MCP endpoints in ad hoc ways, Kong enforced authentication, applied rate limits, and logged every tool invocation — turning what could have become a sprawling shadow-AI problem into a managed, auditable system that compliance and security teams could actually get behind.
What makes the program robust is MCP adoption metrics built on top of Kong's telemetry. By surfacing which MCP tools are called most frequently, which teams are driving the highest utilization, and where latency is introducing friction, the platform team can make targeted interventions rather than guessing. High-usage tools can be prioritized for performance optimization; low-adoption teams can get dedicated support.
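As a rough illustration of what such a metrics layer computes, the sketch below aggregates per-tool call counts, per-team utilization, and tail latency from gateway invocation logs. The record schema, tool names, and team names here are invented for the example — they are not Kong's actual log format.

```python
from collections import defaultdict
from statistics import quantiles

# Hypothetical per-invocation records, as an AI gateway might log them.
# Field names ("tool", "team", "latency_ms") are illustrative only.
logs = [
    {"tool": "jira_search", "team": "payments", "latency_ms": 120},
    {"tool": "jira_search", "team": "payments", "latency_ms": 340},
    {"tool": "wiki_lookup", "team": "risk", "latency_ms": 95},
    {"tool": "jira_search", "team": "risk", "latency_ms": 210},
]

calls_per_tool = defaultdict(int)   # which tools are called most
calls_per_team = defaultdict(int)   # which teams drive utilization
latencies = defaultdict(list)       # raw latencies per tool

for record in logs:
    calls_per_tool[record["tool"]] += 1
    calls_per_team[record["team"]] += 1
    latencies[record["tool"]].append(record["latency_ms"])

# High-usage tools are the candidates for performance optimization.
top_tools = sorted(calls_per_tool, key=calls_per_tool.get, reverse=True)

# p95 latency flags tools where slowness may be adding friction.
p95 = {
    tool: quantiles(ms, n=20)[-1] if len(ms) > 1 else ms[0]
    for tool, ms in latencies.items()
}

print(top_tools[0], calls_per_tool[top_tools[0]])  # → jira_search 3
```

In practice these aggregates would come from a log pipeline or dashboard rather than an in-memory script, but the shape of the analysis — counts by tool, counts by team, tail latency by tool — is the same.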
The metrics also give engineering leadership the language for honest, data-driven conversations with stakeholders: not just "We're rolling out AI," but a clear picture of where it's working and where it isn't.