Enterprise
November 13, 2025
8 min read

From Browser to Prompt: Building Infra for the Agentic Internet

Amit Dey
Content Manager, Kong

Agentic AI is fundamentally transforming how businesses build automation, and its impact is already shaping up to be massive. Adoption is soaring: Kong's Agentic AI in the Enterprise report found that, among respondents with visibility into their organization's plans, 90% say their companies are actively adopting AI agents.

Agentic AI is driving a sea change in how customers interact with information, products, and services online. And this change is being driven by the AI prompt, the smartest browser we've ever seen.

What powers AI prompts?

A close examination of what really powers the AI prompt reveals two technologies: the large language models (LLMs) that give agents intelligence, and the ecosystem of MCP tools that delivers capabilities to them. LLMs make agents smart, but without the ecosystem of MCP tools, agents won't be capable.

MCP tools let agents hook into additional context, such as third-party tools, services, and data, to accomplish their tasks and deliver the experience you're trying to build. Without this kind of integration, agents deliver a poor user experience, which is why businesses need the ecosystem of MCP tools.
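To make the idea concrete, here is a minimal, self-contained sketch of the pattern (plain Python, not the MCP SDK): each tool is registered with a schema the model can read, and the agent dispatches the model's tool calls to real functions. All names here are illustrative assumptions.

```python
# Conceptual sketch (not the MCP SDK): how a tool ecosystem gives an
# agent capabilities. Each tool carries a schema the model can read,
# and the agent dispatches the model's tool calls to real functions.
from typing import Any, Callable

TOOLS: dict[str, dict[str, Any]] = {}

def register_tool(name: str, description: str, parameters: dict[str, str]):
    """Register a callable with a schema an LLM can use to pick tools."""
    def wrap(fn: Callable[..., Any]) -> Callable[..., Any]:
        TOOLS[name] = {"fn": fn, "description": description,
                       "parameters": parameters}
        return fn
    return wrap

@register_tool("get_order_status", "Look up an order's shipping status",
               {"order_id": "string"})
def get_order_status(order_id: str) -> str:
    # In a real deployment this would call a third-party API.
    return f"Order {order_id}: shipped"

def dispatch(tool_call: dict[str, Any]) -> Any:
    """Execute a tool call emitted by the model."""
    tool = TOOLS[tool_call["name"]]
    return tool["fn"](**tool_call["arguments"])

print(dispatch({"name": "get_order_status",
                "arguments": {"order_id": "A-123"}}))
# → Order A-123: shipped
```

The registry plays the role of the MCP tool ecosystem: the model only sees names, descriptions, and parameter schemas, while the business logic stays behind the dispatch boundary.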

The challenge is that we focus too much on benchmarks and on how smart models are, while neglecting the ecosystem of MCP tools that will make our AI agents successful. Put simply, the success of agents hinges on both choosing smart LLMs and developing a robust ecosystem of MCP tools.

We often worry about how fast we can start taking advantage of agentic opportunities. But whether we're building agents to automate internal processes or to build better customer experiences, it turns out building agents isn't as simple as it might seem on paper. In practice, there are multiple decisions we need to make for our agents to work, and (most importantly) to work safely, in an enterprise environment.

Building agents is so challenging that 95% of generative AI initiatives go nowhere, according to a report published by MIT. These failures stem partly from the lack of an ecosystem of MCP tools and partly from the absence of AI infrastructure that lets developers build agents successfully.

How to build infrastructure for the agentic internet

In the wake of this agentic AI revolution, we need to make two important decisions. One, which LLM should we use? And two, how do we plan to develop an ecosystem of MCP tools? But this is just the beginning.

What follows is a lengthy chain of cause and effect, like the technological equivalent of If You Give a Mouse a Cookie. 

  • Obviously, as we start making requests to models or MCP tools, we need security in place: the right guardrails to moderate the prompts we send to the models, and protection for the MCP servers we expose for agents to use.
  • After deciding how to secure our infrastructure, we'll want to lower the rate of agent hallucination. RAG pipelines can help reduce hallucinations and improve the reliability of results.
  • Then, if we're using customer data, we need to scrub and sanitize PII so that LLMs (which aren't the best at keeping secrets) don't disclose highly sensitive data.
  • Then, we want to observe and collect metrics on both the models themselves and the MCP tools.
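The PII-scrubbing step above can be sketched in a few lines: redact obvious identifiers before a prompt or customer record reaches an LLM. Real deployments use far richer detectors; the two patterns here are illustrative assumptions.

```python
# Minimal PII-scrubbing sketch: replace detected identifiers with
# typed placeholders before text is sent to a model. The patterns
# below (email, US-style phone) are illustrative, not exhaustive.
import re

PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def scrub(text: str) -> str:
    """Replace each detected PII span with a labeled placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(scrub("Contact jane.doe@example.com or 555-123-4567 about the claim."))
# → Contact [EMAIL] or [PHONE] about the claim.
```

Keeping the placeholder labels typed (rather than blanking the text) preserves enough structure for the model to reason about the record without ever seeing the sensitive values.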

And it keeps going. There are so many cross-cutting requirements that must be met for us to support agentic AI at scale in an enterprise environment. This is where the right AI infrastructure can truly help.

Learn more about the hidden AI fragmentation tax and the three root causes of AI cost chaos.

Build agents better with an AI gateway

Building agents is difficult, but Kong AI Gateway makes it easier. Kong AI Gateway was the first enterprise-grade AI infrastructure product. It ships with more than 80 AI capabilities and helps manage traffic across the models, APIs, and MCP tools that make agents successful.

Our customers have been using Kong AI Gateway for three main reasons. 

  1. To develop and nurture a vast MCP/API ecosystem
  2. To build secure LLM infrastructure to consume any model that you want to use for your agentic capability
  3. To continuously monitor and observe, capturing the full set of metrics and traces that LLMs, agents, and MCP tools generate across the board

While 95% of companies may see their agentic initiatives struggle to take off, one of our customers, Prudential, is among the 5% building and using agentic AI successfully.

Prudential has been increasing developer productivity, reducing duplication and waste, and modernizing legacy applications — including many that may have been left over from previous transformation pushes because they were too complex — with the help of agentic AI.

"We're leaning all in on agentic code development," said Elizabeth Brand, VP, Global Head of Cloud, Prudential. "How can we refactor our legacy apps? What are the apps we haven't looked at in years? And what do we not know about them? How can we use the technology to transform our thinking about that application? And how can we decompose that application into smaller components?" 

"There's a significant opportunity for us to reduce the amount of complexity in our environment, all while taking advantage and transforming the applications we didn't get to because they were hard — they were complex," Brand said. "Now we have the tools to take on these difficult problems."

What’s Kong MCP Gateway?

As Augusto Marietti, CEO and Co-founder of Kong, likes to put it, “MCP is Duolingo for APIs. It makes them speak English.” Another good metaphor for MCP is that it's the USB-C for AI that effectively helps our LLMs and agents connect to third-party systems, services, and data.

While we can use traditional APIs to build agents, MCP has emerged as a new protocol to help build agents in a much better way. Here’s why:

  • Understood by agents: MCP is natively understood by agents. MCP is for AI what REST was for APIs. 
  • Less versioning friction: MCP makes versioning less relevant because models are good at handling unstructured data, so versioning is less challenging than it was in the traditional RESTful API space.
  • Real-time by default: When we build agents, we want compelling experiences, so it's important that real-time interaction is baked into the underlying protocol.

That said, MCP is not without its challenges.

  • High R&D cost: MCP is a new and emerging protocol, and building for it carries high research and development costs.
  • Security: Just like APIs, MCP tools and servers need to be secured, in line with the MCP specification.
  • Slow to build: Building MCP servers and their surrounding ecosystem takes time, while capturing agentic AI opportunities demands speed.

Considering this, Kong introduced Kong AI Gateway 3.12 with a new MCP Gateway capability, in addition to all the capabilities already in the product.

Managing the entire MCP lifecycle 

Kong MCP Gateway ensures MCP governance, autogeneration, security, and observability.

  • MCP governance: Kong AI Gateway 3.12 ships with built-in MCP governance, enabling you to govern, secure, and expose MCP servers to agents. This allows you to create tiers of access, assign limits, and determine how agents discover and consume the MCP.
  • MCP autogeneration: Autogenerating MCP servers is key to quickly leveraging agentic AI opportunities. While MCPs can be built from scratch, they are easily autogenerated from existing, Kong-managed RESTful APIs. This capability accelerates the creation of an MCP tool ecosystem alongside agent development, allowing any Kong-managed RESTful API to be instantly converted into an MCP server without developer involvement.
  • MCP security: We shipped new MCP security capabilities that protect MCP servers from the clients that consume them. Instead of adding security to each MCP server individually, developers can use a plugin to standardize security across the ecosystem, freeing them to focus on building top-tier MCP servers.
  • MCP observability: With MCP observability, you can monitor which models are being used, how many tokens are being generated, and which agents are consuming the most tokens or making the most requests, in addition to LLM-level metrics. You can also generate MCP analytics to understand which tools are being consumed, what their latency is, and how performant they are. All of these MCP and LLM metrics can be extracted and connected to Konnect Analytics and/or any third-party platform you use today. Kong supports 12+ connectors and gives you a complete observability picture.
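The autogeneration idea above can be illustrated in miniature: given an OpenAPI-style operation description (the kind a managed REST API already has), derive a tool definition an agent can discover and call. This is a conceptual sketch, not Kong's actual implementation; the field names follow the OpenAPI operation object.

```python
# Conceptual sketch of MCP-style autogeneration (not Kong's actual
# implementation): map one OpenAPI-like REST operation to a tool
# schema an agent can consume, so an existing API becomes callable
# by agents without new server code.
def operation_to_tool(path: str, method: str, op: dict) -> dict:
    """Derive a tool definition from an OpenAPI-style operation."""
    params = {
        p["name"]: {"type": p.get("schema", {}).get("type", "string"),
                    "required": p.get("required", False)}
        for p in op.get("parameters", [])
    }
    return {
        "name": op["operationId"],
        "description": op.get("summary", f"{method.upper()} {path}"),
        "parameters": params,
    }

openapi_op = {
    "operationId": "getCustomer",
    "summary": "Fetch a customer record",
    "parameters": [{"name": "customer_id", "required": True,
                    "schema": {"type": "string"}}],
}
tool = operation_to_tool("/customers/{customer_id}", "get", openapi_op)
print(tool["name"], tool["parameters"])
```

Because the operation already carries a name, a summary, and typed parameters, the mapping is mechanical, which is what makes converting a whole catalog of managed REST APIs into agent-consumable tools fast.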

Build MCP-powered AI agents in minutes 

Kong introduced a new SDK (software development kit), Volcano, that helps you build agents with just a few lines of code.

Building agents with Volcano is as simple as defining the flows you want an agent to perform and specifying which LLMs and MCP servers to inject into each flow; Volcano automatically discovers the right method to invoke and does the heavy lifting for you. It's an open source SDK that helps developers build agents natively.
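The flow-plus-tools pattern described here can be sketched generically. To be clear, the code below is not Volcano's API (see volcano.dev for the real SDK); every class and function name is a hypothetical stand-in used only to illustrate the shape of flow-based agents.

```python
# Generic sketch of a flow-based agent runtime. NOT Volcano's API;
# all names here are hypothetical illustrations of the pattern.
from typing import Any, Callable

class Flow:
    """A named sequence of steps; each step invokes one tool."""
    def __init__(self, name: str):
        self.name = name
        self.steps: list[tuple[Callable[[Any], Any], str]] = []

    def step(self, tool: Callable[[Any], Any], label: str) -> "Flow":
        """Attach a tool (an LLM call, an MCP tool, etc.) to the flow."""
        self.steps.append((tool, label))
        return self

    def run(self, payload: Any) -> Any:
        # The runtime does the heavy lifting: route the payload
        # through each attached tool in order.
        for tool, _label in self.steps:
            payload = tool(payload)
        return payload

# Two stand-in "MCP tools" for illustration.
lookup = lambda order_id: {"order": order_id, "status": "shipped"}
summarize = lambda rec: f"Order {rec['order']} is {rec['status']}."

flow = (Flow("order-status")
        .step(lookup, "lookup")
        .step(summarize, "summarize"))
print(flow.run("A-123"))
# → Order A-123 is shipped.
```

The point of the pattern is the separation of concerns: you declare what the flow does and which tools it uses, while the runtime handles discovery, invocation, and sequencing.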

We've been using Volcano at Kong to create all sorts of business-process automation for end users and customers. You can start building agents today at volcano.dev.

Learn more at AWS re:Invent 2025

To learn more about building infrastructure for the agentic internet, join Kong's Breakout Session at AWS re:Invent 2025, "Building a State of the Art Agentic Infrastructure," presented by Kong's Head of Product Marketing, Alex Drag.

Beyond Alex's session, we'll also be hosting executive meetings and bringing together the community for meaningful conversations about modern API and AI strategies. Join us at re:Invent for:

  • Scheduled demos at Kong's booth, #1833
  • Exclusive after-hours events, including a private cocktail reception and partner happy hour
  • One-on-one meetings with Kong execs

For more details and to sign up for a meeting, the partner happy hour, or cocktail reception, check out the Kong re:Invent 2025 page.

Topics: AI Gateway | Agentic AI | Enterprise AI | MCP