Enterprise
November 13, 2025
7 min read

From Browser to Prompt: Building Infra for the Agentic Internet

Amit Dey
Content Manager, Kong

Agentic AI, a fast-evolving technology, is fundamentally transforming how businesses build automation. Its impact is already shaping up to be massive, and adoption is soaring: Kong's Agentic AI in the Enterprise report found that, of those with visibility into their organization's plans, 90% say their companies are actively adopting AI agents.

Agentic AI is driving a sea change in how customers interact with information, products, and services online. And this change is being driven by the AI prompt, the smartest browser we've ever seen.

What powers AI prompts?

A close look at what really powers the AI prompt reveals two technologies: the large language models (LLMs) that give agents their intelligence, and the ecosystem of MCP tools that delivers capabilities to those agents. LLMs make your agents smart, but without the ecosystem of MCP tools they won't be capable.

MCP tools allow agents to hook into additional context, such as third-party tools, services, and data, so they can accomplish their tasks and deliver the experience you're trying to build. Without this kind of integration, agents deliver a poor user experience, which is why businesses need to invest in the ecosystem of MCP tools.
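To make this concrete, here's a minimal sketch of exposing a single capability as an MCP tool, using the open source MCP TypeScript SDK. The server name, tool, and lookup logic are illustrative placeholders, not an actual Kong or customer implementation.

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// A minimal MCP server exposing one tool an agent could call for context.
const server = new McpServer({ name: "order-context", version: "1.0.0" });

// Hypothetical tool: look up an order so the agent has real business context.
server.tool(
  "get_order_status",
  { orderId: z.string().describe("The order to look up") },
  async ({ orderId }) => ({
    // In a real server this would call your order service or database.
    content: [{ type: "text", text: `Order ${orderId}: shipped` }],
  })
);

// Agents connect over a transport; stdio is the simplest for local use.
await server.connect(new StdioServerTransport());
```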

The challenge is that we focus too much on benchmarks and how smart models are, while neglecting the ecosystem of MCP tools that will make our AI agents successful. Put simply, the success of agents hinges on both choosing smart LLMs and developing a robust ecosystem of MCP tools.

We often worry about how quickly we can start taking advantage of agentic opportunities. But whether you're building agents to automate internal processes or to create better customer experiences, building them isn't as simple as it might seem on paper. In practice, there are multiple decisions to make for agents to work and, most importantly, to work safely in an enterprise environment.

Building agents is so challenging that 95% of generative AI initiatives go nowhere, according to a report published by MIT. The reasons behind these failures stem partly from the lack of an ecosystem of MCP tools and partly from the absence of AI infrastructure for developers to build agents successfully. 

How to build infrastructure for the agentic internet

In the wake of this agentic AI revolution, we need to make two important decisions. One, which LLM will we use? And two, how will we develop an ecosystem of MCP tools? But this is just the beginning.

What follows is a lengthy chain of cause and effect, like the technological equivalent of If You Give a Mouse a Cookie. 

  • Obviously, as we start making requests to either the models or the MCP tools, we need to ensure security is in place. We'll want guardrails that moderate the prompts we send to the models, and we'll also want to secure the MCP servers we expose for agents to use.
  • After deciding how to secure our infrastructure, we'll want to lower the rate of agent hallucination. RAG pipelines help reduce hallucinations and improve the reliability of results.
  • Then, if we're using customer data, we need to scrub and sanitize PII to ensure LLMs (which aren't the best at keeping secrets) don't disclose highly sensitive data.
  • Then we want to observe and collect metrics about both the models themselves and the MCP tools.

And it keeps going. There are so many cross-cutting requirements that must be met for us to support agentic AI at scale in an enterprise environment. This is where the right AI infrastructure can truly help.
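To make one link in that chain concrete, here's a minimal sketch of the PII-scrubbing step from the list above, assuming a simple regex-based redactor. The patterns and function are illustrative; in practice this kind of sanitization is typically enforced at the gateway rather than in application code.

```typescript
// Minimal sketch of a PII-scrubbing step applied before a prompt reaches a model.
const PII_PATTERNS: Array<[RegExp, string]> = [
  [/\b[\w.+-]+@[\w-]+\.[\w.]+\b/g, "[EMAIL]"],  // email addresses
  [/\b\d{3}-\d{2}-\d{4}\b/g, "[SSN]"],          // US SSN-shaped numbers
  [/\b(?:\d[ -]?){13,16}\b/g, "[CARD]"],        // card-like digit runs
];

export function scrubPII(prompt: string): string {
  return PII_PATTERNS.reduce(
    (text, [pattern, replacement]) => text.replace(pattern, replacement),
    prompt
  );
}

// Usage: sanitize before the prompt ever leaves your boundary.
const safePrompt = scrubPII(
  "Refund the order for jane@example.com, card 4111 1111 1111 1111"
);
// -> "Refund the order for [EMAIL], card [CARD]"
```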

Learn more about the hidden AI fragmentation tax and the three root causes of AI cost chaos.

Build agents better with an AI gateway

Building agents is difficult, but Kong AI Gateway makes it much easier. Kong AI Gateway was the first enterprise-grade AI infrastructure product. It ships with more than 80 AI capabilities and helps manage traffic across the models, APIs, and MCP tool ecosystem that make agents successful.

Our customers have been using Kong AI Gateway for three main reasons. 

  1. To develop and nurture a vast MCP/API ecosystem
  2. To build secure LLM infrastructure that can consume any model you want to use for your agentic capabilities
  3. To continuously monitor and observe, capturing the metrics and traces that LLMs, agents, and MCP tools generate across the board

While 95% may be seeing their agentic initiatives struggle to take off, one of our customers, Prudential, is among the 5% building and using agentic AI successfully.

Prudential has been increasing developer productivity, reducing duplication and waste, and modernizing legacy applications — including many that may have been left over from previous transformation pushes because they were too complex — with the help of agentic AI.

"We're leaning all in on agentic code development," said Elizabeth Brand, VP, Global Head of Cloud, Prudential. "How can we refactor our legacy apps? What are the apps we haven't looked at in years? And what do we not know about them? How can we use the technology to transform our thinking about that application? And how can we decompose that application into smaller components?" 

"There's a significant opportunity for us to reduce the amount of complexity in our environment, all while taking advantage and transforming the applications we didn't get to because they were hard — they were complex," Brand said. "Now we have the tools to take on these difficult problems."

What’s Kong MCP Gateway?

As Augusto Marietti, CEO and Co-founder of Kong, likes to put it, “MCP is Duolingo for APIs. It makes them speak English.” Another good metaphor is that MCP is the USB-C for AI: it helps our LLMs and agents connect to third-party systems, services, and data.

While we can use traditional APIs to build agents, MCP has emerged as a new protocol to help build agents in a much better way. Here’s why:

  • Understood by agents: MCP is natively understood by agents. MCP is for AI what REST was for APIs. 
  • Less relevant versioning: Because models are good at handling unstructured data, versioning is less challenging with MCP than it was in the traditional RESTful API space.
  • Real-time by default: When we build agents, we want to deliver compelling experiences, and it's important that real-time interaction is baked into the underlying protocol.

That said, MCP is not without its challenges.

  • High R&D cost: MCP is a new and emerging protocol, which carries high research and development costs.
  • Security: Just like APIs, MCP tools and servers need to be secured in line with the MCP specification.
  • Slow to build: Finally, building MCP servers and the surrounding ecosystem takes time, while capturing agentic AI opportunities demands speed.

Considering this, Kong introduced Kong AI Gateway 3.12 with a new MCP Gateway capability, in addition to the capabilities already in the product.

Managing the entire MCP lifecycle

Kong MCP Gateway ensures MCP governance, autogeneration, security, and observability.

  • MCP governance: Kong AI Gateway 3.12 ships with built-in MCP governance, enabling you to govern, secure, and expose MCP servers to agents. This allows you to create tiers of access, assign limits, and determine how agents discover and consume MCP servers.
  • MCP autogeneration: Autogenerating MCP servers is key to quickly capturing agentic AI opportunities. While MCP servers can be built from scratch, they can also be easily autogenerated from existing, Kong-managed RESTful APIs. This capability accelerates the creation of an MCP tool ecosystem alongside agent development, allowing any Kong-managed RESTful API to be instantly converted into an MCP server without developer involvement.
  • MCP security: We shipped new MCP security capabilities that protect MCP servers from the clients that consume them. Instead of developers individually adding security to each MCP server, they can use a plugin to standardize security across the ecosystem, allowing them to focus on building top-tier MCP servers.
  • MCP observability: With MCP observability, you can monitor which models are being used, how many tokens are being generated, and which agents are consuming the most tokens or making the most requests, in addition to the existing LLM-level capabilities. You can also generate MCP analytics to understand which tools are being consumed, what their latency is, and how fast and performant they are. You can extract all of these MCP and LLM metrics and send them to Konnect Analytics and/or any third-party platform you may be using today. Kong supports 12+ connectors and gives you a complete observability picture.
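To give a sense of what consuming a governed MCP server looks like from the agent side, here's a minimal sketch using the MCP TypeScript SDK's streamable HTTP client. The gateway URL, API-key header, and tool name are placeholders, not actual Kong endpoints.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

// Hypothetical MCP endpoint exposed and governed by the gateway.
const MCP_URL = new URL("https://gateway.example.com/mcp/orders");

// Credentials enforced by a gateway-level security policy (placeholder header).
const transport = new StreamableHTTPClientTransport(MCP_URL, {
  requestInit: { headers: { apikey: process.env.MCP_API_KEY ?? "" } },
});

const client = new Client({ name: "example-agent", version: "1.0.0" });
await client.connect(transport);

// Discover which tools the gateway exposes to this agent tier...
const { tools } = await client.listTools();
console.log(tools.map((t) => t.name));

// ...and invoke one of them (tool name and arguments are illustrative).
const result = await client.callTool({
  name: "get_order_status",
  arguments: { orderId: "A-1001" },
});
console.log(result.content);
```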

Build MCP-powered AI agents in minutes

Kong introduced a new SDK (software development kit), Volcano, that helps you build agents with just a few lines of code.

Building agents with Volcano is as simple as defining the flows you want your agents to perform and choosing which LLMs and MCP servers to inject into each flow; Volcano automatically discovers the right method to invoke and does all the heavy lifting for you. It's an open source SDK that helps developers build agents natively.
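To give a sense of the work an SDK like this takes off your plate, here's a minimal sketch of the underlying pattern it automates: the LLM plans, MCP tools act. It assumes an OpenAI-compatible chat endpoint behind a hypothetical gateway URL and the MCP client from the earlier sketch; the names are illustrative, and this is not the Volcano API itself.

```typescript
// Sketch of the loop an agent SDK automates: the LLM plans, MCP tools act.
// Assumes an OpenAI-compatible /v1/chat/completions route behind the gateway
// and the `client` (MCP) from the previous sketch; names are illustrative.
async function runFlow(task: string) {
  const { tools } = await client.listTools();

  // 1. Ask the model which tool (if any) to use for this task.
  const plan = await fetch("https://gateway.example.com/llm/v1/chat/completions", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "gpt-4o-mini",
      messages: [
        { role: "system", content: `Available tools: ${tools.map((t) => t.name).join(", ")}` },
        { role: "user", content: task },
      ],
    }),
  }).then((r) => r.json());

  // 2. Invoke the chosen tool via MCP (parsing the model's choice is omitted here).
  const toolResult = await client.callTool({
    name: "get_order_status",
    arguments: { orderId: "A-1001" },
  });

  // 3. An SDK like Volcano chains these steps, plus retries and routing, for you.
  return { plan, toolResult };
}
```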

We've been using Volcano at Kong to build all kinds of business processes and automation for end users and customer experiences. You can start building agents today at volcano.dev.

