Engineering
January 12, 2026
18 min read

AI Agent with Strands SDK, Kong AI/MCP Gateway & Amazon Bedrock

Claudio Acquaviva
Principal Architect, Kong
Jason Matis
Staff Solutions Engineer, Kong

In one of our posts, Kong AI/MCP Gateway and Kong MCP Server technical breakdown, we described the new capabilities added to Kong AI Gateway to support MCP (Model Context Protocol). That post focused exclusively on consuming MCP servers and MCP tools through Kong MCP Gateway. Now, it's time to look at how an AI agent can leverage the AI and MCP infrastructure exposed and protected by Kong AI/MCP Gateway.

Strands Agents, the Python-based framework for building agents, was introduced by AWS in May 2025. Strands makes integration with tools, GenAI models, and services straightforward by providing a consistent way for agents to interact with external systems. It simplifies how developers orchestrate tools, gather context, and structure reasoning, turning complex multi-service workflows into maintainable, event-driven agent logic.

At the infrastructure layer, the Kong AI/MCP Gateway extends these capabilities by securely exposing GenAI model endpoints, AI tools, and enterprise APIs through standardized GenAI and MCP interfaces. When combined with Amazon Bedrock, which offers scalable access to leading foundation models, the result is an architecture where agents can reliably call Bedrock models and MCP-based tools through a unified Gateway.

Kong AI/MCP Gateway and Kong MCP Server technical breakdown also discussed the fundamental MCP abstractions and how Kong AI/MCP Gateway addresses them. However, that post doesn't show how to leverage the gateway to implement an AI agent.

In this post, we explore how the Strands SDK, Kong AI/MCP Gateway, and Amazon Bedrock can work together to build robust, production-ready AI agents.

Kong MCP Gateway introduction

In October 2025, Kong announced Kong Gateway 3.12 with several new capabilities. In a nutshell, this version opens a new chapter in Kong's AI story: the introduction of the Kong MCP Gateway. The following diagram illustrates the new component:

Similar to what the Kong AI Gateway does with GenAI models (the LLMs we consume), the Kong MCP Gateway protects and controls the consumption of MCP servers.

The new MCP Gateway allows you, just as you do with LLMs and other GenAI models behind Kong AI Gateway, to host your MCP Servers behind the Gateway and leverage the capabilities it provides, which include:

  • MCP-specific security mechanisms like OAuth 2.1.
  • Combining existing Kong API Gateway plugins to implement policies like transformations, security, etc.
  • Taking advantage of the observability plugins provided by the Kong API Gateway and Konnect.

Kong AI Gateway and Amazon Bedrock

Let's discuss the benefits Kong AI Gateway brings to Amazon Bedrock for an LLM-based application. As mentioned before, Kong AI Gateway leverages the existing Kong API Gateway extensibility model to provide specific AI-based plugins. Here are some of the capabilities provided by Kong AI Gateway and its plugins:

  • AI Proxy and AI Proxy Advanced plugins: the Multi-LLM capability lets the AI Gateway abstract Amazon Bedrock (and other LLM providers) and load-balance across models based on several policies, including latency, model usage, semantics, etc. These plugins extract LLM observability metrics (like the number of requests, latencies, and errors for each LLM provider) as well as the number of incoming prompt tokens and outgoing response tokens. All this is in addition to the hundreds of metrics Kong Gateway already provides on the underlying API requests and responses. Finally, Kong AI Gateway leverages the observability capabilities provided by Konnect to track Amazon Bedrock usage out of the box and generate reports based on monitoring data.
  • Prompt Engineering:

    • AI Prompt Template plugin, which pre-configures AI prompt templates for users
    • AI Prompt Decorator plugin, which injects messages at the start or end of a caller's chat history
    • AI Prompt Compressor
    • AI Prompt Guard plugin, which lets you configure a series of PCRE-compatible regular expressions to allow or block specific prompts, words, or phrases, giving you more control over an LLM service managed by Amazon Bedrock
    • AI Semantic Prompt Guard plugin, to configure semantic (natural-language-based pattern-matching) prompt protection
  • AI RAG Injector
  • AI Semantic Response Guard
  • AI Semantic Cache plugin caches responses based on a similarity threshold to improve performance (and therefore end-user experience) and cost
  • AI Rate Limiting Advanced: you can tailor per-user or per-model policies based on the tokens returned by the LLM provider under Amazon Bedrock management, or craft a custom function to count the tokens for requests.
  • AI Request Transformer and AI Response Transformer plugins seamlessly integrate with the LLM on Amazon Bedrock, enabling introspection and transformation of the request's body before proxying it to the upstream service and prior to forwarding the response to the client.
  • AI AWS Guardrails
  • AI LLM as Judge
  • AI PII Sanitizer

In addition, Kong AI/MCP Gateway use cases can combine policies implemented by hundreds of Kong Gateway plugins, such as:

  • Authentication and authorization: OIDC, mTLS, API Key, LDAP, SAML, Open Policy Agent (OPA)
  • Traffic control: Request Validator and Size Limiting, WebSocket support, Route by Header, etc. 
  • Observability: OpenTelemetry (OTel), Prometheus, TCP-Log, etc.

From an architecture perspective, the Konnect Control Plane and Data Plane topology remains the same.

By leveraging the same underlying core of Kong API Gateway, we also reduce the complexity of deploying the AI/MCP Gateway capabilities. And of course, it works on Konnect, Kubernetes, self-hosted, or across multiple clouds.

Kong AI/MCP Gateway and Amazon Bedrock integration and APIs

Let's take a look at the specific integration point between Kong AI/MCP Gateway and Amazon Bedrock. To get a better understanding, here's a cut of the architecture isolating the two:

The consumer can be any RESTful-based component; in our case, it will be a Strands agent.

As you can see, there are two important topics here.

  • OpenAI API specification
  • Amazon Bedrock Converse API and EKS Pod Identity

Let's discuss each one of them.

OpenAI API support

Kong AI Gateway supports the OpenAI API specification. That means the consumer can send standard OpenAI requests to the Kong AI Gateway. As a basic example, consider this OpenAI request:
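A representative sketch of such a request (the model and prompt shown here are illustrative):

    curl https://api.openai.com/v1/chat/completions \
      -H "Content-Type: application/json" \
      -H "Authorization: Bearer $OPENAI_API_KEY" \
      -d '{
        "model": "gpt-4o-mini",
        "messages": [
          {"role": "user", "content": "Hello!"}
        ]
      }'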

When we put Kong AI Gateway in front of Amazon Bedrock, we're not just exposing it but also allowing consumers to use the same mechanism, in this case the OpenAI API, to consume it. That makes for a very flexible and powerful development experience. In other words, Kong AI Gateway normalizes the consumption of any LLM infrastructure, including Amazon Bedrock, Mistral, OpenAI, Cohere, etc.

As an exercise, the new request should be something like the following sketch, where $DATA_PLANE_LB stands for your Data Plane's load balancer address and "apikey" is the Key Auth plugin's default header:
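    curl http://$DATA_PLANE_LB/bedrock-route \
      -H "Content-Type: application/json" \
      -H "apikey: $KONG_API_KEY" \
      -d '{
        "model": "us.amazon.nova-micro-v1:0",
        "messages": [
          {"role": "user", "content": "Hello!"}
        ]
      }'

The request has some minor differences: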

  • It sends a request to the Kong API Gateway Data Plane Node.
  • It replaces the OpenAI endpoint with a Kong API Gateway route.
  • The API Key is actually managed by the Kong API Gateway now.
  • We're using an Amazon Bedrock Model, us.amazon.nova-micro-v1:0.

Amazon EKS Pod Identity

The Konnect Data Plane Node, where the Kong AI Gateway runs, has to send requests to Amazon Bedrock on behalf of the Gateway consumer. To do that, we need to grant the Data Plane deployment permission to access the Amazon Bedrock API, more precisely the Converse API, which the AI Gateway uses to interact with Bedrock.

For example, here's a request to Amazon Bedrock using the AWS CLI. Run "aws configure" first to set your access key, secret key, and the AWS region you want to use, then run the command.
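A sketch of such a request, assuming the us-east-1 region and an illustrative prompt:

    aws bedrock-runtime converse \
      --model-id us.amazon.nova-micro-v1:0 \
      --messages '[{"role": "user", "content": [{"text": "Hello!"}]}]' \
      --region us-east-1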

It's reasonable to consume the Bedrock service with local CLI commands. However, for Amazon EKS deployments, the recommended approach is EKS Pod Identity instead of simple long-term credentials like AWS access and secret keys. In short, EKS Pod Identity allows the Data Plane Pod's container to use the AWS SDK and send API requests to AWS services using AWS Identity and Access Management (IAM) permissions.

Amazon EKS Pod Identity associations provide the ability to manage credentials for your applications, similar to the way that Amazon EC2 instance profiles provide credentials to Amazon EC2 instances.

As another best practice, we recommend storing the Private Key and Digital Certificate pair used for Konnect Control Plane and Data Plane connectivity in AWS Secrets Manager. The Data Plane deployment then references those secrets at installation time.

Amazon Elastic Kubernetes Service (EKS) installation and preparation

Now, we're ready to get started with our Kong AI Gateway deployment. As the installation architecture defines, it'll be running on an EKS Cluster.

Amazon EKS Cluster creation

In order to create the EKS Cluster, you can use eksctl, the official CLI for Amazon EKS, like this:
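A sketch of the command, with an illustrative cluster name and region:

    eksctl create cluster \
      --name kong-ai-gateway \
      --version 1.34 \
      --region us-east-1 \
      --node-type g6.xlarge \
      --nodes 1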

Note the command creates an EKS Cluster (version 1.34) with a single node based on the g6.xlarge instance type, which is powered by NVIDIA GPUs. That's particularly interesting if you're planning to deploy and run LLMs locally in the EKS Cluster.

Cluster preparation

After the installation, we should prepare the cluster to receive the other components:

  • AWS Load Balancer Controller to expose the Kong AI Gateway with a public Network Load Balancer (NLB).
  • EKS Pod Identity Agent to be able to define Pod Identity Association and grant permissions to the Kong AI Gateway Data Plane Pod to access both Amazon Bedrock and AWS Secrets Manager.

Refer to the official documentation to learn more about the components and their installation processes.

Kubernetes Kong Operator and Konnect Control Plane/Data Plane installation

The Kong AI/MCP Gateway deployment process can be divided into two steps:

  • Pod Identity configuration
  • Kong Control Plane and Data Plane deployment using the Kubernetes Kong Operator

Pod Identity configuration

In this first step, we configure EKS Pod Identity describing which AWS Services the Data Plane Pods should be allowed to access. In our case, we need to consume Amazon Bedrock and AWS Secrets Manager.

IAM Policy

Pod Identity relies on IAM policies to check which AWS Services can be consumed. Our policy should allow access to AWS Bedrock actions so the Data Plane can send requests to the Bedrock APIs, more precisely the Converse and ConverseStream APIs. The Converse API requires permission for the InvokeModel action, while ConverseStream needs access to InvokeModelWithResponseStream.

Also, we're going to use AWS Secrets Manager to store our Private Key and Digital Certificate pair, which the Konnect Control Plane and Data Plane use to communicate.

Considering all this, let's create the IAM policy with the following request:
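A sketch, assuming a hypothetical policy name (kong-dp-policy) and the actions discussed above:

    aws iam create-policy \
      --policy-name kong-dp-policy \
      --policy-document '{
        "Version": "2012-10-17",
        "Statement": [
          {
            "Effect": "Allow",
            "Action": [
              "bedrock:InvokeModel",
              "bedrock:InvokeModelWithResponseStream"
            ],
            "Resource": "*"
          },
          {
            "Effect": "Allow",
            "Action": "secretsmanager:GetSecretValue",
            "Resource": "*"
          }
        ]
      }'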

Pod Identity Association

Pod Identity uses a Kubernetes Service Account to manage permissions. So, create the Kubernetes namespace for the Kong Data Plane deployment and a simple service account inside it (the names "kong" and "kong-sa" below are illustrative):
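    kubectl create namespace kong
    kubectl create serviceaccount kong-sa -n kong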

Now we're ready to create the Pod Identity Association. We use the eksctl CLI again to do it:
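A sketch with the illustrative names used so far (replace <account-id> with your AWS account ID):

    eksctl create podidentityassociation \
      --cluster kong-ai-gateway \
      --namespace kong \
      --service-account-name kong-sa \
      --role-name kong-dp-role \
      --permission-policy-arns arn:aws:iam::<account-id>:policy/kong-dp-policy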

The command above is responsible for:

  • IAM Role creation based on the IAM Policy we previously defined
  • Associating the IAM Role to the existing Kubernetes Service Account

You can check the Pod Identity Association with:
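Using the illustrative cluster name from earlier:

    eksctl get podidentityassociation --cluster kong-ai-gateway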

Check the IAM Role and Policies attached with:
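Again with the illustrative role name:

    aws iam list-attached-role-policies --role-name kong-dp-role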

Kong Operator and Control Plane/Data Plane deployment

The Data Plane deployment comprises the following steps:

  • Konnect subscription
  • Kong Operator installation
  • Konnect Control Plane creation
  • Konnect Data Plane deployment

Konnect subscription

This fundamental step is required to get access to Konnect. Click on the Registration link and present your credentials. Or, if you already have a Konnect subscription, log in to it.

Any Konnect subscription has a "default" Control Plane defined. You can proceed using it or optionally create a new one. The following instructions are based on a new Control Plane.

Kong Operator installation

The Konnect Control Plane and Data Plane creation and deployments are totally managed by the Kong Operator (KO) which is fully compliant with the Kubernetes Operator standards. First, we need to install it. Check the documentation to learn more.

You can check the Operator’s log with:
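For example, assuming the Operator was installed into the kong-system namespace with a deployment named kong-operator (adjust both to your actual installation):

    kubectl logs -f -n kong-system deployment/kong-operator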


Konnect Control Plane creation

In order to start using the Kong Operator, you need to issue a Konnect Personal Access Token (PAT) or a System Access Token (SAT). To generate your PAT, go to the Konnect UI, click on your initials in the upper right corner of the Konnect home page, then select "Personal Access Tokens." Click "+ Generate Token," name your PAT, set its expiration time, and be sure to copy and save it as an environment variable, also named PAT. Konnect won't display your PAT again.

Now, you can create your Control Plane with the first Kong Operator declaration. This first CRD tells the Operator which Konnect region you're using and which token (PAT or SAT) to use to authenticate:
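A sketch of that CRD; the names are illustrative and the exact schema may vary by Operator version, so check the Kong Operator docs:

    apiVersion: konnect.konghq.com/v1alpha1
    kind: KonnectAPIAuthConfiguration
    metadata:
      name: konnect-api-auth
      namespace: kong
    spec:
      type: token
      token: kpat_XXXXX   # your PAT
      serverURL: us.api.konghq.com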

The second CRD creates the new Control Plane:
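Again as a sketch, creating the "kong-aws" Control Plane referenced later in this post:

    apiVersion: konnect.konghq.com/v1alpha1
    kind: KonnectGatewayControlPlane
    metadata:
      name: kong-aws
      namespace: kong
    spec:
      name: kong-aws
      konnect:
        authRef:
          name: konnect-api-auth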

Konnect Data Plane deployment

Finally, in the last step, we deploy the Data Plane. The following KonnectExtension CRD allows you to define your Konnect Control Plane details, and the DataPlane CRD actually creates the Konnect Data Plane, attaching it to the Control Plane:
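A heavily simplified sketch of the two CRDs; field names vary across Kong Operator versions, so treat this as illustrative and check the docs:

    apiVersion: konnect.konghq.com/v1alpha1
    kind: KonnectExtension
    metadata:
      name: konnect-extension
      namespace: kong
    spec:
      konnect:
        controlPlane:
          ref:
            type: konnectNamespacedRef
            konnectNamespacedRef:
              name: kong-aws
    ---
    apiVersion: gateway-operator.konghq.com/v1beta1
    kind: DataPlane
    metadata:
      name: dataplane
      namespace: kong
    spec:
      extensions:
        - kind: KonnectExtension
          name: konnect-extension
          group: konnect.konghq.com
      network:
        services:
          ingress:
            annotations:
              service.beta.kubernetes.io/aws-load-balancer-type: external
              service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
      deployment:
        podTemplateSpec:
          spec:
            serviceAccountName: kong-sa
            containers:
              - name: proxy
                image: kong/kong-gateway:3.12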

Note the DataPlane declaration:

  • Adds the service annotation to request a public NLB for the Data Plane.
  • Uses the Kubernetes Service Account that has been used to create the Pod Identity Association, so the Data Plane can have access to both Amazon Bedrock and Secrets Manager.

Checking the Data Plane

Use the Load Balancer created during the deployment:
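For example, assuming the Data Plane service runs in the kong namespace:

    export DATA_PLANE_LB=$(kubectl get svc -n kong \
      -o jsonpath='{.items[?(@.spec.type=="LoadBalancer")].status.loadBalancer.ingress[0].hostname}')
    curl -s http://$DATA_PLANE_LB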

You should get a response like this:
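With no Kong Routes defined yet, the Data Plane's default reply is something like:

    {"message":"no Route matched with those values"}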

Now we can define the Kong Objects necessary to expose and control Bedrock, including Kong Gateway Service, Routes, and Plugins.

decK

With decK (declarations for Kong) you can manage Kong Konnect configuration and create Kong Objects declaratively. decK state files encapsulate the complete configuration of Kong, including services, routes, plugins, consumers, and other entities that define how requests are processed and routed through Kong. Check the decK documentation to learn how to install it.

You can ping Konnect using your PAT with:
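Assuming your PAT is stored in the PAT environment variable and you're targeting the "kong-aws" Control Plane created earlier:

    deck gateway ping \
      --konnect-token $PAT \
      --konnect-control-plane-name kong-aws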

Strands Agent SDK - AI agent and MCP fundamentals

With all the components we need for our AI agent in place, it's time for the most exciting part of this post: the Strands agent itself. Let's start with some very basic code.
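A minimal sketch using the Strands Bedrock model provider (the model ID and region are illustrative):

    from strands import Agent
    from strands.models import BedrockModel

    # AWS credentials are taken from the environment (see below)
    model = BedrockModel(
        model_id="us.amazon.nova-micro-v1:0",
        region_name="us-east-1",
    )

    agent = Agent(model=model)
    agent("Hello! Who are you?")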

The code is straightforward. The diagram shows the two components: the Strands agent and the Amazon Bedrock model provider.

Strands has an extensive list of Model Providers where Amazon Bedrock is one of them. The provider requires AWS credentials which, for this code, are set in the environment variables.


The code instantiates an agent, which sends a simple prompt to the model. Nothing is really special until you send a prompt like "What is the weather like in Tokyo right now?"

If you do, you'll get a response along the lines of the model explaining that it doesn't have access to real-time weather data.

MCP principles

That's one of the main reasons why we should add tools to our agent. In fact, LLMs are not able to process prompts like this without some context. By context, we mean artifacts like transcripts, documents, presentations, or functions, so the LLM can respond accordingly. That's the purpose of MCP: to provide a standardized mechanism for LLMs to access that context.

As defined in the documentation, there are three core primitives that an MCP server can expose:

  • Tools: Executable functions that AI applications can invoke to perform actions (e.g., file operations, API calls, database queries)
  • Resources: Data sources that provide contextual information to AI applications (e.g., file contents, database records, API responses)
  • Prompts: Reusable templates that help structure interactions with language models (e.g., system prompts, few-shot examples)

Now, before adding tools to our agent, let's create a Kong version for it.

Kong version

Kong declarations

Kong needs to be configured to understand how to connect to and consume Bedrock. Here's the decK declaration that does so:
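A sketch of the declaration; only the two enabled plugins are shown, and plugin schemas evolve, so check the AI Proxy Advanced docs for the exact fields:

    _format_version: "3.0"
    _info:
      select_tags:
        - agent
    _konnect:
      control_plane_name: kong-aws
    services:
      - name: agent-service
        url: http://localhost:32000
        routes:
          - name: bedrock-route
            paths:
              - /bedrock-route
            plugins:
              - name: ai-proxy-advanced
                config:
                  targets:
                    - route_type: llm/v1/chat
                      model:
                        provider: bedrock
                        name: us.anthropic.claude-sonnet-4-20250514-v1:0
                        options:
                          bedrock:
                            aws_region: us-east-1
              - name: key-auth
    consumers:
      - username: agent-consumer
        keyauth_credentials:
          - key: kong-api-key   # illustrative key value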

The declaration defines multiple Kong Objects:

  • Kong Gateway Service named "agent-service". The service doesn’t need to map to any real upstream URL. In fact, it can point somewhere empty, for example, http://localhost:32000. This is because the AI Proxy plugin, also configured in the declaration, overwrites the upstream URL. This requirement will be removed in a later Kong revision.

  • Kong Route: The gateway Service has a route defined with the "/bedrock-route" path. That's the route we're going to consume to reach out to Bedrock.

  • Kong Route Plugins: The Kong Route has some plugins configured. Note that only the AI Proxy and Key Auth Plugins are enabled. The other ones are configured but disabled.

    • Kong AI Proxy Advanced Plugin: That's the plugin that allows us to connect to the LLM infrastructure. For Bedrock, among other things, we need to configure which AWS region we should connect to.

The declaration has been tagged as "agent" so you can manage its objects without impacting any other ones you might have created previously. Also, note the declaration is saying it should be applied to the "kong-aws" Konnect Control Plane.

You can submit the declaration to your Konnect Control Plane with the following decK command:
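Assuming the declaration above was saved as kong.yaml:

    deck gateway sync kong.yaml \
      --konnect-token $PAT \
      --konnect-control-plane-name kong-aws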

First Strands Kong AI Gateway agent

A very noticeable improvement in this updated Kong version is that, since we have Pod Identity configured in our EKS Cluster, we don't need to set up AWS credentials in the agent. In addition, as Kong exposes itself as an OpenAI-compliant server, the agent uses the OpenAI Model Provider to talk to it.
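A sketch of the updated agent, pointing the Strands OpenAI model provider at the Kong route; the host and key handling are illustrative:

    import os

    from strands import Agent
    from strands.models.openai import OpenAIModel

    # Kong Data Plane address and Key Auth key, both taken from the environment
    model = OpenAIModel(
        client_args={
            "api_key": os.environ["KONG_API_KEY"],
            "base_url": f"http://{os.environ['DATA_PLANE_LB']}/bedrock-route",
        },
        model_id="us.amazon.nova-micro-v1:0",
    )

    agent = Agent(model=model)
    agent("Hello! Who are you?")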

Here's the new topology:

As all requests are reported back to the Konnect Control Plane, you can check them in the Analytics tab.

However, we don't have any tools defined, so we still get a similar response if we send a weather-related prompt.

Strands Tools Decorator

Tools can be added to agents using several techniques. The most basic one is the Tool Decorator, which transforms a Python function into a Strands Tool. Our next Python code defines some tools with decorators.
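A sketch of decorator-based tools calling the Kong routes over HTTP; the tool names, route paths, and parameter names here are illustrative, following the ones used later in this post:

    import os

    import httpx
    from strands import Agent, tool

    KONG = f"http://{os.environ['DATA_PLANE_LB']}"

    @tool
    def get_user_geocode() -> str:
        """Return the latitude and longitude of the caller's current location."""
        return httpx.post(f"{KONG}/geolocation").text

    @tool
    def get_weather(geocode: str) -> str:
        """Return the current weather for a given city or geocode."""
        return httpx.get(f"{KONG}/weather", params={"q": geocode}).text

    # 'model' is the OpenAIModel instance from the previous snippet
    agent = Agent(model=model, tools=[get_user_geocode, get_weather])
    agent("What is the weather like in Tokyo?")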

Here's the new diagram:

decK
All tools are sitting behind the Kong Data Plane, so, before running the code, we need to define the new Kong Objects. Here's the new decK declaration:
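A sketch of the declaration; the upstream URLs are the public Google and WeatherAPI endpoints, and plugin schemas should be checked against the docs:

    _format_version: "3.0"
    _info:
      select_tags:
        - agent
    _konnect:
      control_plane_name: kong-aws
    services:
      - name: geolocation-service
        url: https://www.googleapis.com/geolocation/v1/geolocate
        routes:
          - name: geolocation-route
            paths: ["/geolocation"]
            plugins:
              - name: request-transformer-advanced
                config:
                  add:
                    querystring:
                      - key:${{ env "DECK_GOOGLEAPI_API_KEY" }}
      - name: geocode-service
        url: https://maps.googleapis.com/maps/api/geocode/json
        routes:
          - name: geocode-route
            paths: ["/geocode"]
            plugins:
              - name: request-transformer-advanced
                config:
                  add:
                    querystring:
                      - key:${{ env "DECK_GOOGLEAPI_API_KEY" }}
      - name: weather-service
        url: https://api.weatherapi.com/v1/current.json
        routes:
          - name: weather-route
            paths: ["/weather"]
            plugins:
              - name: request-transformer-advanced
                config:
                  add:
                    querystring:
                      - key:${{ env "DECK_WEATHERAPI_API_KEY" }}
    plugins:
      - name: post-function
        config:
          access:
            # logs request bodies; response bodies would use the body_filter phase
            - kong.log.notice("request body: ", kong.request.get_raw_body())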

You can submit the declaration with:
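Assuming the declaration was saved as tools.yaml; the DECK_* environment variables feed the ${{ env ... }} references:

    DECK_GOOGLEAPI_API_KEY=<your-google-key> \
    DECK_WEATHERAPI_API_KEY=<your-weatherapi-key> \
    deck gateway sync tools.yaml \
      --konnect-token $PAT \
      --konnect-control-plane-name kong-aws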

The configuration defines regular Kong Gateway Services for each of the external services (example requests through the Kong routes are sketched after this list):

  • Geolocation: based on Google's Geolocation service, it returns the caller's latitude and longitude and can be consumed directly.

  • Geocode: based on Google's Geocoding service. Given the latitude and longitude returned by the previous service, it returns the corresponding location.

  • WeatherAPI: that's the actual service responsible for returning the weather in a given city.

  • Each Kong Route defined for the Kong Gateway Services is configured with the Request Transformer Advanced plugin. Each plugin instance injects the corresponding API Key defined in the decK environment variables: key:${{ env "DECK_GOOGLEAPI_API_KEY" }} and key:${{ env "DECK_WEATHERAPI_API_KEY" }}.
  • Lastly, we have configured the Post Function plugin globally so we can check the raw bodies of all incoming requests and responses.
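Illustrative requests through the Kong routes (the "/weather" path matches the route used later in the post; the other paths follow the sketch above):

    # caller's latitude/longitude (Google Geolocation is a POST endpoint)
    curl -s -X POST http://$DATA_PLANE_LB/geolocation

    # reverse-geocode the coordinates you got back
    curl -s "http://$DATA_PLANE_LB/geocode?latlng=52.52,13.40"

    # current weather for a city
    curl -s "http://$DATA_PLANE_LB/weather?q=Tokyo"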

Observability

From the Observability perspective, it's interesting to check the metrics and logs the Data Plane reports back to its Control Plane. For example, if you execute the code, invoking the agent with the prompt "What is the weather like in Tokyo?", you should see in the Konnect Analytics Explorer page the consumption of two Kong Gateway Services, “agent-service” and “weather-service”.

Note that the “agent-service” was called twice. That's where the Post Function plugin can be really helpful. If you check the Data Plane log, you should see all the requests that have been processed. Let's check them now with the following kubectl command:
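Assuming the DataPlane kept the name from the CRD sketch earlier (adjust to your actual Deployment or Pod name):

    kubectl logs -f -n kong deployment/dataplane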

Request and Response #1

In this first request sent through the Kong AI Gateway, the agent asks the Bedrock model, in our case us.anthropic.claude-sonnet-4-20250514-v1:0, as specified in the code, to tell which tool should be called to answer the prompt.


Note that the Kong AI Proxy Advanced plugin sends a request to Bedrock using the expected "/chat/completions" path. Here's the request body:
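A representative shape of the body, trimmed for readability (the tool schema shown is illustrative):

    {
      "model": "us.anthropic.claude-sonnet-4-20250514-v1:0",
      "messages": [
        {"role": "user", "content": "What is the weather like in Tokyo?"}
      ],
      "tools": [
        {
          "type": "function",
          "function": {
            "name": "get_weather",
            "description": "Return the current weather for a given city or geocode.",
            "parameters": {
              "type": "object",
              "properties": {
                "geocode": {"type": "string"}
              },
              "required": ["geocode"]
            }
          }
        }
      ],
      "stream": true
    }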

Bedrock replies with a stream-based response. If you check it, you'll see it instructs the agent to call the “get_weather” tool, passing “Tokyo” as the expected “geocode” parameter.

Request and Response #2

This request actually calls the second Kong Route “/weather”, which exposes the “weather-service” Gateway Service and, therefore, sends requests to the WeatherAPI external Service, injecting its API Key.

Request and Response #3

In the final request, the agent invokes the Kong AI Gateway, which routes the request to the Bedrock model to get the final response, now considering the context for the city included in the prompt.

The response is also a stream and you can see it after executing the code.

Note that, considering the direct prompt, the agent called only the “get_weather” tool. You may find it interesting to run the same agent code with different prompts like:

  • prompt = "What is the weather like in my location?"
  • prompt = "What is the city I'm currently located in?"
  • prompt = "What is the city located at latitude 52.52000659999999 and longitude 13.404954?"

Depending on the prompt, the agent will invoke other tools to respond properly. For example, for the prompt "What is the weather like in my location?", the agent has to call two tools: “get_user_code” and “get_weather”.

Strands MCP client and Kong AI MCP Proxy plugin

The current agent code is helpful, but the tools are defined using Decorators and are not MCP-based. That is, the agent integrates with Kong AI Gateway and Bedrock through the Strands OpenAI Model Provider but consumes the tools using the “httpx” Python package, which sends regular REST/HTTP requests to the Kong Data Plane.

What we really want is to take the existing Kong Gateway Services and convert them into MCP tools. Moreover, the agent should call them as regular MCP tools instead of managing REST-based calls itself. To illustrate the scenario, here's a new diagram:


On the Kong AI Gateway side, it's time to configure the AI MCP Proxy Plugin to take the existing Kong Gateway Service and create MCP Tools based on it. In order to do it, check the new decK declaration:

decK
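A sketch of the weather service part of the declaration; the plugin schema fields here are illustrative, so check the AI MCP Proxy plugin docs for the exact names:

    services:
      - name: weather-service
        url: https://api.weatherapi.com/v1/current.json
        routes:
          - name: weather-route
            paths: ["/weather"]
            plugins:
              - name: ai-mcp-proxy
                config:
                  mode: conversion-listener
                  tools:
                    - description: Return the current weather for a given city or geocode
                      method: GET
                      path: /
                      parameters:
                        - name: q
                          in: query
                          required: true
                          schema:
                            type: string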

The main difference here is that we have enabled the AI MCP Proxy plugin on each of our Kong Gateway Services related to the external services “geolocation”, “geocode”, and “weather”.

In summary, just as the AI Proxy Advanced plugin takes care of the LLM connection, the AI MCP Proxy plugin provides similar capabilities for the MCP tools and external services.

Inside the “config” section, you can see the “mode: conversion-listener” configuration. That means the plugin will not just convert RESTful API paths into MCP tools but also accept incoming MCP requests on the Route path.

The “tools” section of the AI MCP Proxy declaration is an OpenAPI snippet that instructs the plugin on how to integrate with the external service. Its main configuration parameters are:

  • Method: the HTTP method the plugin should use to consume the external service.
  • Parameters: the API parameters the external service expects.

Secrets

A second update is the addition of a vault to our Konnect environment so the API Keys can be read from AWS Secrets Manager, a more secure and recommended place to store secrets like the API Keys the Gateway Services use.

This is done in the “vault” section of the declaration. The secrets can be injected into your AWS Secrets Manager with:
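Illustrative secret names; adjust them to whatever your vault configuration references:

    aws secretsmanager create-secret \
      --name googleapi-api-key \
      --secret-string "<your-google-key>"

    aws secretsmanager create-secret \
      --name weatherapi-api-key \
      --secret-string "<your-weatherapi-key>"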

You can check the secrets with:
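    aws secretsmanager get-secret-value --secret-id weatherapi-api-key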

Or even consuming the WeatherAPI Service:
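    curl -s "http://$DATA_PLANE_LB/weather?q=Tokyo"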

The Code

This time, the code reads the Bedrock model to be consumed from an environment variable. The most critical update is the construction of Streamable HTTP-based MCP clients with the MCPClient class. Note that we have one client for each MCP tool defined by the AI MCP Proxy plugin. To reach them, we use the same route paths, e.g., "/weather".

The agent object is created the same way as before. The main difference is that the tools are based on the Streamable HTTP MCP clients and not on Tool Decorators.
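A sketch of the MCP-based agent; the MCPClient and streamablehttp_client imports follow the Strands and MCP Python SDKs, while the host and route paths are illustrative:

    import os

    from mcp.client.streamable_http import streamablehttp_client
    from strands import Agent
    from strands.tools.mcp import MCPClient

    KONG = f"http://{os.environ['DATA_PLANE_LB']}"

    # one Streamable HTTP MCP client per tool exposed by the AI MCP Proxy plugin
    geolocation_client = MCPClient(lambda: streamablehttp_client(f"{KONG}/geolocation"))
    geocode_client = MCPClient(lambda: streamablehttp_client(f"{KONG}/geocode"))
    weather_client = MCPClient(lambda: streamablehttp_client(f"{KONG}/weather"))

    with geolocation_client, geocode_client, weather_client:
        tools = (
            geolocation_client.list_tools_sync()
            + geocode_client.list_tools_sync()
            + weather_client.list_tools_sync()
        )
        # 'model' is the OpenAIModel instance defined earlier
        agent = Agent(model=model, tools=tools)
        agent("What is the weather like in my location?")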

If you run the code above, you should see something like this. Note the agent had to call two tools this time: the first one to get the city name and the second one to get its actual weather.

Aggregate MCP tools

One last enhancement should be made to our code. As you can see, there is one MCP client for each MCP tool enabled by the Kong AI MCP Proxy plugin. That's not optimal, and it doesn't abstract all tools behind a single MCP server. That's the purpose of the AI MCP Proxy plugin's “conversion-only” and “listener” modes. Here's the new abstraction:

decK

The decK declaration basically sets the AI MCP Proxy plugin instances to “conversion-only”. That means the tools will be created but not exposed. Moreover, each AI MCP Proxy instance has been tagged as “mcp-tools”.

At the same time, we create a new, serviceless Kong Route with the AI MCP Proxy plugin also enabled. The only difference is that this instance is configured with “mode: listener” and aggregates all AI MCP Proxy instances tagged “mcp-tools”.
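A sketch of both pieces; the exact configuration entry the listener uses to reference the “mcp-tools” tag varies by plugin version, so it's left as a comment:

    # on each existing Gateway Service route: convert, but don't expose
    plugins:
      - name: ai-mcp-proxy
        tags: ["mcp-tools"]
        config:
          mode: conversion-only
          # tools: same OpenAPI-like snippet as before

    # new serviceless route aggregating the tagged tools
    routes:
      - name: mcp-route
        paths: ["/mcp"]
        plugins:
          - name: ai-mcp-proxy
            config:
              mode: listener
              # plus the configuration entry that references the "mcp-tools"
              # tag (check the AI MCP Proxy plugin docs for the exact field)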

The Code

With the aggregation in place, the code is even simpler:
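A minimal sketch, reusing the KONG base URL and model from the earlier snippets; the "/mcp" path matches the listener route sketched above:

    from mcp.client.streamable_http import streamablehttp_client
    from strands import Agent
    from strands.tools.mcp import MCPClient

    # a single MCP client for the aggregated listener route
    mcp_client = MCPClient(lambda: streamablehttp_client(f"{KONG}/mcp"))

    with mcp_client:
        agent = Agent(model=model, tools=mcp_client.list_tools_sync())
        agent("What is the weather like in my location?")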

The result should be the same as before.

You can also check the Konnect Analytics Explorer dashboard once again.

Conclusion

We presented a basic AI agent using Kong AI Gateway, Strands and Amazon Bedrock, including LLM models and MCP servers. It's totally feasible to implement advanced AI agents with query transformation, multiple data sources, multiple retrieval stages, RAG, etc. Moreover, Kong AI Gateway provides other plugins to enrich the relationship with the LLM providers and MCP servers, including Semantic Cache, Semantic Routing, Request and Response Transformation, etc.

Also, it's important to keep in mind that you can continue combining other API Gateway plugins with your AI-based use cases, like using the OIDC plugin to secure your Foundation Models with AWS Cognito, using the Prometheus plugin to monitor your AI Gateway with Amazon Managed Prometheus and Grafana, and so on.

Finally, the architectural flexibility provided natively by Konnect and Kong AI Gateway allows us to deploy the Data Planes in a variety of platforms including AWS EC2 VMs, Amazon ECS, and Kong Dedicated Cloud Gateway, Kong's SaaS service for the Data Planes running in AWS.

You can discover all the features available on the Kong AI Gateway product page, or you can check the Kong and AWS landing page to learn more!
