Engineering
July 31, 2025
6 min read

How to Build a Multi-LLM AI Agent with Kong AI Gateway and LangGraph

Claudio Acquaviva
Principal Architect, Kong

In the first two parts of this series, we discussed How to Strengthen a ReAct AI Agent with Kong AI Gateway and How to Build a Single-LLM AI Agent with Kong AI Gateway and LangGraph. In this third and final part, we're going to evolve the AI Agent to use multiple LLMs, with Semantic Routing policies deciding between them. In this blog post, we'll also explore new capabilities introduced in Kong AI Gateway 3.11 that support other GenAI infrastructures.

Multi-LLM ReAct AI Agent

In this section of the blog post, we're going to evolve the architecture one more time to add two new LLM infrastructures sitting behind the Gateway: Mistral and Anthropic, in addition to OpenAI.

Multi-LLM scenarios and use cases

In the main scenario, the Agent needs to communicate with multiple LLMs selectively, depending on its needs. Having Kong AI Gateway intermediate this communication provides several benefits:

  • Decide which LLM to use based on cost, latency, reliability, and, above all, semantics (some LLMs are better at a specific topic, others at coding, etc.).
  • Route queries to the appropriate LLM(s).
  • Act based on the results.
  • Fallback and redundancy: If one LLM fails or is slow, use another.

Semantic Routing Architecture

Kong AI Gateway offers a range of semantic capabilities, including Caching and Prompt Guard. To implement the Multi-LLM Agent infrastructure, we're going to use the Semantic Routing capability provided by the AI Proxy Advanced plugin we've been using throughout this series.

The AI Proxy Advanced plugin can implement various load-balancing policies, including distributing requests based on the semantic similarity between each prompt and the description of each model. For example, consider three models: the first trained on sports, the second on music, and the third on science. We want to route each request accordingly, based on the topic of its prompt.

At configuration time (for example, when submitting decK declarations to the Konnect Control Plane), the plugin sends each model description to the embeddings model and stores the resulting embeddings in the vector database.

Then, for each incoming request, the plugin submits a Vector Similarity Search (VSS) query to the vector database to decide which LLM the request should be routed to.

Semantic Routing configuration and request processing times

Redis

To implement the Semantic Routing architecture, we're going to use the Redis Stack Helm Charts to deploy Redis as our vector database.
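For example, here's a minimal install sketch; the repository URL, chart name, and namespace are assumptions to adapt to your environment:

```bash
# deploy Redis Stack into its own namespace (names are assumptions)
helm repo add redis-stack https://redis-stack.github.io/helm-redis-stack
helm repo update
helm install redis-stack redis-stack/redis-stack -n redis --create-namespace
```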

Ollama

As our embedding model, we're going to consume the “mxbai-embed-large:latest” model served locally by Ollama. Use the Ollama Helm Charts to install it.
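A sketch assuming the community otwld/ollama-helm chart; the values key for preloading models varies across chart versions, so treat it as an assumption:

```bash
# deploy Ollama and preload the embedding model (values key is an assumption)
helm repo add ollama-helm https://otwld.github.io/ollama-helm/
helm repo update
helm install ollama ollama-helm/ollama -n ollama --create-namespace \
  --set 'ollama.models.pull[0]=mxbai-embed-large'
```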

Python Script

In this final AI Agent Python script, we have two main changes:

  • We have replaced the tools with new functions:
    • “get_music”: consumes the Event Registry service to look for music concerts.
    • “get_traffic”: sends requests to the Tavily service for traffic information.
    • “get_weather”: remains the same, backed by the OpenWeather public service.
  • We have replaced the LangGraph calls that built the graph by hand with a pre-built LangGraph function, “create_react_agent”.

The pre-built “create_react_agent” function implements the same fundamental ReAct graph that we previously created programmatically. That is, the agent is composed of:

  • A Node sending requests to the LLM
  • A “conditional_edge” associated with this Node, deciding how the Agent should proceed when it gets a response from the LLM
  • A Node to call tools

In fact, if you print the graph again with the “graph.get_graph().draw_ascii()” function, you'll see the same structure we had in the previous version of the agent.
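Here's a minimal sketch of the final agent (not the post's full script): the Gateway URL, route path, tool bodies, and prompt below are placeholder assumptions.

```python
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent

# All LLM traffic goes through the Kong AI Gateway route; the Gateway injects
# the real provider credentials, so the client-side key is a dummy value.
llm = ChatOpenAI(base_url="http://localhost:8000/agent-route", api_key="dummy")

def get_weather(city: str) -> str:
    """Current weather for a city, via the OpenWeather service behind Kong."""
    ...

def get_music(city: str) -> str:
    """Music concerts in a city, via the Event Registry service behind Kong."""
    ...

def get_traffic(city: str) -> str:
    """Traffic information for a city, via the Tavily service behind Kong."""
    ...

# The prebuilt function wires up the same ReAct graph we built by hand before:
# an LLM node, a conditional edge, and a tool node.
graph = create_react_agent(llm, tools=[get_weather, get_music, get_traffic])
print(graph.get_graph().draw_ascii())

# Example run: a music-related prompt.
result = graph.invoke(
    {"messages": [("user", "Any music concerts in London this weekend?")]}
)
print(result["messages"][-1].content)
```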

For this execution, the AI Proxy Advanced Plugin will route the request to Mistral, since it's related to music.

decK Declaration

Below you can check the new decK declaration for the Semantic Routing use case. The AI Proxy Advanced plugin has the following sections configured:

  • embeddings: the embedding model the plugin should call to generate embeddings for the LLM model descriptions
  • vectordb: the database responsible for storing the embeddings and handling the VSS queries
  • targets: an entry for each LLM model. The most important setting is the description, which drives where the plugin routes each request.

In addition, the declaration applies the AI Prompt Decorator plugin so the Gateway asks the LLM to convert temperatures to Celsius.
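Here's a trimmed sketch of how those sections fit together. The in-cluster Redis and Ollama addresses, the model names, and the topic descriptions (other than music routing to Mistral) are illustrative assumptions; check the AI Proxy Advanced schema for your Gateway version for the exact field names.

```yaml
_format_version: "3.0"
services:
  - name: agent-service
    host: localhost   # placeholder upstream; the plugin proxies to the LLMs
    routes:
      - name: agent-route
        paths: ["/agent-route"]
        plugins:
          - name: ai-proxy-advanced
            config:
              balancer:
                algorithm: semantic
              embeddings:
                model:
                  provider: openai          # Ollama exposes an OpenAI-compatible API
                  name: mxbai-embed-large
                  options:
                    upstream_url: http://ollama.ollama.svc.cluster.local:11434/v1/embeddings
              vectordb:
                strategy: redis
                redis:
                  host: redis-stack.redis.svc.cluster.local
                  port: 6379
                distance_metric: cosine
                threshold: 0.7
                dimensions: 1024            # mxbai-embed-large embedding size
              targets:
                - model:
                    provider: openai
                    name: gpt-4o
                  route_type: llm/v1/chat
                  auth:
                    header_name: Authorization
                    header_value: Bearer <OPENAI_API_KEY>
                  description: "weather, temperature, forecasts"
                - model:
                    provider: mistral
                    name: mistral-large-latest
                  route_type: llm/v1/chat
                  auth:
                    header_name: Authorization
                    header_value: Bearer <MISTRAL_API_KEY>
                  description: "music, concerts, bands, festivals"
                - model:
                    provider: anthropic
                    name: claude-3-5-sonnet-20241022
                    options:
                      anthropic_version: "2023-06-01"
                  route_type: llm/v1/chat
                  auth:
                    header_name: x-api-key
                    header_value: <ANTHROPIC_API_KEY>
                  description: "traffic, driving conditions, road trips"
          - name: ai-prompt-decorator
            config:
              prompts:
                prepend:
                  - role: system
                    content: "Convert all temperatures to Celsius."
```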

Grafana Dashboards

Download and install the Grafana Dashboard available in the GitHub repository. It has two tiles:

  • Counter of requests for each Kong Route
  • Counter of requests for each LLM model

The dashboard is entirely based on the metrics generated by the Prometheus plugin. The configuration is divided into two parts:

  • AI Proxy Advanced plugin, with statistics logging enabled on each target
  • Prometheus plugin, with AI metrics enabled
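A sketch showing only the relevant parameters (the exact field names depend on your Gateway version's plugin schemas):

```yaml
plugins:
  - name: ai-proxy-advanced
    config:
      targets:
        - logging:
            log_statistics: true   # emit token and latency statistics per target
          # ...model, route_type, auth, and description as in the declaration above
  - name: prometheus
    config:
      ai_metrics: true             # expose AI metrics on the /metrics endpoint
```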

LangGraph Server

Now that we have our final version of the AI Agent, it's time to build a LangGraph Server based on it. You have multiple deployment options for running a LangGraph Server, but we're going to use our own Minikube cluster with the deployment option called Standalone Container.

For details, you can refer to the links below:

  • LangGraph Cloud API Reference
  • Helm Chart for LangGraph Cloud

Agent Docker Image

The first step is to create the Docker image for the server. Compared to the previous script, the code below removes the lines where we execute the graph. The other change is the Kong Data Plane address, which now refers to the Kubernetes Service FQDN.
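A sketch of the server-ready module (assumptions: the file name agent.py, the Kong proxy Service FQDN, and the same placeholder tools as before):

```python
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent

# Kubernetes Service FQDN of the Kong Data Plane instead of a local address
KONG_DP_URL = "http://kong-kong-proxy.kong.svc.cluster.local:80"

llm = ChatOpenAI(base_url=f"{KONG_DP_URL}/agent-route", api_key="dummy")

def get_weather(city: str) -> str:
    """Current weather for a city."""
    ...

def get_music(city: str) -> str:
    """Music concerts in a city."""
    ...

def get_traffic(city: str) -> str:
    """Traffic information for a city."""
    ...

graph = create_react_agent(llm, tools=[get_weather, get_music, get_traffic])
# Note there is no graph.invoke(...) here: the LangGraph Server imports
# "graph" and executes it on behalf of API callers.
```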

langgraph.json

The Docker image requires a “langgraph.json” file listing the dependencies and the name of the graph variable inside the code, in our case “graph”.
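A minimal example, assuming the agent code lives in agent.py and the compiled graph variable is named graph:

```json
{
  "dependencies": ["."],
  "graphs": {
    "agent": "./agent.py:graph"
  }
}
```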

Docker image creation

Create the image with the “langgraph” CLI command; it requires Docker installed in your environment. You can build the image directly or generate a Dockerfile and build it with Docker yourself, then push the result to Docker Hub. Here's a sketch with a hypothetical image name (the CLI ships in the “langgraph-cli” Python package):
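```bash
pip install langgraph-cli

# build the image directly...
langgraph build -t your-user/langgraph-agent:latest

# ...or generate a Dockerfile and build it with Docker
langgraph dockerfile Dockerfile
docker build -t your-user/langgraph-agent:latest .

# push it to Docker Hub
docker push your-user/langgraph-agent:latest
```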

Agent Deployment

Install your LangGraph Server using the available Helm Chart. First, add the chart repository (the repository URL is an assumption based on the langchain-ai Helm charts):
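```bash
helm repo add langchain https://langchain-ai.github.io/helm/
helm repo update
```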

The “values.yaml” file defines the service as “LoadBalancer” to make it reachable. Currently, only Postgres is supported as the database for the LangGraph Server, and Redis as the task queue. The file also specifies the Postgres resources for its Kubernetes deployment. Finally, the LangGraph Server requires a LangSmith API key; LangSmith is the platform used to monitor your server. Log in to LangSmith and create your API key.

Deploy the LangGraph Server:
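```bash
# assumptions: release name "langgraph", chart name "langgraph-cloud", and a
# local values.yaml with the settings described above
helm install langgraph langchain/langgraph-cloud -f values.yaml \
  -n langgraph --create-namespace
```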

If you want to uninstall it, run:
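```bash
helm uninstall langgraph -n langgraph
```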

LangGraph Server API

Once the LangGraph Server is deployed, you can use its API to send requests to your graph.

Look for your assistants with the “/assistants/search” endpoint (assuming the server's external address is exported as LANGGRAPH_URL):
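```bash
curl -s -X POST $LANGGRAPH_URL/assistants/search \
  -H "Content-Type: application/json" \
  -d '{"limit": 10, "offset": 0}'
```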

The expected response looks like this (trimmed; the IDs and timestamps below are placeholders):
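```json
[
  {
    "assistant_id": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
    "graph_id": "agent",
    "name": "agent",
    "created_at": "..."
  }
]
```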

Use the assistant's graph name to invoke the graph, for example via the stateless “/runs/wait” endpoint (the prompt is illustrative):
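```bash
curl -s -X POST $LANGGRAPH_URL/runs/wait \
  -H "Content-Type: application/json" \
  -d '{
        "assistant_id": "agent",
        "input": {
          "messages": [
            {"role": "user", "content": "Any music concerts in London this weekend?"}
          ]
        }
      }'
```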

The expected response is the graph's final state. Its shape looks like this (the actual content depends on the LLM and the tools it called):
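```json
{
  "messages": [
    {"type": "human", "content": "Any music concerts in London this weekend?"},
    {"type": "ai", "content": "..."}
  ]
}
```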

Kong AI Gateway 3.11 and Support for New GenAI Models

With Kong AI Gateway 3.11, we're able to support other GenAI infrastructures besides LLMs, covering new modalities such as image, video, and audio generation.

Here's an example of a Kong Route declaration with the AI Proxy Advanced plugin enabled to protect OpenAI's text-to-image DALL·E 2 model (a trimmed sketch; the route names and parameter spellings are assumptions based on the 3.11 schema):
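```yaml
routes:
  - name: image-route
    paths: ["/image-route"]
    plugins:
      - name: ai-proxy-advanced
        config:
          genai_category: image/generation
          targets:
            - model:
                provider: openai
                name: dall-e-2
              route_type: image/v1/images/generations
              auth:
                header_name: Authorization
                header_value: Bearer <OPENAI_API_KEY>
```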

To support this, Kong AI Gateway 3.11 introduces new configuration parameters:

  • genai_category: configures the class of GenAI infrastructure that the gateway protects. Besides image/generation, it supports, for example, text/generation and text/embeddings for regular LLMs and embedding models, and audio/speech and audio/transcription for audio-based models implementing speech recognition, audio-to-text, etc.
  • route_type: this existing parameter has been extended to support new types, such as:
    • LLM: llm/v1/responses, llm/v1/assistants, llm/v1/files and llm/v1/batches
    • Image: image/v1/images/generations, image/v1/images/edits
    • Audio: audio/v1/audio/speech, audio/v1/audio/transcriptions and audio/v1/audio/translations
    • Realtime: realtime/v1/realtime

Conclusion

This blog post has presented a multi-LLM AI Agent built with Kong AI Gateway and LangGraph. Redis was used as the vector database, and a local Ollama instance provided the embedding model.

Behind the Gateway, we have three LLM infrastructures (OpenAI, Mistral, and Anthropic), and the AI Agent used three external functions as tools.

The Gateway was responsible for abstracting the LLM infrastructures and protecting the external functions with specific policies including Rate Limiting and API Keys.

You can discover all the features available on the Kong AI Gateway page.

