Blog: LLM

Build Your Own Internal RAG Agent with Kong AI Gateway

AI Gateway | July 9, 2025

What Is RAG, and Why Should You Use It? RAG (Retrieval-Augmented Generation) is not a new concept in AI, and unsurprisingly, when talking to companies, everyone seems to have their own interpretation of how to implement it. So, let’s start with a refresher. RAG (short for Retrieval-Augmented Gener…

Antoine Jacquemin

Kong Konnect: Introducing HashiCorp Vault Support for LLMs

AI | June 26, 2025

If you're a builder, you likely keep sending your LLM credentials on every request from your agents and applications. But if you operate in an enterprise environment, you'll want to store your credentials in a secure third party like HashiCorp Vault or an IdP provider and have the infrastructure…

Marco Palladino

72% Say Enterprise GenAI Spending Going Up in 2025, Study Finds

AI | June 18, 2025

Enterprise adoption of large language models (LLMs) is surging. According to Gartner, more than 80% of enterprises will have deployed generative AI (GenAI) applications or used GenAI APIs by 2026, up from just 5% in 2023. That stark increase paints a telling picture: LLMs have evolved from a…

Eric Pulsifer

Kong AI Manager: Govern & Observe Agentic Traffic to Thousands of LLMs

AI | May 27, 2025

Today, we're excited to announce the general availability of AI Manager in Kong Konnect, the platform to manage all of your API, AI, and event connectivity across all modern digital applications and AI agents. Kong already provides the fastest and most feature-rich AI Gateway technology to manage…

Marco Palladino

Consistently Hallucination-Proof Your LLMs with Automated RAG

AI | April 2, 2025

AI is quickly transforming the way businesses operate, turning what was once futuristic into everyday reality. However, we're still in the early innings of AI, and there are several key limitations that organizations should remain aware of to ensure that AI is being leveraged in a…

Adam Jiroun

PII Sanitization Needed for LLMs and Agentic AI is Now Easier to Build

AI | April 2, 2025

LLMs operate as highly capable, non-deterministic pattern matchers. But they come with two significant privacy challenges. If you pass raw user input, internal logs, or structured data directly into an LLM without safeguards, you're risking the exposure of names, emails, credit cards, health info,…

Alex Drag

Announcing Kong AI Gateway 3.8 With Semantic Caching and Security, 6 New LLM Load-Balancing Algorithms, and More LLMs

AI Gateway | September 11, 2024

Today at API Summit, we're introducing one of the biggest new releases of our AI Gateway technology: a new class of intelligent semantic plugins, new advanced load-balancing capabilities for LLMs, and official support for AWS Bedrock and GCP Vertex, in addition to all the other supported LLM…

Marco Palladino

Introducing LLM Analytics in Kong Konnect for GenAI Traffic

API Analytics | September 11, 2024

We’re pleased to announce the new LLM Usage reporting feature in Advanced Analytics, which aims to help organizations better manage their large language model (LLM) usage. This feature offers insights into token consumption, costs, and latency, allowing businesses to optimize their AI investments.…

Christian Heidenreich

Training AI Models to Invoke APIs: The Gorilla Project Offers Next Evolution of Language Models

AI | December 13, 2023

AI has been taking the world by storm, revolutionizing the way users synthesize information through large language models (LLMs) and interact with the world. However, the current state of AI isn't perfect and can be further optimized. This is…

Peter Barnard

See AI Gateway in Action

Secure your AI infrastructure with prompt guards, PII sanitization, and centralized governance. Control LLM costs with token-based rate limiting and semantic routing across providers.

Get a Demo


Increase developer productivity, security, and performance at scale with the unified platform for API management and AI.

© Kong Inc. 2026
  • Terms
  • Privacy
  • Trust and Compliance