Product Releases
October 14, 2025
3 min read

Introducing the Volcano SDK to Build AI Agents in a Few Lines of Code 🌋

Marco Palladino
CTO and Co-Founder of Kong

Today, we're open-sourcing Volcano SDK, a TypeScript SDK for building AI agents that combines LLM reasoning with real-world actions through MCP tools.

Why Volcano SDK? One reason: 9 lines of code are faster to write and easier to manage than 100+.

Without Volcano SDK? You'd need 100+ lines handling tool schemas, context management, provider switching, error handling, and HTTP clients. 

With Volcano SDK: 9 lines.

Here's how we compress those 100+ lines in the following example:

import { agent, llmOpenAI, llmAnthropic, mcp } from "volcano-ai";


// Setup: two LLMs, two MCP servers
const planner = llmOpenAI({ model: "gpt-5-mini", apiKey: process.env.OPENAI_API_KEY! });
const executor = llmAnthropic({ model: "claude-4.5-sonnet", apiKey: process.env.ANTHROPIC_API_KEY! });
const database = mcp("https://api.company.com/database/mcp");
const slack = mcp("https://api.company.com/slack/mcp");


// One workflow
await agent({ llm: planner })
 .then({
   prompt: "Analyze last week's sales data",
   mcps: [database]  // Auto-discovers and calls the right tools
 })
 .then({
   llm: executor,  // Switch to Claude
   prompt: "Write an executive summary"
 })
 .then({
   prompt: "Post the summary to #executives",
   mcps: [slack]
 })
 .run();

That's it. GPT-5 queries the database, Claude writes the summary, and it posts to Slack. Context flows automatically between steps. Tool schemas are discovered automatically. Errors are retried. All in 9 lines of actual workflow code.

✅ 4 lines of setup — Two LLMs, two MCP servers

✅ Multi-LLM workflow — GPT-5 analyzes, Claude writes

✅ Automatic tool selection — SDK picks the right tools

✅ Chainable steps — Read like a story

✅ Context flows automatically — Each step sees previous results

✅ Production-ready — Built-in retries, timeouts, telemetry

Start building AI agents at volcano.dev.

Why we created Volcano SDK

At Kong, we build a lot of AI agents. Each time, we ended up writing the same code: managing conversation history, handling tool schemas, switching providers, implementing retries, connecting to MCP servers.

We looked at existing options:

  • LangChain: Powerful, but heavy. Too many abstractions between us and what we wanted to do.
  • Provider SDKs (OpenAI, Anthropic): Great for single calls, but no help chaining steps or managing context across providers.
  • Other agent frameworks: Either too opinionated or missing basic features like streaming and proper error handling.

We just wanted to chain LLM calls with different providers and use MCP tools without writing infrastructure code every time (a rough sketch of that boilerplate follows the list below). It should be simple:

  1. Call an LLM 
  2. Maybe use some tools 
  3. Call another LLM with different strengths 
  4. Chain it all together without losing context
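For contrast, here's a rough sketch of the plumbing you'd write with the raw provider SDKs alone. It's illustrative only: it assumes the official openai and @anthropic-ai/sdk Node packages, hard-codes the handoff between providers, and leaves out the MCP tool-discovery loop, retries, and timeouts entirely (which is where most of those 100+ lines actually go).

import OpenAI from "openai";
import Anthropic from "@anthropic-ai/sdk";

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
const anthropic = new Anthropic({ apiKey: process.env.ANTHROPIC_API_KEY });

// Step 1: ask GPT to analyze the data. In reality you'd first fetch the MCP
// server's tool list, translate each schema into OpenAI's function format,
// run the tool-call loop, and feed results back in. All of that is omitted here.
const analysis = await openai.chat.completions.create({
  model: "gpt-5-mini", // model names mirror the Volcano example above
  messages: [{ role: "user", content: "Analyze last week's sales data" }],
});
const analysisText = analysis.choices[0].message.content ?? "";

// Step 2: pass the context to Claude manually: different client, different
// message shape, different response shape.
const summary = await anthropic.messages.create({
  model: "claude-4.5-sonnet",
  max_tokens: 1024,
  messages: [{ role: "user", content: `Write an executive summary of:\n${analysisText}` }],
});
const firstBlock = summary.content[0];
const summaryText = firstBlock.type === "text" ? firstBlock.text : "";

// Step 3: posting to Slack means yet another client and auth scheme, plus the
// retry and timeout logic you still haven't written.
console.log(summaryText);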

So, we built Volcano SDK. And now you can start building with it.

What makes Volcano SDK different?

A few core concepts run throughout Volcano and set it apart from the pack:

  • MCP is a first-class citizen: Tools are as natural as prompts. Pass an array of MCP servers, and the LLM automatically discovers and calls the right tools.
  • Volcano SDK is multi-provider by design: Use GPT-5 for planning, Claude for writing, Llama for classification — all in the same workflow. Context flows automatically between them.
  • Volcano prioritizes chainable simplicity: Our code reads like your intent. No state machines. No middleware hell. Just .then() your way to production. 
  • It’s production-ready out of the box: OpenTelemetry tracing, automatic retries, configurable timeouts, connection pooling, OAuth support — the boring stuff you need, built in.
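As a taste of those production knobs, here's a hypothetical sketch that reuses the planner and database definitions from the first example. Only agent(), .then(), .run(), mcp(), and the llm*() factories appear in this post; the timeoutMs and retries option names below are illustrative assumptions, not confirmed API.

// Hypothetical: option names are illustrative, not taken from the post.
try {
  await agent({ llm: planner })
    .then({
      prompt: "Analyze last week's sales data",
      mcps: [database],
      timeoutMs: 30_000, // assumed per-step timeout option
      retries: 2,        // assumed retry budget on top of the built-in defaults
    })
    .run();
} catch (err) {
  // Whatever the SDK surfaces once its built-in retries are exhausted lands
  // here, so you can alert or fall back.
  console.error("workflow failed:", err);
}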

We wanted to write agents like we write promises: one elegant chain that does exactly what it says, with the right tool for each job:

await agent({ llm: planner })
 .then({ prompt: "Analyze data", mcps: [db] })
 .then({ llm: executor, prompt: "Write summary" })
 .then({ prompt: "Post to Slack", mcps: [slack] })
 .run();

Volcano is focused on one thing: multi-step workflows that mix different LLMs and MCP tools. It's TypeScript-first, type-safe, and small enough to understand in an afternoon.

Get started building AI agents with Volcano SDK

It’s time to start delivering on the agentic promise. Head to volcano.dev and start building powerful AI agents with the Volcano SDK. We can’t wait to see what you build.
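A minimal starting point, assuming the package is published under the volcano-ai name used in the imports above and sticking to the calls shown in this post:

// npm install volcano-ai   (package name assumed from the import above)
import { agent, llmOpenAI } from "volcano-ai";

const llm = llmOpenAI({ model: "gpt-5-mini", apiKey: process.env.OPENAI_API_KEY! });

// A single-step agent: one prompt, no tools, run it.
await agent({ llm })
 .then({ prompt: "Say hello from my first Volcano agent" })
 .run();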

Topics: Agentic AI, LLM, AI, Open Source
