Product Releases
October 14, 2025
3 min read

Introducing the Volcano SDK to Build AI Agents in a Few Lines of Code 🌋

Marco Palladino
CTO and Co-Founder of Kong
Topics: Agentic AI, LLM, AI, Open Source


Today, we're open-sourcing Volcano SDK, a TypeScript SDK for building AI agents that combines LLM reasoning with real-world actions through MCP tools.

Why Volcano SDK? One reason: 9 lines of code are faster to write and easier to manage than 100+.

Without Volcano SDK? You'd need 100+ lines handling tool schemas, context management, provider switching, error handling, and HTTP clients. 

With Volcano SDK: 9 lines.

Here's how those 100+ lines compress down to 9 in the following example:

import { agent, llmOpenAI, llmAnthropic, mcp } from "volcano-ai";


// Setup: two LLMs, two MCP servers
const planner = llmOpenAI({ model: "gpt-5-mini", apiKey: process.env.OPENAI_API_KEY! });
const executor = llmAnthropic({ model: "claude-4.5-sonnet", apiKey: process.env.ANTHROPIC_API_KEY! });
const database = mcp("https://api.company.com/database/mcp");
const slack = mcp("https://api.company.com/slack/mcp");


// One workflow
await agent({ llm: planner })
 .then({
   prompt: "Analyze last week's sales data",
   mcps: [database]  // Auto-discovers and calls the right tools
 })
 .then({
   llm: executor,  // Switch to Claude
   prompt: "Write an executive summary"
 })
 .then({
   prompt: "Post the summary to #executives",
   mcps: [slack]
 })
 .run();

That's it. GPT-5 queries the database, Claude writes the summary, and the summary is posted to Slack. Context flows automatically between steps. Tool schemas are discovered automatically. Errors are retried. All in 9 lines of actual workflow code.

✅ 4 lines of setup — Two LLMs, two MCP servers

✅ Multi-LLM workflow — GPT-5 analyzes, Claude writes

✅ Automatic tool selection — SDK picks the right tools

✅ Chainable steps — Read like a story

✅ Context flows automatically — Each step sees previous results

✅ Production-ready — Built-in retries, timeouts, telemetry

Start building AI agents at volcano.dev.

Why we created Volcano SDK

At Kong, we build a lot of AI agents. Each time, we ended up writing the same code: managing conversation history, handling tool schemas, switching providers, implementing retries, connecting to MCP servers.

We looked at existing options:

  • LangChain: Powerful, but heavy. Too many abstractions between us and what we wanted to do.
  • Provider SDKs (OpenAI, Anthropic): Great for single calls, but no help chaining steps or managing context across providers.
  • Other agent frameworks: Either too opinionated or missing basic features like streaming and proper error handling.

We just wanted to chain LLM calls with different providers and use MCP tools without writing infrastructure code every time. It should be simple:

  1. Call an LLM 
  2. Maybe use some tools 
  3. Call another LLM with different strengths 
  4. Chain it all together without losing context
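To make the contrast concrete, here's a rough sketch of what hand-rolling those four steps looks like when you call the provider SDKs directly. The package names, model IDs, and wiring here are illustrative assumptions, and this still omits the MCP tool plumbing, retries, and Slack posting a real version would need:

import OpenAI from "openai";
import Anthropic from "@anthropic-ai/sdk";

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY! });
const anthropic = new Anthropic({ apiKey: process.env.ANTHROPIC_API_KEY! });

// Step 1: first LLM call. Real tool use against the database would add
// schema definitions and a function-calling loop here.
const analysis = await openai.chat.completions.create({
 model: "gpt-5-mini", // illustrative model ID
 messages: [{ role: "user", content: "Analyze last week's sales data" }],
});
const analysisText = analysis.choices[0].message.content ?? "";

// Step 2: switch providers, manually threading the previous result into the prompt.
const summary = await anthropic.messages.create({
 model: "claude-4.5-sonnet", // illustrative model ID
 max_tokens: 1024,
 messages: [{ role: "user", content: `Write an executive summary of:\n${analysisText}` }],
});
const summaryText = summary.content
 .map((block) => (block.type === "text" ? block.text : ""))
 .join("");

// Step 3: posting to #executives would mean a separate Slack client,
// auth, retries, and error handling on top of all of the above.
console.log(summaryText);

Every step manually copies context forward, and none of the hard parts are handled yet. That's the infrastructure code we kept rewriting.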

So, we built Volcano SDK. And now you can start building with it.

What makes Volcano SDK different?

A few core concepts run throughout Volcano and set it apart from the pack:

  • MCP is a first-class citizen: Tools are as natural as prompts. Pass an array of MCP servers, and the LLM automatically discovers and calls the right tools.
  • Volcano SDK is multi-provider by design: Use GPT-5 for planning, Claude for writing, Llama for classification — all in the same workflow. Context flows automatically between them.
  • Volcano prioritizes chainable simplicity: Our code reads like your intent. No state machines. No middleware hell. Just .then() your way to production. 
  • It’s production-ready out of the box: OpenTelemetry tracing, automatic retries, configurable timeouts, connection pooling, OAuth support — the boring stuff you need, built in.
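On the tracing point above: how Volcano exports its spans is a question for its docs, but as a rough sketch, the standard Node OpenTelemetry bootstrap that any OTel-instrumented library plugs into looks something like this (the exporter choice is an assumption for illustration):

import { NodeSDK } from "@opentelemetry/sdk-node";
import { ConsoleSpanExporter } from "@opentelemetry/sdk-trace-node";

// Standard Node OpenTelemetry bootstrap: start it before running the agent
// so any spans emitted by instrumented libraries have an exporter to land in.
const sdk = new NodeSDK({
 traceExporter: new ConsoleSpanExporter(), // swap for an OTLP exporter in production
});
sdk.start();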

We wanted to write agents like we write promises: one elegant chain that does exactly what it says, with the right tool for each job:

await agent({ llm: planner })
 .then({ prompt: "Analyze data", mcps: [db] })
 .then({ llm: executor, prompt: "Write summary" })
 .then({ prompt: "Post to Slack", mcps: [slack] })
 .run();

Volcano is focused on one thing: multi-step workflows that mix different LLMs and MCP tools. It's TypeScript-first, type-safe, and small enough to understand in an afternoon, with multi-LLM support and the production features above built in.

Get started building AI agents with Volcano SDK

It’s time to start delivering on the agentic promise. Start building powerful AI agents with the Volcano SDK at volcano.dev. We can’t wait to see what you build.
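For a first run, a minimal getting-started sketch could look like the following. The npm package name is an assumption inferred from the import path in the examples above, so check volcano.dev for the exact install instructions:

// npm install volcano-ai   (package name assumed from the import path above)
import { agent, llmOpenAI } from "volcano-ai";

// The smallest possible workflow: one LLM, one step.
const llm = llmOpenAI({ model: "gpt-5-mini", apiKey: process.env.OPENAI_API_KEY! });

await agent({ llm })
 .then({ prompt: "Say hello from my first Volcano agent" })
 .run();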
