Product Releases
September 11, 2024
4 min read

Introducing LLM Analytics in Kong Konnect for GenAI Traffic

Christian Heidenreich
Senior Staff Product Manager, Kong
Topics: API Analytics, AI, AI Gateway, AIOps, LLM


We’re pleased to announce the new LLM Usage reporting feature in Advanced Analytics, which aims to help organizations better manage their large language model (LLM) usage. This feature offers insights into token consumption, costs, and latency, allowing businesses to optimize their AI investments. By enabling comparisons across different LLM providers and models, organizations can make better-informed decisions and improve budget management.

[Image: Advanced Analytics dashboard]

The increasing adoption of AI and its challenges

As businesses increasingly integrate AI into their operations, they encounter a variety of challenges. While AI offers substantial benefits, such as improved efficiency and enhanced customer experiences, the rapid adoption also brings complexities that organizations must navigate.

  • Visibility: Many organizations struggle to gain a clear understanding of how AI, particularly LLMs, is being utilized. This lack of visibility can lead to inefficiencies and missed opportunities for optimization.
  • Cost management: As organizations expand their use of AI technologies, managing costs becomes critical. They must monitor token usage and associated expenses to ensure they're maximizing their investment in AI while minimizing unnecessary spending.
  • Compliance and governance: Legal and compliance teams need to verify that LLM usage aligns with organizational policies and industry regulations, which requires ongoing monitoring of usage patterns and costs.

Analyze AI traffic with Konnect Advanced Analytics

With the release of Kong API Gateway 3.8, organizations can now leverage Konnect’s Advanced Analytics to get insights into AI traffic. By providing a comprehensive view of their AI usage, this new feature empowers organizations to make data-driven decisions, optimize costs, and improve the overall performance of their AI-powered solutions.

Here are a few example use cases where you can use Advanced Analytics for AI traffic:

  • Using Advanced Analytics, organizations can compare cost and response times of various AI providers, enabling data-driven decisions on which provider to use.
  • By leveraging detailed records of AI requests, organizations can create comprehensive audit trails.
  • Using Advanced Analytics, organizations can track AI usage and costs per department, enabling fair allocation of AI resources and budgeting based on actual usage and value generated.
  • By comparing performance metrics (latency, token usage) across different versions of AI models, organizations can make informed decisions about rolling out updates or rolling back to previous versions if issues are detected.
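The first of these use cases, comparing providers on cost and latency, can be sketched as a simple aggregation over exported request records. The field names and values below are illustrative assumptions, not the actual Konnect export schema:

```python
from collections import defaultdict

# Hypothetical records exported from Advanced Analytics;
# field names are assumptions for illustration only.
records = [
    {"provider": "openai", "llm_latency_ms": 820, "cost_usd": 0.0031},
    {"provider": "openai", "llm_latency_ms": 760, "cost_usd": 0.0028},
    {"provider": "anthropic", "llm_latency_ms": 910, "cost_usd": 0.0024},
    {"provider": "anthropic", "llm_latency_ms": 870, "cost_usd": 0.0026},
]

def compare_providers(records):
    """Average LLM latency and cost per provider."""
    totals = defaultdict(lambda: {"latency": 0.0, "cost": 0.0, "n": 0})
    for r in records:
        t = totals[r["provider"]]
        t["latency"] += r["llm_latency_ms"]
        t["cost"] += r["cost_usd"]
        t["n"] += 1
    return {
        p: {
            "avg_latency_ms": t["latency"] / t["n"],
            "avg_cost_usd": t["cost"] / t["n"],
        }
        for p, t in totals.items()
    }

summary = compare_providers(records)
```

The same comparison is available directly in the Explorer UI; a script like this is only useful when feeding the numbers into downstream tooling.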

Users can now toggle between API Usage and LLM Usage views in Explorer, allowing for tailored metrics, groupings, and filter options specific to AI traffic analysis. The LLM Usage feature introduces critical new metrics, including LLM Latency, Costs, Prompt Tokens, and Completion Tokens, while advanced filtering and grouping options now encompass Provider, Request Model, and Response Model, enabling granular analysis.

Additionally, API Requests now displays detailed LLM Insights for AI-related traffic that has been proxied through the Kong AI Gateway, providing a comprehensive audit trail with rich metadata.
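For an audit trail, each proxied AI request can be flattened into a row that captures provider, models, and token totals. The record shape below is an illustrative assumption, not the exact metadata emitted by the Kong AI Gateway:

```python
# One AI-proxied request as it might appear in an audit export.
# Field names here are hypothetical, chosen to mirror the metrics
# named in this post (prompt tokens, completion tokens, models).
entry = {
    "request_id": "req-123",
    "provider": "openai",
    "request_model": "gpt-4o-mini",
    "response_model": "gpt-4o-mini-2024-07-18",
    "usage": {"prompt_tokens": 112, "completion_tokens": 245},
}

def audit_row(entry):
    """Flatten one request record into an audit-trail row."""
    usage = entry["usage"]
    return {
        "request_id": entry["request_id"],
        "provider": entry["provider"],
        "request_model": entry["request_model"],
        "response_model": entry["response_model"],
        "total_tokens": usage["prompt_tokens"] + usage["completion_tokens"],
    }

row = audit_row(entry)
```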

How to get started

Head over to Kong Konnect and navigate to "Analytics" on your left nav bar. Go to "Explorer" and you'll see a new “From” selector at the top. Switch from "API Usage" to "LLM Usage."

This new switch helps you focus on one dataset at a time — either API or LLM usage. Since these datasets have different characteristics and metrics, separating them reduces information overload and makes analysis easier.

Once you select "LLM Usage," you'll see new LLM-specific metrics and dimensions, such as token usage and model types. This view is tailored to help you understand and optimize your LLM operations within the familiar Explorer interface.

Now let's explore token usage across different providers. This analysis gives you valuable insights into potential costs and helps identify which providers are more cost-effective. To do this:

  • Pick a suitable visualization, such as a bar chart.
  • Select “Total Token Count” as your metric.
  • Choose “Provider” as the “group by” dimension.

This view will clearly display total token usage per provider, allowing for quick comparisons. Keep in mind that higher token usage typically means higher costs, making this information crucial for optimizing your LLM expenses and provider selection.

To add another layer of analysis, include the Control Plane dimension. This will break down LLM usage by environment, team, or line of business. By doing so, you'll gain a comprehensive view of how different parts of your organization are utilizing LLMs. This analysis empowers you to make data-driven decisions about resource allocation and optimization across your entire ecosystem, potentially uncovering opportunities for cost savings or areas needing additional resources.
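The grouping logic behind this view can be sketched as a two-level sum over request records. Again, the record fields are illustrative assumptions rather than the real export schema:

```python
from collections import defaultdict

# Hypothetical request records; field names are assumptions.
records = [
    {"control_plane": "prod", "provider": "openai", "total_tokens": 1200},
    {"control_plane": "prod", "provider": "anthropic", "total_tokens": 800},
    {"control_plane": "staging", "provider": "openai", "total_tokens": 300},
    {"control_plane": "prod", "provider": "openai", "total_tokens": 500},
]

def tokens_by(records, *dims):
    """Sum token counts grouped by the given dimensions, in order."""
    out = defaultdict(int)
    for r in records:
        out[tuple(r[d] for d in dims)] += r["total_tokens"]
    return dict(out)

# Group first by control plane, then by provider -- the same
# two-dimensional breakdown described above.
usage = tokens_by(records, "control_plane", "provider")
```

Passing the dimensions as arguments mirrors how Explorer lets you stack “group by” dimensions without changing the underlying metric.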

Start using Advanced Analytics in Kong Konnect Plus today and unlock the full potential of your AI traffic data, empowering your organization to make informed decisions, optimize costs, and enhance performance across all AI initiatives.


