Engineering
August 7, 2020
4 min read

Tracing With Zipkin in Kong 2.1.0

Enrique García Cota

Why Tracing?

There are a great number of logging plugins for Kong, which might be enough for your needs. However, they have certain limitations:

  • Most of them only work on HTTP/HTTPS traffic.

  • They make sense in an API gateway scenario, with a single Kong cluster proxying traffic between consumers and services, where each log line generally corresponds to a request that is "independent" from the rest. However, in more complex scenarios, such as a service mesh where a single request can span multiple internal requests and responses between several entities before the consumer gets an answer, log lines are no longer "independent".

  • The format in which log lines are saved is useful for security and compliance, as well as for debugging certain kinds of problems, but it may not be useful for others. This is because logging plugins tend to store a single log line per request, as unstructured text. Extracting and filtering information from them (for debugging performance problems, for example) can be quite laborious.

Tracing With Zipkin

Zipkin is a tracing server, specialized in gathering timing data. Used in conjunction with Kong's Zipkin plugin, it can address the points above:

  • It is compatible with both HTTP/HTTPS and stream traffic.

  • Traffic originating from each request is grouped under the "trace" abstraction, even when it travels through several Kong instances.

  • Each request produces a series of one or more "spans." Spans are very useful for debugging timing-related problems, especially when the ones belonging to the same trace are presented grouped together.

  • Information is thus gathered in a more structured way than with other logging plugins, requiring less parsing.

  • The trace abstraction can also be used to debug problems that occur in multi-service environments when requests "hop" over several elements of the infrastructure; they can easily be grouped under the same trace.

Setup

For a basic setup, you only need Docker and Docker Compose. Everything is driven by a docker-compose.yml file.
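A minimal sketch of what that file can look like, assuming openzipkin/zipkin for the tracing server, jmalloc/echo-server for the HTTP echo, istio/tcp-echo-server for the TCP echo, and the kong:2.1.0 image in DB-less mode; the stream listener on port 5555 (used later for TCP tracing) is also an assumption rather than the post's exact configuration:

    version: "3.7"
    services:
      zipkin:
        # Zipkin tracing server; UI and collector API on port 9411
        image: openzipkin/zipkin
        ports:
          - "9411:9411"
      http-echo:
        # Simple HTTP echo service listening on port 8080
        image: jmalloc/echo-server
        ports:
          - "8080:8080"
      tcp-echo:
        # Simple TCP echo service listening on port 9000 (image is an assumption)
        image: istio/tcp-echo-server:1.1
        command: ["9000", "hello"]
        ports:
          - "9000:9000"
      kong:
        image: kong:2.1.0
        environment:
          KONG_DATABASE: "off"
          KONG_DECLARATIVE_CONFIG: /kong/kong.yml
          KONG_PROXY_LISTEN: "0.0.0.0:8000"
          KONG_ADMIN_LISTEN: "0.0.0.0:8001"
          # Stream listener so Kong can proxy raw TCP traffic
          KONG_STREAM_LISTEN: "0.0.0.0:5555"
        volumes:
          - ./kong.yml:/kong/kong.yml
        ports:
          - "8000:8000"
          - "8001:8001"
          - "5555:5555"
        depends_on:
          - zipkin
          - http-echo
          - tcp-echo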

This file declares four Docker services: a Zipkin server, a simple HTTP Echo service, a TCP Echo service, and Kong itself. Given that the Kong instance we are creating is DB-less, it needs a declarative configuration file. It should be called kong.yml and live in the same folder as docker-compose.yml.
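Again a sketch under the same assumptions: the HTTP service is named echo_service to match the Admin API check below, while the TCP service, stream route, and global Zipkin plugin configuration are filled in so the later HTTP and TCP tracing steps have something to route through:

    _format_version: "2.1"

    services:
      # HTTP echo, reachable through Kong's proxy port (8000)
      - name: echo_service
        url: http://http-echo:8080
        routes:
          - name: echo_route
            paths:
              - /

      # TCP echo, reachable through Kong's stream listener (5555)
      - name: tcp_echo_service
        protocol: tcp
        host: tcp-echo
        port: 9000
        routes:
          - name: tcp_echo_route
            protocols:
              - tcp
            destinations:
              - port: 5555

    plugins:
      # Enable the Zipkin plugin globally and sample every request
      - name: zipkin
        config:
          http_endpoint: http://zipkin:9411/api/v2/spans
          sample_ratio: 1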

We should be able to start the four services.
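Assuming both files are in the current directory, that is simply:

    $ docker-compose up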

Leave the console running and open a new one.

Infrastructure Testing

Let's test that all services are up and running.

First, open http://localhost:9411/zipkin/ with a browser. You should see the Zipkin web UI, proving that Zipkin is up and running.

The following checks are done from the command line, mostly with httpie.
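First, ask Kong which services it knows about; with httpie that can be as simple as:

    $ http :8001/services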

That's a call to Kong's Admin API (port 8001) using httpie. You should see a service called echo_service, which indicates that Kong is up.
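Next, hit the HTTP echo server directly, with something like:

    $ http :8080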

That's a straight GET request to the echo server (port 8080) using httpie. It should return HTTP/1.1 200 OK, indicating that the echo server is answering requests.
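The last check is a raw TCP connection rather than an HTTP request, so use netcat or telnet instead of httpie, for example:

    $ nc localhost 9000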

That's a straight TCP connection to the TCP echo server (port 9000). It should answer "hello world." You might need to close the connection with CTRL-C.

Tracing HTTP Traffic Through Kong

Let's start generating some traces.
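With httpie, a POST through Kong's proxy port can look like this (the / path comes from the route sketched in kong.yml; name=value arguments form a small JSON body):

    $ http POST :8000/ hello=world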

That's a POST request to the HTTP echo server, proxied through Kong's proxy port (8000). The response should be very similar to the previous one, but with added response headers like Via: kong/2.1.0. You should also see some of the headers that the Zipkin plugin added to the request, such as x-b3-traceid.

Let's generate some more traffic.
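Any handful of requests through the proxy port will do; for example:

    $ for i in 1 2 3 4 5; do http POST :8000/ count=$i; done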

Now go back to the Zipkin UI in your browser (http://localhost:9411/zipkin/) and click on the magnifying glass icon (🔎). You might need to click on "Try new Lens UI" first. You should be able to see each request as a series of spans.

The Zipkin UI should display one trace per HTTP request. You can click on each individual trace to see more details, including how long each of Kong's phases took to execute.

Preservation of B3 Tracing Headers
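To see this in action, send a request that already carries a trace id. With httpie, headers are passed as name:value arguments; the 32-character id below matches the Zipkin link shown further down:

    $ http POST :8000/ x-b3-traceid:12345678901234567890123456789012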

This request is similar to the ones made before, but it includes an HTTP header called x-b3-traceid. This kind of header is used when a request is handled by several microservices, and it should remain unaltered as the request is passed from one service to another.

In this case, we can verify that Kong respects this convention by inspecting the headers returned in the response.
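One of them should carry the trace id we sent, assuming the value used above:

    x-b3-traceid: 12345678901234567890123456789012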

You should also be able to see the trace in the Zipkin UI by following this link:

http://localhost:9411/zipkin/traces/12345678901234567890123456789012

Our Zipkin plugin currently supports several of these tracing headers, and more are planned for future releases!

Tracing TCP Traffic Through Kong

The next command opens a TCP connection, through Kong, to our TCP Echo server with Netcat. Write some names and press Enter to generate TCP traffic.
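Assuming the stream listener sketched in docker-compose.yml above, Kong accepts TCP connections on port 5555:

    $ nc localhost 5555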

Press Ctrl-C to finish and close the connection — that is a requirement for the next step.

This should have created some traces on Zipkin, this time for TCP traffic. Open the Zipkin UI in your browser (http://localhost:9411/zipkin/) and click on the magnifying glass icon (🔎). You should see new traces with spans. Note that the exact number of traces you see may vary.

As before, you can see each individual trace in more detail by clicking on it.

TCP traffic, by its nature, does not carry tracing headers, so header preservation does not apply here.

Conclusion

This was just an overview of some of the features available via the Kong Zipkin plugin when Kong is used as an API gateway. The tracing headers feature, in particular, can help shed some light on problems occurring in multi-service setups. Stay tuned for more articles on this topic!
