Engineering
August 7, 2020
4 min read

Tracing With Zipkin in Kong 2.1.0

Enrique García Cota

Why Tracing?

There are many logging plugins for Kong, which might be enough for your needs. However, they have certain limitations:

  • Most of them only work on HTTP/HTTPS traffic.

  • They make sense in an API gateway scenario, with a single Kong cluster proxying traffic between consumers and services. Each log line will generally correspond to a request which is "independent" from the rest. However, in more complex scenarios like a service mesh system where a single request can span multiple, internal requests and responses between several entities before the consumer gets an answer, log lines are not "independent" anymore.

  • The format in which log lines are saved is useful for security and compliance, as well as for debugging certain kinds of problems, but it may not be useful for others. This is because logging plugins tend to store a single log line per request, as unstructured text. Extracting and filtering information from them — for debugging performance problems, for example — can be quite laborious.

Tracing With Zipkin

Zipkin is a tracing server, specialized in gathering timing data. Used in conjunction with Kong's Zipkin plugin, it can address the points above:

  • It is compatible with both HTTP/HTTPS and stream traffic.

  • Traffic originating from each request is grouped under the "trace" abstraction, even when it travels through several Kong instances.

  • Each request produces a series of one or more "spans." Spans are very useful for debugging timing-related problems, especially when the ones belonging to the same trace are presented grouped together.

  • Information is thus gathered in a more structured way than with other logging plugins, requiring less parsing.

  • The trace abstraction can also be used to debug problems that occur in multi-service environments when requests "hop" over several elements of the infrastructure; they can easily be grouped under the same trace.
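For reference, here is a minimal sketch of the kind of payload a tracer reports to Zipkin: each span is a small JSON object sent to the collector's /api/v2/spans endpoint. The field names follow Zipkin's v2 API; the concrete values below (span name, duration, service name) are made up for illustration.

```python
import json
import time

# A minimal Zipkin v2 span. traceId and id are hex strings;
# timestamp and duration are expressed in microseconds.
span = {
    "traceId": "12345678901234567890123456789012",
    "id": "a2fb4a1d1a96d312",
    "name": "get",
    "kind": "SERVER",
    "timestamp": int(time.time() * 1_000_000),
    "duration": 1500,  # 1.5 ms
    "localEndpoint": {"serviceName": "kong"},
    "tags": {"http.method": "GET", "http.status_code": "200"},
}

# The /api/v2/spans endpoint accepts a JSON array of such spans.
payload = json.dumps([span])
```

Because the payload is structured JSON rather than free-form text, tools can filter and aggregate it (by service, by duration, by tag) without any parsing heuristics — which is exactly the advantage over unstructured log lines described above.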

Setup

For a basic setup, you only need Docker and Docker Compose. Here's the docker-compose.yml file that we will use:

  version: '3'
  
  networks:
    kong-net:
  
  services:
    # Kubernetes echo service, it just returns the requests it receives as responses
    # This service will act as our upstream
    http-echo:
      image: gcr.io/kubernetes-e2e-test-images/echoserver:2.2
      networks:
        - kong-net
      ports:
        - "8080:8080"
  
    tcp-echo:
      image: istio/tcp-echo-server:1.1
      networks:
        - kong-net
      ports:
        - "9000:9000"
  
    # Our zipkin server
    zipkin:
      image: openzipkin/zipkin:2
      networks:
        - kong-net
      ports:
        - "9411:9411"
      restart: on-failure
  
    # Our Kong API gateway
    # It expects a conf file in ./kong.yml for declarative configuration
    kong:
      image: kong/kong:2.1.0
      networks:
        - kong-net
      environment:
        KONG_LOG_LEVEL: debug
        KONG_ADMIN_ACCESS_LOG: /dev/stdout
        KONG_ADMIN_ERROR_LOG: /dev/stderr
        KONG_ADMIN_LISTEN: "0.0.0.0:8001"
        KONG_STREAM_LISTEN: "0.0.0.0:8500"
        KONG_DATABASE: "off"
        KONG_DECLARATIVE_CONFIG: /usr/local/kong/pwd/kong.yml
        KONG_PROXY_ACCESS_LOG: /dev/stdout
        KONG_PROXY_ERROR_LOG: /dev/stderr
      volumes:
        - "$PWD:/usr/local/kong/pwd"
      ports:
        - "8000:8000/tcp"
        - "8001:8001/tcp"
        - "8500:8500/tcp"
        - "8443:8443/tcp"
        - "8444:8444/tcp"
      depends_on:
        - http-echo
        - tcp-echo
        - zipkin
      healthcheck:
        test: ["CMD", "kong", "health"]
        interval: 10s
        timeout: 10s
        retries: 10

This file declares four services: a Zipkin server, a simple HTTP echo service, a TCP echo service and Kong itself. Given that the Kong instance we are creating is DB-less, it needs a declarative configuration file. It should be called kong.yml and live in the same folder as docker-compose.yml:

# kong.yml

_format_version: "1.1"

# The hostnames `http-echo`, `tcp-echo` and `zipkin`
# used in the urls are resolved by docker-compose itself,
# so we avoid hardcoding IP addresses.
services:
  - name: http-echo-srv
    url: http://http-echo:8080
    routes:
      - hosts: [ "echo.dev" ]
    plugins:
      - name: zipkin
        config:
          http_endpoint: http://zipkin:9411/api/v2/spans
          sample_ratio: 1 # for testing purposes.

  - name: tcp-echo-srv
    url: tcp://tcp-echo:9000
    routes:
      - protocols: [ "tcp" ]
        destinations: [{ port: 8500 }]
    plugins:
      - name: zipkin
        protocols: [ "tcp" ]
        config:
          http_endpoint: http://zipkin:9411/api/v2/spans
          sample_ratio: 1 # for testing purposes.
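As a variation (not required for this walkthrough), Kong's declarative format also accepts a top-level plugins entry, so the Zipkin plugin could be enabled once for all services instead of being repeated per service. A sketch, reusing the same endpoint:

```yaml
# Alternative kong.yml fragment: enable zipkin globally
# instead of attaching it to each service individually.
plugins:
  - name: zipkin
    config:
      http_endpoint: http://zipkin:9411/api/v2/spans
      sample_ratio: 1 # for testing purposes; lower in production
```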

We should be able to start the four images with:

docker-compose up --force-recreate

Leave the console running and open a new one.

Infrastructure Testing

Let's test that all services are up and running.

First, open http://localhost:9411/zipkin/ with a browser. You should see the Zipkin web UI, proving that Zipkin is up and running.

The following steps are done with httpie:

http :8001/services

That's a call to Kong's Admin API (port 8001) using httpie. You should see the two services declared in kong.yml, http-echo-srv and tcp-echo-srv. This indicates that Kong is up.

http :8080

That's a straight GET request to the echo server (port 8080) using httpie. It should return HTTP/1.1 200 OK, indicating that the echo server is answering requests.

echo world | nc 127.0.0.1 9000

That's a straight TCP connection to the TCP echo server (port 9000). It should answer "hello world." You might need to close the connection with CTRL-C.

Tracing HTTP Traffic Through Kong

Let's start tracing with this command:

http :8000 host:echo.dev foo=bar

That's a POST request to the HTTP echo server proxied through Kong's proxy port (8000). The received answer should be very similar to the previous one, but with added response headers like Via: kong/2.1.0. You should also see some of the headers that the Zipkin plugin added to the request, like x-b3-traceid.

Let's generate some more traffic:

http :8000 host:echo.dev bar=baz
http :8000 host:echo.dev bar=qux

Now go back to the Zipkin UI in your browser (http://localhost:9411/zipkin/) and click on the magnifying glass icon (🔎; you might need to click on Try new Lens UI first). You should be able to see each request as a series of spans.

The Zipkin UI should display one trace per HTTP request. You may click on each individual trace to see more details, including information about how long each of the Kong phases took to execute.

Preservation of B3 Tracing Headers

http :8000 host:echo.dev x-b3-traceid:12345678901234567890123456789012

This request is similar to the ones made before, but it includes an HTTP header called x-b3-traceid. This kind of header is used when a request is handled by several microservices; it should remain unaltered as the request is passed from one service to another.

In this case, we can check that Kong respects this convention by inspecting the headers returned in the response. One of them should be:

Request Headers:
...
    x-b3-traceid=12345678901234567890123456789012

You should be able to view the trace directly in the Zipkin UI by following this link:

http://localhost:9411/zipkin/traces/12345678901234567890123456789012

Our Zipkin plugin currently supports several of these tracing headers, and more are planned for future releases!
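The propagation rule itself is simple enough to sketch: the trace ID travels unchanged, the caller's span ID becomes the parent span ID, and a fresh span ID is generated for each hop. The snippet below is illustrative pseudologic under those assumptions, not the plugin's actual code:

```python
import secrets

def propagate_b3(incoming: dict) -> dict:
    """Build outgoing B3 headers for the next hop of a traced request."""
    return {
        # The trace id identifies the whole request and never changes.
        "x-b3-traceid": incoming["x-b3-traceid"],
        # The span we received from becomes the parent of the new span.
        "x-b3-parentspanid": incoming["x-b3-spanid"],
        # Each hop gets a fresh 64-bit (16 hex chars) span id.
        "x-b3-spanid": secrets.token_hex(8),
        # The sampling decision is propagated as-is.
        "x-b3-sampled": incoming.get("x-b3-sampled", "1"),
    }

incoming = {
    "x-b3-traceid": "12345678901234567890123456789012",
    "x-b3-spanid": "a2fb4a1d1a96d312",
}
outgoing = propagate_b3(incoming)
```

Because every hop preserves the trace ID while linking its new span to its parent, a tracing backend can reassemble the full tree of spans for one request — which is what makes the "grouped under the same trace" behavior described above possible.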

Tracing TCP Traffic Through Kong

nc 127.0.0.1 8500

This opens a TCP connection through Kong's stream port (8500) to our TCP echo server using Netcat. Type names and press Enter to generate TCP traffic:

peter
hello peter
jannet
hello jannet
frank
hello frank

Press Ctrl-C to finish and close the connection — that is a requirement for the next step.

This should have created some traces on Zipkin — this time for TCP traffic. Open the Zipkin UI in your browser (http://localhost:9411/zipkin/) and click on the magnifying glass icon (🔎). You should see new traces with spans; note that the number of traces you see may vary.

As before, you should be able to see each individual trace in more detail by clicking on it.

TCP traffic, by its own nature, is not compatible with tracing headers.

Conclusion

This was just an overview of some of the features available via the Kong Zipkin Plugin when Kong is being used as an API gateway. The tracing headers feature, in particular, can help shed some light on problems occurring on multi-service setups. Stay tuned for other articles about this!
