
August 28, 2018

Enabling Tracing with Zipkin – Kong CE 0.14.1 Feature Highlight

Kong recently released CE 0.14.1 to build upon CE 0.14 with several improvements and minor fixes. As part of our series helping you get up to speed on our newest features, we want to dive into another important plugin we’ve created to improve your understanding of your infrastructure – Kong’s Zipkin Plugin.

As organizations move toward microservices, understanding network latency becomes critical to maintaining high performance. Microservices talk to each other over the network, and Kong’s new Zipkin plugin lets Kong users troubleshoot latency problems within their architecture by tracking how long API calls take.

By using the Zipkin plugin to measure latency, Kong users can identify issues within their services and expedite debugging. When enabled, the plugin traces requests in a Zipkin-compatible way – propagating distributed tracing spans and reporting them to a Zipkin server. The plugin is built around an OpenTracing core and uses the opentracing-lua library to collect timing data for requests in each of Kong’s request-processing phases.
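To give a sense of what gets reported, here is a rough sketch of a span in the Zipkin v2 JSON format, posted to the same collector endpoint used in the configuration examples below. The field values are invented for illustration, and the exact names and tags the plugin attaches may differ – the plugin batches and sends payloads of this shape for you, so you never construct them by hand:

$ curl -X POST http://your.zipkin.collector:9411/api/v2/spans \
--header "Content-Type: application/json" \
--data '[{
  "traceId": "80f198ee56343ba864fe8b2a57d3eff7",
  "id": "e457b5a2e4d86bd1",
  "name": "get",
  "kind": "SERVER",
  "timestamp": 1535414400000000,
  "duration": 23000,
  "localEndpoint": { "serviceName": "kong" },
  "tags": { "http.status_code": "200" }
}]'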

The plugin implements Zipkin’s protocols using opentracing-lua-compatible “extractors”, “injectors”, and “reporters”. When a request comes in, the extractor collects its tracing information; if no trace ID is present, the plugin generates one probabilistically based on the sample_ratio configuration value. When the request is sent upstream, the injector adds the trace information to the outbound request. Collected data is then sent in batches to a Zipkin server using the Zipkin v2 API. The plugin follows Zipkin’s “B3” specification for which HTTP headers to use, and it also supports Jaeger-style uberctx- headers for propagating baggage. For instance, we might configure the Zipkin plugin on a Service by making the following request:

$ curl -X POST http://kong:8001/services/{service}/plugins \
--data "name=zipkin"  \
--data "config.http_endpoint=http://your.zipkin.collector:9411/api/v2/spans" \
--data "config.sample_ratio=0.001"

Or we could enable it on a Consumer like so:

$ curl -X POST http://kong:8001/plugins \
--data "name=zipkin" \    --data "consumer_id={consumer_id}"  \
--data "config.http_endpoint=http://your.zipkin.collector:9411/api/v2/spans" \
--data "config.sample_ratio=0.001"

Through tracing and Kong’s Zipkin plugin, organizations can pinpoint latency issues and optimize performance. As architectures grow more complex and services increasingly communicate over the network, identifying where slowdowns occur can make a big difference in overall system performance. With Kong, every request can be traced and understood, reducing the time spent investigating issues and improving workflows.
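To see what has been captured, you can also query the Zipkin server directly. As a quick sketch against the same placeholder collector address used above (the service name Kong reports under may vary with your setup), the Zipkin v2 API lets you list traced services and fetch recent traces:

$ curl http://your.zipkin.collector:9411/api/v2/services
$ curl "http://your.zipkin.collector:9411/api/v2/traces?serviceName=kong&limit=10"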

At Kong, we’re committed to equipping our users with the best possible solutions for their microservice and API needs. To test drive the Zipkin plugin as part of our Enterprise Edition, start your free trial today! And of course, thank you to our open source contributors, core maintainers (@thibaultcha, @hisham, @bungle, @kikito, @james_callahan), and other Kong Inc. employees who all made huge contributions to this release!


Happy Konging!