Video
End-to-End Tracing with OpenTelemetry and Kong
Instrument Kong Gateway, Kafka, and LLM calls using W3C Trace Context and the OpenTelemetry Collector.
Learn how to implement end-to-end distributed tracing with OpenTelemetry across Kong Gateway, Kafka, and AI workloads. We cover W3C Trace Context propagation, the OTel Collector pipeline, and how to export traces with Kong’s OpenTelemetry plugin.
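The W3C Trace Context mentioned above travels between services as a `traceparent` HTTP header with a fixed shape: `version-trace_id-parent_id-flags`. As a minimal illustrative sketch (the helper names here are ours, not from the video), parsing and validating one looks like:

```python
import re

# traceparent is four dash-separated lowercase-hex fields:
# 2-char version, 32-char trace-id, 16-char parent-id, 2-char trace-flags.
TRACEPARENT_RE = re.compile(
    r"^(?P<version>[0-9a-f]{2})"
    r"-(?P<trace_id>[0-9a-f]{32})"
    r"-(?P<parent_id>[0-9a-f]{16})"
    r"-(?P<flags>[0-9a-f]{2})$"
)

def parse_traceparent(header):
    """Return the traceparent fields as a dict, or None if invalid."""
    m = TRACEPARENT_RE.match(header.strip())
    if m is None:
        return None
    fields = m.groupdict()
    # The spec forbids all-zero trace-id and parent-id values.
    if fields["trace_id"] == "0" * 32 or fields["parent_id"] == "0" * 16:
        return None
    return fields

def is_sampled(fields):
    """The low bit of trace-flags is the 'sampled' flag."""
    return bool(int(fields["flags"], 16) & 0x01)
```

In practice an OpenTelemetry SDK propagator does this for you on HTTP and gRPC calls; for event-driven hops the same value is typically carried in a Kafka message header so consumers can continue the trace.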
What you’ll learn:
- Why traces matter vs. metrics and logs
- W3C Trace Context across HTTP/gRPC and event-driven systems
- Kafka producer/consumer tracing and monitoring consumer lag
- Configuring the Kong OpenTelemetry plugin (traces/logs endpoints)
- OTel Collector: receivers, processors (PII scrubbing), exporters
- Auto vs. manual instrumentation trade-offs
- LLM observability: latency, token usage, model performance
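As a rough sketch of the Kong plugin configuration covered above, in declarative (decK) form — the collector hostname is a placeholder, 4318 is the default OTLP/HTTP port, and the separate `traces_endpoint`/`logs_endpoint` fields assume a recent Kong Gateway 3.x release:

```yaml
plugins:
  - name: opentelemetry
    config:
      # placeholder collector address; adjust to your deployment
      traces_endpoint: http://otel-collector:4318/v1/traces
      logs_endpoint: http://otel-collector:4318/v1/logs
      resource_attributes:
        service.name: kong-gateway
```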
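The Collector pipeline in the list above (receivers → processors → exporters) can likewise be sketched in Collector YAML. This is an illustrative config, not the one from the video: the scrubbed attribute key is an example of PII removal, and the `debug` exporter stands in for a real backend:

```yaml
receivers:
  otlp:
    protocols:
      grpc:
      http:

processors:
  # drop attributes that may carry PII before export (key is an example)
  attributes/scrub:
    actions:
      - key: user.email
        action: delete
  batch:

exporters:
  debug:

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [attributes/scrub, batch]
      exporters: [debug]
```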