Transforming Kong Logs for Ingestion into Your Observability Stack
Damon Sorrentino
As a Solutions Engineer here at Kong, one question that frequently comes across my desk is: "How can I transform a Kong logging plugin message into a format that my insert-observability-stack-here (e.g., ELK, Loki, Splunk) understands?"
In this blog, I'm going to show you how to convert a Kong logging payload to the Elastic Common Schema (ECS).
To accomplish this, we'll run Kong Gateway in Kubernetes and use two Kong plugins.
If you don't already have an instance of Kong running in a Kubernetes cluster, connect to your cluster and run the following commands to get one in seconds.
% kubectl create ns kong
% kubectl apply -f https://bit.ly/kong-ingress-dbless
% kubectl get po -n kong -w
NAME READY STATUS RESTARTS AGE
ingress-kong-7c4b795d5d-f2lpt 0/2 ContainerCreating 0 1s
ingress-kong-7c4b795d5d-f2lpt 0/2 Running 0 1s
ingress-kong-7c4b795d5d-f2lpt 1/2 Running 0 10s
ingress-kong-7c4b795d5d-f2lpt 2/2 Running 0 20s
^C
First, create an empty Kubernetes manifest file called elastic-common-schema.yaml.
Next, let’s define our KongPlugin resources. The first plugin we will create is the serverless pre-function. From the Kong plugin docs, a serverless pre-function plugin:
Runs before other plugins run during each phase. The pre-function plugin can be applied to individual services, routes, or globally.
Since we’re logging, we’re concerned with the log phase or “context”. For more information on all available plugin contexts, read this doc.
Paste the below yaml in your manifest.
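The original manifest isn't reproduced here, but based on the description that follows, a minimal sketch of the pre-function KongPlugin might look like this (the resource name `collect-log-payload` is illustrative; the KongPlugin must live in the same namespace as the resources it's applied to):

```yaml
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: collect-log-payload   # illustrative name
plugin: pre-function
config:
  # Run this Lua chunk in the log phase, before other plugins' log handlers
  log:
    - kong.ctx.shared.mystuff = kong.log.serialize()
```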
The above resource definition creates a KongPlugin that runs in the log phase, before each in-scope plugin's own log handler. The statement kong.ctx.shared.mystuff = kong.log.serialize() is a single line of Lua code that stores the logging payload in a shared context. From the Kong docs, a shared context is:
A [Lua] table that has the same lifetime as the current request. This table is shared between all plugins. It can be used to share data between several plugins in a given request.
The second plugin is a Kong logging plugin (e.g., file-log or http-log). The key to the transformation is its custom_fields_by_lua configuration. From the Kong docs, custom_fields_by_lua is:
A list of key-value pairs, where the key is the name of a log field and the value is a chunk of Lua code, whose return value sets or replaces the log field value.
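A sketch of what the logging plugin's KongPlugin resource could look like follows. This is an assumption-heavy example: the name `ecs-file-log`, the choice of file-log writing to /dev/stdout, and the exact field mappings are inferred from the sample output later in this post, and the nil returns assume (per the plugin docs) that returning nil removes a default log field.

```yaml
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: ecs-file-log          # illustrative name
plugin: file-log
config:
  path: /dev/stdout           # write logs to the proxy container's stdout
  custom_fields_by_lua:
    # Build ECS-style fields from the payload the pre-function stashed
    # in kong.ctx.shared earlier in the log phase.
    "@timestamp": "return string.format('%d', math.floor(kong.ctx.shared.mystuff.started_at / 1000))"
    client_ip: "return kong.ctx.shared.mystuff.client_ip"
    url: "return { original = kong.ctx.shared.mystuff.request.uri }"
    http: "return { request = { body = { bytes = kong.ctx.shared.mystuff.request.size } }, response = { status_code = kong.ctx.shared.mystuff.response.status } }"
    # Returning nil drops Kong's default fields so only ECS fields remain.
    latencies: "return nil"
    service: "return nil"
    route: "return nil"
    request: "return nil"
    response: "return nil"
    tries: "return nil"
    started_at: "return nil"
    upstream_uri: "return nil"
    workspace: "return nil"
```

With both plugins defined in your manifest, deploy a sample upstream to test against.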
% kubectl create ns myblog
% kubectl apply -f https://bit.ly/k8s-httpbin -n myblog
% kubectl get po -n myblog -w
NAME READY STATUS RESTARTS AGE
httpbin-64cdb8c89c-7rxm2 1/1 Running 0 5s
^C
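To route traffic to httpbin and attach both plugins, you'll also need an Ingress. A sketch, assuming the httpbin Service from the manifest above is named `httpbin` on port 80, and using the plugin names from your own manifest in the konghq.com/plugins annotation:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: httpbin
  namespace: myblog
  annotations:
    # Use the names you gave your two KongPlugin resources
    konghq.com/plugins: collect-log-payload, ecs-file-log
spec:
  ingressClassName: kong
  rules:
    - http:
        paths:
          - path: /testing
            pathType: Prefix
            backend:
              service:
                name: httpbin
                port:
                  number: 80
```

With the route in place, send a test request through the proxy (e.g., curl http://$PROXY_IP/testing/anything), then tail the proxy logs.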
% kubectl logs ingress-kong-7c4b795d5d-pg2c6 -n kong -c proxy -f | grep "@timestamp"
# You should see something similar to the below
{"@timestamp":"1667427319","url":{"original":"/testing/anything"},"http":{"response":{"status_code":200},"request":{"body":{"bytes":93}}},"client_ip":"10.48.0.1"}
For comparison, here is an example of Kong's default, untransformed log payload:
{"latencies":{"request":515,"kong":58,"proxy":457},"service":{"host":"httpbin.org","created_at":1614232642,"connect_timeout":60000,"id":"167290ee-c682-4ebf-bdea-e49a3ac5e260","protocol":"http","read_timeout":60000,"port":80,"path":"/anything","updated_at":1614232642,"write_timeout":60000,"retries":5,"ws_id":"54baa5a9-23d6-41e0-9c9a-02434b010b25"},"request":{"querystring":{},"size":138,"uri":"/log","url":"http://localhost:8000/log","headers":{"host":"localhost:8000","accept-encoding":"gzip, deflate","user-agent":"HTTPie/2.4.0","accept":"*/*","connection":"keep-alive"},"method":"GET"},"tries":[{"balancer_latency":0,"port":80,"balancer_start":1614232668399,"ip":"18.211.130.98"}],"client_ip":"192.168.144.1","workspace":"54baa5a9-23d6-41e0-9c9a-02434b010b25","upstream_uri":"/anything","response":{"headers":{"content-type":"application/json","date":"Thu, 25 Feb 2021 05:57:48 GMT","connection":"close","access-control-allow-credentials":"true","content-length":"503","server":"gunicorn/19.9.0","via":"kong/2.2.1.0-enterprise-edition","x-kong-proxy-latency":"57","x-kong-upstream-latency":"457","access-control-allow-origin":"*"},"status":200,"size":827},"route":{"id":"78f79740-c410-4fd9-a998-d0a60a99dc9b","paths":["/log"],"protocols":["http"],"strip_path":true,"created_at":1614232648,"ws_id":"54baa5a9-23d6-41e0-9c9a-02434b010b25","request_buffering":true,"updated_at":1614232648,"preserve_host":false,"regex_priority":0,"response_buffering":true,"https_redirect_status_code":426,"path_handling":"v0","service":{"id":"167290ee-c682-4ebf-bdea-e49a3ac5e260"}},"started_at":1614232668342}
Congratulations, you have transformed a Kong log payload into an Elastic Common Schema format ready for ingestion! This pattern can be used to easily transform Kong logging messages into any format for ingestion with any observability stack.