
Transforming Kong Logs for Ingestion into Your Observability Stack

As a Solutions Engineer here at Kong, one question that frequently comes across my desk is “how can I transform a Kong logging plugin message into a format that my insert-observability-stack-here understands, e.g. ELK, Loki, Splunk, etc.?” In this blog, I’m going to show you how to convert a Kong logging payload into the Elastic Common Schema (ECS).

In order to accomplish this task, we’re going to be running Kong Gateway in Kubernetes and using two Kong plugins.

  1. Serverless Pre-function
  2. File Log

If you don’t already have an instance of Kong running in a Kubernetes cluster, connect to your cluster and run the following commands to get one in seconds.

% kubectl create ns kong
% kubectl apply -f https://bit.ly/kong-ingress-dbless
% kubectl get po -n kong -w

NAME                            READY   STATUS              RESTARTS   AGE
ingress-kong-7c4b795d5d-f2lpt   0/2     ContainerCreating   0          1s
ingress-kong-7c4b795d5d-f2lpt   0/2     Running             0          1s
ingress-kong-7c4b795d5d-f2lpt   1/2     Running             0          10s
ingress-kong-7c4b795d5d-f2lpt   2/2     Running             0          20s

See how to install Kong in Kubernetes for more information. Once you have an available instance of the Kong Gateway, continue.

First, create an empty Kubernetes manifest file called elastic-common-schema.yaml.

Next, let’s define our KongPlugin resources. The first plugin we will create is the serverless pre-function. From the Kong plugin docs, a serverless pre-function plugin:

Runs before other plugins run during each phase. The pre-function plugin can be applied to individual services, routes, or globally.

Since we’re logging, we’re concerned with the log phase or “context”. For more information on all available plugin contexts, read this doc.

Paste the below yaml in your manifest.

apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: pre-function
plugin: pre-function
config:
  log:
  - kong.ctx.shared.mystuff=kong.log.serialize()

The above resource definition creates a KongPlugin that runs at the start of the log phase, before other plugins’ log handlers execute. The kong.ctx.shared.mystuff=kong.log.serialize() line is a single Lua statement that stores the serialized logging payload in a shared context. From the Kong docs, a shared context is:

A [Lua] table that has the same lifetime as the current request. This table is shared between all plugins. It can be used to share data between several plugins in a given request.

For more info on shared contexts, see this doc.

The second plugin we will create is the file log plugin, which is the actual workhorse that does the transformation. Kong logging plugins (File Log, HTTP Log, TCP Log) use a common format. Copy and paste the below code under the pre-function in your manifest.

apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: file-log
plugin: file-log
config:
  path: /dev/stdout
  custom_fields_by_lua:
    consumer: return nil
    service: return nil
    tries: return nil
    latencies: return nil
    authenticated_entity: return nil
    route: return nil
    request: return nil
    response: return nil
    upstream_uri: return nil
    started_at: return nil
    workspace: return nil
    '@timestamp': return string.format('%10.0f', os.time())
    url: |
      local log_payload=kong.ctx.shared.mystuff
      return {original=log_payload['request']['uri']}
    http: |
      local log_payload=kong.ctx.shared.mystuff
      return {
        request={body={bytes=log_payload['request']['size']}},
        response={status_code=log_payload['response']['status']}
      }

The key to the transformation is the custom_fields_by_lua configuration. From the Kong docs, custom_fields_by_lua is:

A list of key-value pairs, where the key is the name of a log field and the value is a chunk of Lua code, whose return value sets or replaces the log field value.

The first few key-value pairs instruct the plugin to remove individual fields from the payload. For example, consumer: return nil tells the File Log plugin to remove the consumer field from the logged payload. The '@timestamp': return string.format('%10.0f', os.time()) line tells the File Log plugin to add the ECS field @timestamp to the logged payload.
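The plugin itself evaluates these chunks in Lua, but the removal/addition semantics can be sketched in Python (a hypothetical stand-in; apply_custom_fields and the sample payload are illustrative names, not Kong APIs):

```python
import time

# Hypothetical sketch of custom_fields_by_lua semantics: each entry's
# function runs against the payload, and its return value either
# deletes the field (None, like Lua's nil) or sets/overwrites it.
def apply_custom_fields(payload, custom_fields):
    result = dict(payload)
    for field, fn in custom_fields.items():
        value = fn(payload)
        if value is None:
            result.pop(field, None)   # `return nil` drops the field
        else:
            result[field] = value     # a non-nil return sets the field
    return result

sample = {"consumer": {"id": "abc"}, "client_ip": "10.0.0.1"}
fields = {
    "consumer": lambda p: None,                       # consumer: return nil
    "@timestamp": lambda p: "%10.0f" % time.time(),   # add ECS @timestamp
}
print(apply_custom_fields(sample, fields))
```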

The most complex use case is nesting objects that pull from existing payload data. In order to do this, we must return a Lua table from each field configuration. Let’s examine the following snippet.

http: |
  local log_payload=kong.ctx.shared.mystuff
  return {
    request={body={bytes=log_payload['request']['size']}},
    response={status_code=log_payload['response']['status']}
  }

This snippet adds the ECS http field to the payload. The first line defines the variable log_payload and assigns it the value we cached in the pre-function plugin, i.e. kong.ctx.shared.mystuff. The return block returns a nested table as defined by the ECS fields, http.request.body.bytes and http.response.status_code.
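The same nesting can be sketched in Python (hypothetical names; the plugin does this in Lua against the cached kong.ctx.shared.mystuff table):

```python
# Hypothetical sketch: build the nested ECS `http` object from a cached
# Kong log payload, mirroring the Lua table returned by the plugin config.
def build_ecs_http(log_payload):
    return {
        "request": {"body": {"bytes": log_payload["request"]["size"]}},
        "response": {"status_code": log_payload["response"]["status"]},
    }

cached = {"request": {"size": 93}, "response": {"status": 200}}
print(build_ecs_http(cached))
# → {'request': {'body': {'bytes': 93}}, 'response': {'status_code': 200}}
```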

Now that we have our Kong plugins defined, we need to put it all together with Kubernetes Deployment, Service and Ingress resources.

First, we need to deploy a sample service we can proxy with Kong. Execute the following command which will deploy pods and a service for httpbin inside your cluster.

% kubectl create ns myblog
% kubectl apply -f https://bit.ly/k8s-httpbin -n myblog
% kubectl get po -n myblog -w

NAME                       READY   STATUS    RESTARTS   AGE
httpbin-64cdb8c89c-7rxm2   1/1     Running   0          5s

All we have to do now is deploy an Ingress object which will create a route inside the Kong Gateway and associate all of the plugins we created previously.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: httpbin
  annotations:
    konghq.com/strip-path: 'true'
    kubernetes.io/ingress.class: kong
    konghq.com/plugins: file-log, pre-function
spec:
  rules:
  - http:
      paths:
      - backend:
          service:
            name: httpbin
            port:
              number: 80
        pathType: ImplementationSpecific
        path: /testing

Paste the above definition into your manifest and apply:

% kubectl apply -f elastic-common-schema.yaml -n myblog

Now, let’s invoke the service via Kong using curl. In my cluster:

% kubectl get svc -n kong

NAME                      TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)                      AGE
kong-proxy                LoadBalancer   …            …             80:31425/TCP,443:32037/TCP   41m
kong-validation-webhook   ClusterIP      …            <none>        443/TCP                      41m

% curl http://<kong-proxy-external-ip>/testing/anything

You should now see similar output from the httpbin service:

{
  "args": {},
  "data": "",
  "files": {},
  "form": {},
  "headers": {
    "Accept": "*/*",
    "Connection": "keep-alive",
    "Host": "",
    "User-Agent": "curl/7.79.1",
    "X-Forwarded-Host": "",
    "X-Forwarded-Path": "/testing/anything",
    "X-Forwarded-Prefix": "/testing"
  },
  "json": null,
  "method": "GET",
  "origin": "",
  "url": ""
}

Let’s examine the logs of the Kong proxy:

# get the kong pod name
% kubectl get po -n kong

NAME                            READY   STATUS    RESTARTS   AGE
ingress-kong-7c4b795d5d-pg2c6   2/2     Running   0          38m

Using the pod name from the above output, execute the following:

% kubectl logs ingress-kong-7c4b795d5d-pg2c6 -n kong -c proxy -f | grep "@timestamp"

In another terminal window, try executing the above curl command and watch the logs in a live tail.
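If you want to inspect a captured line offline, one option is to pretty-print it with Python’s built-in JSON tool (the echoed line below is a hypothetical example, not actual cluster output):

```shell
# Pretty-print one transformed log line copied from the live tail
echo '{"@timestamp":"1667427319","url":{"original":"/testing/anything"}}' \
  | python3 -m json.tool
```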

This is the original Kong logging payload:

{
  "latencies": {
    "request": 515,
    "kong": 58,
    "proxy": 457
  },
  "service": {
    "host": "httpbin.org",
    "created_at": 1614232642,
    "connect_timeout": 60000,
    "id": "167290ee-c682-4ebf-bdea-e49a3ac5e260",
    "protocol": "http",
    "read_timeout": 60000,
    "port": 80,
    "path": "/anything",
    "updated_at": 1614232642,
    "write_timeout": 60000,
    "retries": 5,
    "ws_id": "54baa5a9-23d6-41e0-9c9a-02434b010b25"
  },
  "request": {
    "querystring": {},
    "size": 138,
    "uri": "/log",
    "url": "http://localhost:8000/log",
    "headers": {
      "host": "localhost:8000",
      "accept-encoding": "gzip, deflate",
      "user-agent": "HTTPie/2.4.0",
      "accept": "*/*",
      "connection": "keep-alive"
    },
    "method": "GET"
  },
  "tries": [
    {
      "balancer_latency": 0,
      "port": 80,
      "balancer_start": 1614232668399,
      "ip": ""
    }
  ],
  "client_ip": "",
  "workspace": "54baa5a9-23d6-41e0-9c9a-02434b010b25",
  "upstream_uri": "/anything",
  "response": {
    "headers": {
      "content-type": "application/json",
      "date": "Thu, 25 Feb 2021 05:57:48 GMT",
      "connection": "close",
      "access-control-allow-credentials": "true",
      "content-length": "503",
      "server": "gunicorn/19.9.0",
      "via": "kong/",
      "x-kong-proxy-latency": "57",
      "x-kong-upstream-latency": "457",
      "access-control-allow-origin": "*"
    },
    "status": 200,
    "size": 827
  },
  "route": {
    "id": "78f79740-c410-4fd9-a998-d0a60a99dc9b",
    "paths": [ … ],
    "protocols": [ … ],
    "strip_path": true,
    "created_at": 1614232648,
    "ws_id": "54baa5a9-23d6-41e0-9c9a-02434b010b25",
    "request_buffering": true,
    "updated_at": 1614232648,
    "preserve_host": false,
    "regex_priority": 0,
    "response_buffering": true,
    "https_redirect_status_code": 426,
    "path_handling": "v0",
    "service": {
      "id": "167290ee-c682-4ebf-bdea-e49a3ac5e260"
    }
  },
  "started_at": 1614232668342
}

which gets transformed into:

{
  "@timestamp": "1667427319",
  "url": {
    "original": "/testing/anything"
  },
  "http": {
    "response": {
      "status_code": 200
    },
    "request": {
      "body": {
        "bytes": 93
      }
    }
  },
  "client_ip": ""
}

Congratulations, you have transformed a Kong log payload into an Elastic Common Schema format ready for ingestion! This pattern can be used to easily transform Kong logging messages into any format for ingestion with any observability stack.
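As a hypothetical illustration of that last point, the pattern generalizes to any target schema: keep a declarative map from target field paths to extractor functions and apply it to the serialized payload (a Python stand-in for the plugin’s Lua chunks; transform and ecs_map are illustrative names):

```python
# Hypothetical sketch: a declarative mapping from target-schema paths
# (dot-separated) to extractors over the Kong log payload.
def transform(payload, field_map):
    out = {}
    for path, extract in field_map.items():
        node = out
        *parents, leaf = path.split(".")
        for key in parents:
            node = node.setdefault(key, {})  # create nested objects as needed
        node[leaf] = extract(payload)
    return out

ecs_map = {
    "url.original": lambda p: p["request"]["uri"],
    "http.response.status_code": lambda p: p["response"]["status"],
}
payload = {"request": {"uri": "/testing/anything"}, "response": {"status": 200}}
print(transform(payload, ecs_map))
# → {'url': {'original': '/testing/anything'}, 'http': {'response': {'status_code': 200}}}
```

Swapping in a different field map is all it takes to target another schema.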
