Transforming Kong Logs for Ingestion into Your Observability Stack
As a Solutions Engineer here at Kong, one question that frequently comes across my desk is “how can I transform a Kong logging plugin message into a format that my insert-observability-stack-here understands, e.g. ELK, Loki, Splunk, etc.?” In this blog, I’m going to show you how to convert a Kong logging payload to the Elastic Common Schema.
To accomplish this, we’re going to run Kong Gateway in Kubernetes and use two Kong plugins: the serverless pre-function plugin and the File Log plugin.
First, create an empty Kubernetes manifest file called elastic-common-schema.yaml.
Next, let’s define our KongPlugin resources. The first plugin we will create is the serverless pre-function. From the Kong plugin docs, a serverless pre-function plugin:
Runs before other plugins run during each phase. The pre-function plugin can be applied to individual services, routes, or globally.
Since we’re logging, we’re concerned with the log phase or “context”. For more information on all available plugin contexts, read this doc.
Paste the below yaml in your manifest.
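A minimal sketch of that resource is shown here. The plugin name pre-function and the log phase key come from the Kong plugin docs; the resource name pre-function-logging is illustrative:

```yaml
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: pre-function-logging
plugin: pre-function
config:
  log:
  # Cache the serialized log payload in the shared context
  # so later plugins in the log phase can read it
  - kong.ctx.shared.mystuff = kong.log.serialize()
```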
The above resource definition creates a KongPlugin that executes before the logging phase of each plugin defined in scope. The kong.ctx.shared.mystuff=kong.log.serialize() is a single line of Lua code that stores the logging payload into a shared context. From the Kong docs, a shared context is:
A [Lua] table that has the same lifetime as the current request. This table is shared between all plugins. It can be used to share data between several plugins in a given request.
The second plugin we will create is the file log plugin, which is the actual workhorse that does the transformation. Kong logging plugins (File Log, HTTP Log, TCP Log) use a common format. Copy and paste the below code under the pre-function in your manifest.
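A sketch of the File Log plugin resource follows, assuming the name file-log-ecs and logging to stdout; the custom_fields_by_lua entries are examples of removing a field and adding an ECS field:

```yaml
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: file-log-ecs
plugin: file-log
config:
  path: /dev/stdout
  custom_fields_by_lua:
    # Remove a field that has no ECS equivalent
    consumer: return nil
    # Add the ECS @timestamp field as epoch seconds
    '@timestamp': return string.format('%10.0f', os.time())
```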
The key to doing the transformation is the custom_fields_by_lua configuration. From the Kong docs, the custom_fields_by_lua is:
A list of key-value pairs, where the key is the name of a log field and the value is a chunk of Lua code, whose return value sets or replaces the log field value.
The first few key-value pairs instruct the plugin to remove individual fields from the payload. For example, consumer: return nil tells the File Log plugin to remove the consumer field from the logged payload. The '@timestamp': return string.format('%10.0f', os.time()) line tells the File Log plugin to add the ECS @timestamp field to the logged payload.
The most complex use case is nesting objects that pull from existing payload data. In order to do this, we must return a Lua table from each field configuration. Let’s examine the following snippet.
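A sketch of such a field configuration, written as the Lua chunk for an http key under custom_fields_by_lua (request.size and response.status are fields of Kong’s serialized log payload):

```yaml
    http: |
      local log_payload = kong.ctx.shared.mystuff
      return {
        request = {
          body = { bytes = log_payload.request.size }
        },
        response = {
          status_code = log_payload.response.status
        }
      }
```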
This snippet adds the ECS http field to the payload. The first line defines the variable log_payload and assigns it the value we cached in the pre-function plugin, i.e. kong.ctx.shared.mystuff. The return block returns a nested table as defined by the ECS fields, http.request.body.bytes and http.response.status_code.
Now that we have our Kong plugins defined, we need to put it all together with Kubernetes Deployment, Service and Ingress resources.
First, we need to deploy a sample service we can proxy with Kong. Execute the following command which will deploy pods and a service for httpbin inside your cluster.
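One way to do this is a quick imperative deployment using the public kennethreitz/httpbin image (your cluster may use a different method or manifest):

```shell
kubectl create deployment httpbin --image=kennethreitz/httpbin
kubectl expose deployment httpbin --port=80
```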
All we have to do now is deploy an Ingress object which will create a route inside the Kong Gateway and associate all of the plugins we created previously.
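A sketch of that Ingress follows, assuming plugin resources named pre-function-logging and file-log-ecs and a Service named httpbin; the konghq.com/plugins annotation is what binds the plugins to the route:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: httpbin-ingress
  annotations:
    # Attach both KongPlugin resources to this route
    konghq.com/plugins: pre-function-logging, file-log-ecs
spec:
  ingressClassName: kong
  rules:
  - http:
      paths:
      - path: /httpbin
        pathType: Prefix
        backend:
          service:
            name: httpbin
            port:
              number: 80
```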
Paste the above definition into your manifest and apply:
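Assuming the manifest file created earlier:

```shell
kubectl apply -f elastic-common-schema.yaml
```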
Now, let’s invoke the service via Kong using curl. In my cluster:
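For example, with a placeholder for your Kong proxy address (the external IP or hostname will differ per cluster):

```shell
curl -i http://<kong-proxy-external-ip>/httpbin/get
```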
You should now see similar output from the httpbin service:
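The body should resemble httpbin’s standard echo response, abbreviated here with placeholder values:

```json
{
  "args": {},
  "headers": {
    "Host": "...",
    "User-Agent": "curl/..."
  },
  "origin": "...",
  "url": "http://.../get"
}
```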
Let’s examine the logs of the Kong proxy:
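First, find the Kong proxy pod, assuming Kong is installed in the kong namespace:

```shell
kubectl get pods -n kong
```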
In the above example, execute the following:
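Substituting your own pod name:

```shell
kubectl logs -f <kong-proxy-pod-name> -n kong -c proxy
```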
In another terminal window, try executing the above curl command and watch the logs in a live tail.
This is the original Kong logging payload:
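An abbreviated sketch of Kong’s serialized log payload with illustrative values (the full payload contains many more fields; see the Kong log serializer docs):

```json
{
  "request": {
    "method": "GET",
    "uri": "/httpbin/get",
    "size": 0
  },
  "response": {
    "status": 200,
    "size": 512
  },
  "consumer": null,
  "latencies": {
    "kong": 1,
    "proxy": 5,
    "request": 6
  },
  "client_ip": "10.0.0.1"
}
```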
which gets transformed into:
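An abbreviated sketch of the resulting ECS-shaped document, again with illustrative values:

```json
{
  "@timestamp": "1650000000",
  "http": {
    "request": {
      "body": { "bytes": 0 }
    },
    "response": {
      "status_code": 200
    }
  }
}
```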
Congratulations, you have transformed a Kong log payload into an Elastic Common Schema format ready for ingestion! This pattern can be used to easily transform Kong logging messages into any format for ingestion with any observability stack.