Maintain Your Kong Gateway Audit Log Trail in AWS CloudTrail Lake
Danny Freese
Senior Software Engineer on Konnect, Kong
A critical and challenging requirement for many organizations is meeting audit and compliance obligations. The goal of compliance is to secure business processes, protect sensitive data, and monitor for unauthorized activity or breaches.
AWS CloudTrail Lake now enables customers to record user activity from any source — application, infrastructure, and platform including virtual machines and containers — into CloudTrail Lake, making this a single source for immutable storage and query of audit logs (AWS News Blog). CloudTrail Lake records events in a standardized schema making it easier for end users to consume the data and quickly respond to any security incident or audit request.
By providing north-south security, Kong Enterprise is the bridge between backend applications and the outside world. Because of this core role, it is vital that Kong support the compliance efforts that directly affect business security. Now, with the launch of partner integrations for AWS CloudTrail Lake, Kong Enterprise audit logs can be published, stored, and queried alongside AWS and non-AWS activity events in the AWS console.
How does the integration work?
Audit Logging is an enterprise feature with Kong Enterprise. When enabled on the Global Control Plane, both Request Audits and Database Audits are accessible through the Kong Admin API. Request Audits provide audit log entries for each HTTP request made to the Admin API. Database Audits provide audit log entries for each database creation, update, or deletion. More detailed information can be found at Kong Enterprise Admin API Audit Log.
For the Kong-CloudTrail integration, the customer first creates a channel for Kong to deliver events to their event store. The channel should be located in the same AWS Region where the Kong Global Control Plane resides. See the documentation, Create an integration with an event source outside of AWS, for more information.
Once the channel has been created, the next step is to set up the additional infrastructure. This deploys a Lambda function and ElastiCache for Redis into the existing VPC where the Kong Global Control Plane resides. The Lambda function calls the /audit/requests endpoint to retrieve request audit log entries; duplicates are then removed by checking each entry against the keys already recorded in Redis before the logs are submitted to CloudTrail Lake. Each audit log entry in Kong has a defined TTL; when the TTL is reached, the entry is deleted from Kong and similarly expires in Redis. Finally, AWS CloudWatch schedules the Lambda function so that it processes audit logs hourly.
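A minimal sketch of that deduplication step, using a plain dict with expiry timestamps in place of Redis (the key names and TTL handling here are illustrative, not the integration's actual implementation):

```python
import time


def filter_new_entries(entries, seen, now=None):
    """Return only entries whose request_id has not been seen, and record
    each new id with an expiry matching the entry's TTL -- mirroring how
    Redis keys lapse alongside Kong's own audit log TTL."""
    now = time.time() if now is None else now
    # Drop expired ids first, like Redis key expiry would.
    for key in [k for k, exp in seen.items() if exp <= now]:
        del seen[key]
    fresh = []
    for entry in entries:
        key = entry["request_id"]
        if key not in seen:
            seen[key] = now + entry.get("ttl", 3600)
            fresh.append(entry)
    return fresh


seen = {}
batch1 = [{"request_id": "a", "ttl": 60}, {"request_id": "b", "ttl": 60}]
batch2 = [{"request_id": "b", "ttl": 60}, {"request_id": "c", "ttl": 60}]
print(len(filter_new_entries(batch1, seen, now=0)))   # → 2 (both new)
print(len(filter_new_entries(batch2, seen, now=10)))  # → 1 (only "c" is new)
```

Only the entries that survive this filter would then be submitted to the CloudTrail Lake channel.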
Kong CloudTrail Lake integration deployment
At a high level, the requirements for kicking off this integration are:
- Create a channel ARN to your event store. The channel should be created in the same AWS Region where the Kong Global Control Plane exists. For more information, refer to the documentation, Create an integration with an event source outside of AWS.
- Enable and configure audit logging on the Kong Enterprise Control Plane.
- Build out the additional AWS infrastructure with the Terraform script provided by Kong.
Enable and configure audit logging on the Kong Global Control Plane
Audit logging is disabled by default. It is configurable via the Kong configuration (e.g. kong.conf):
```bash
audit_log = on # audit logging is enabled
```
This can generate more audit logs than you are interested in, so it may be desirable to ignore certain requests. To this end, the audit_log_ignore_methods and audit_log_ignore_paths configuration options are provided:

```bash
audit_log_ignore_methods = GET,OPTIONS
# do not generate an audit log entry for GET or OPTIONS HTTP requests

audit_log_ignore_paths = /foo,/status,^/services,/routes$,/one/.+/two,/upstreams/
# do not generate an audit log entry for requests that match the above regular expressions
```
With the channel ARN created, and the audit logging enabled on Kong Enterprise, you are ready to deploy the additional AWS components to complete the integration.
Set up the Terraform tfvars
Here is a brief overview of the parameters required and optional for the Terraform tfvars file. A more detailed description can be found in the Kong CloudTrail Integration GitHub Documentation.
- existing_vpc and existing_subnet_ids: required to deploy ElastiCache and the Lambda function.
- security_group: a security group will be created that allows the Lambda function to reach Kong, Redis, and CloudTrail Lake.
- lambda_env: configurable environment variables on the Lambda function. Many are optional; see the GitHub documentation for details.
- image: the kong/cloudtrails-integration image, publicly available on Docker Hub.
- resource_name: the name applied to all provisioned resources.
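Putting those parameters together, a my-vars.tfvars file might look roughly like this. All values below are placeholders, and the lambda_env variable names are hypothetical; check the GitHub documentation for the real set:

```hcl
existing_vpc        = "vpc-0123456789abcdef0"
existing_subnet_ids = ["subnet-aaaa1111", "subnet-bbbb2222"]
resource_name       = "kong-ct-integration"
image               = "kong/cloudtrails-integration:latest"

lambda_env = {
  # Hypothetical variable names -- consult the repo docs for the actual keys.
  kong_admin_api = "https://mykong-gateway.com:8001"
  channel_arn    = "arn:aws:cloudtrail:us-east-1:123456789012:channel/EXAMPLE"
}
```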
With the tfvars file ready, the Terraform execution plan can be created and applied. Navigate to terraform/ in the repo and spin up the additional infrastructure:

```bash
terraform init
terraform plan -out=plan.out -var-file 'my-vars.tfvars'
terraform apply "plan.out"
```
Verify the integration was successful
Below is a quick review of how to validate that the infrastructure installed successfully and how to view the Kong audit logs in CloudTrail Lake.
AWS Infrastructure
AWS Lambda - navigate to the AWS Lambda console and validate that a "kong-ct-integration" Lambda function exists, then review its environment variables.
AWS CloudWatch - navigate to the AWS CloudWatch (or AWS EventBridge) console, open Rules, and validate that a "kong-ct-integration" rule exists; this rule schedules the Lambda function.
AWS ElastiCache - navigate to the AWS ElastiCache console, open Redis clusters, and validate that a "kong-ct-integration" cluster exists.
AWS CloudTrail Lake logs
With the infrastructure validated, the Lambda function will be triggered hourly by CloudWatch. Once it has run, logs should be present in the CloudTrail event store. To view them, navigate in the AWS console to CloudTrail → Lake, and in the Editor tab query the logs in the event data store.
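For example, a query along these lines should surface the Kong entries. The event data store ID is a placeholder you must substitute, and the field paths are an assumption based on how CloudTrail Lake exposes integration events under the eventData column; adjust them to match your store's schema:

```sql
SELECT eventData.eventname, eventData.eventtime, eventData.sourceipaddress
FROM <your-event-data-store-id>
WHERE eventData.eventsource = 'KongGatewayEnterprise'
ORDER BY eventData.eventtime DESC
```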
What data are we submitting to CloudTrail Lake?
Let's review the mapping of an audit log entry to eventData in the data store.
In the event store, the eventData field is populated with the data retrieved from the Kong request audit API. Below is a sample Kong audit log entry transformed into the event data object. What is important to know is that no information from the original audit log entry is lost: it is all mapped to the eventData object.
```
{
  version=2.8.1.1-enterprise-edition,
  useridentity={
    type=,
    principalid=anonymous,
    details={RBAC=Anonymous User on Kong Gateway: Please Enable RBAC on Kong Gateway}
  },
  useragent=null,
  eventsource=KongGatewayEnterprise,
  eventname=GET /event-hooks/,
  eventtime=2022-09-15 13:59:38.000,
  uid=8siS4h7qIzhn9op2XNUx4WnnFO1nxJj5,
  requestparameters={queryParameters=size=100},
  responseelements=null,
  errorcode=null,
  errormessage=null,
  sourceipaddress=12.34.56.789,
  recipientaccountid=123456789,
  additionaleventdata={
    workspace=cca82c73-2365-441b-8860-9e074d93b205,
    konghostname=http://mykong-gateway.com:8001,
    method=GET,
    signature=,
    ttl=2511498,
    status=200
  }
},
...
```
Conclusion
With the Kong CloudTrail Lake integration, the objective is to simplify compliance efforts by hosting the Kong Gateway audit logs in AWS, alongside the rest of your AWS infrastructure and event activity. You can learn more about this integration at the GitHub repo, Kong CloudTrail Integration, and at AWS CloudTrail.