A critical and challenging requirement for many organizations is meeting audit and compliance obligations. The goal of compliance is to secure business processes and sensitive data, and to monitor for unauthorized activities or breaches.
AWS CloudTrail Lake now enables customers to record user activity from any source — application, infrastructure, and platform, including virtual machines and containers — making it a single source for immutable storage and querying of audit logs (AWS News Blog). CloudTrail Lake records events in a standardized schema, making it easier for end users to consume the data and quickly respond to any security incident or audit request.
By providing north-south security, Kong Enterprise is the bridge between backend applications and the outside world. Because of this core role, it is vital that it support compliance efforts that have a direct impact on business security. Now, with the launch of partner integrations for AWS CloudTrail Lake, Kong Enterprise Audit Logs can be published, stored, and queried together with AWS and non-AWS activity events within the AWS console.
Audit logging is a Kong Enterprise feature. When enabled on the Global Control Plane, both Request Audits and Database Audits are accessible through the Kong Admin API. Request Audits provide audit log entries for each HTTP request made to the Admin API. Database Audits provide audit log entries for each database creation, update, or deletion. More detailed information can be found at Kong Enterprise Admin API Audit Log.
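As a quick way to inspect these entries directly, you can call the audit endpoints on the Admin API. The following is a hypothetical sketch using Python's requests library; the hostname and admin token are placeholders, and it assumes RBAC is enabled so that a Kong-Admin-Token header is required.

# Hypothetical example of reading Kong Enterprise audit logs from the Admin API.
# The hostname and token below are placeholders.
import requests

ADMIN_API = "https://kong-cp.example.com:8444"
HEADERS = {"Kong-Admin-Token": "my-admin-token"}

# Request Audits: one entry per HTTP request made to the Admin API.
request_audits = requests.get(f"{ADMIN_API}/audit/requests", headers=HEADERS).json()

# Database Audits: one entry per database create, update, or delete.
object_audits = requests.get(f"{ADMIN_API}/audit/objects", headers=HEADERS).json()

print(len(request_audits["data"]), "request audit entries")
print(len(object_audits["data"]), "database audit entries")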
For the Kong-CloudTrail integration, you first create a channel that Kong uses to deliver events to your event store. The channel should be located in the same AWS region where the Kong Global Control Plane resides. See the documentation, Create an integration with an event source outside of AWS, for more information.
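If you prefer to script this step rather than use the console, the channel can also be created with the AWS SDK. Below is a minimal boto3 sketch; the channel name and the event data store ARN are placeholders, and the Custom source value is an assumption based on the AWS documentation for event sources outside of AWS.

# Hypothetical boto3 sketch for creating the CloudTrail Lake channel.
# The channel name and event data store ARN are placeholders.
import boto3

# Use the same region as the Kong Global Control Plane.
cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")

channel = cloudtrail.create_channel(
    Name="kong-audit-channel",
    Source="Custom",  # event source outside of AWS
    Destinations=[
        {
            "Type": "EVENT_DATA_STORE",
            "Location": "arn:aws:cloudtrail:us-east-1:111122223333:eventdatastore/EXAMPLE-f852-4e8f-8bd1-bcf6cb78eb15",
        }
    ],
)

# The returned ARN is what the integration uses as channel_arn / CHANNEL_ARN.
print(channel["ChannelArn"])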
Once the channel has been created, the next step is to set up the additional infrastructure. This deploys a Lambda function and ElastiCache for Redis into the existing VPC where the Kong Global Control Plane resides. The Lambda function calls the /audit/requests endpoint to retrieve Request Audit Log entries; duplicates are then removed by evaluating each audit log entry against existing keys logged in Redis before submitting the logs to CloudTrail Lake. Each audit log entry in Kong has a defined TTL. When the TTL is reached, the entry is deleted from Kong and similarly expires in Redis. Finally, Amazon CloudWatch is used to schedule the Lambda function so that it processes audit logs hourly.
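To make that flow concrete, here is a minimal, hypothetical sketch of the retrieve-and-deduplicate step. It is not the code shipped in the integration repo; the environment variable names mirror the tfvars sample later in this post, except REDIS_HOST, which is assumed here. The reshaping of each entry into CloudTrail Lake's event schema and the PutAuditEvents call are sketched after the eventData sample further below.

# Hypothetical sketch of the Lambda's retrieve-and-deduplicate step.
# Not the integration's actual code; REDIS_HOST is an assumed variable.
import os

import redis     # ElastiCache for Redis client
import requests  # used to call the Kong Admin API

def fetch_new_audit_entries():
    r = redis.Redis(host=os.environ["REDIS_HOST"], db=int(os.environ.get("REDIS_DB", 0)))

    # The Admin API is served over TLS; write the root CA from the environment
    # to disk so that requests can verify the certificate.
    ca_path = "/tmp/kong-root-ca.pem"
    with open(ca_path, "w") as f:
        f.write(os.environ["KONG_ROOT_CA"])

    # Pull Request Audit Log entries from the Kong Admin API.
    resp = requests.get(
        f"{os.environ['KONG_ADMIN_API']}/audit/requests",
        headers={"Kong-Admin-Token": os.environ["KONG_ADMIN_TOKEN"]},
        verify=ca_path,
    )
    resp.raise_for_status()

    # SET NX only succeeds for request_ids that have not been seen before, and
    # the Redis key expires alongside the Kong entry's remaining TTL.
    new_entries = []
    for entry in resp.json().get("data", []):
        if r.set(entry["request_id"], 1, nx=True, ex=entry["ttl"]):
            new_entries.append(entry)
    return new_entries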
At a high level, the requirements for kicking off this integration are a CloudTrail Lake channel, audit logging enabled on Kong Enterprise, and the additional AWS infrastructure deployed with Terraform.
Audit logging is disabled by default. It is configurable via the Kong configuration (e.g. kong.conf):
audit_log = on # audit logging is enabled
This can generate more audit logs than you are interested in, so it may be desirable to ignore certain requests. To this end, the audit_log_ignore_methods and audit_log_ignore_paths configuration options are available:
audit_log_ignore_methods = GET,OPTIONS # do not generate an audit log entry for GET or OPTIONS HTTP requests
audit_log_ignore_paths = /foo,/status,^/services,/routes$,/one/.+/two,/upstreams/ # do not generate an audit log entry for requests that match these regular expressions
For more information on Audit Log Configuration please refer to the documentation, Kong Gateway Admin API Audit Logging.
With the channel ARN created and audit logging enabled on Kong Enterprise, you are ready to deploy the additional AWS components to complete the integration.
Here is a brief overview of the required and optional parameters for the Terraform tfvars file. A more detailed description can be found in the Kong CloudTrail Integration GitHub documentation.
existing_vpc        = "vpc-uzcrqlyml0mdejmduvy"
existing_subnet_ids = ["subnet-zmuavkc6xnatd7cd1bm", "subnet-7n4ae9ua3uhjw5dhgzx", "subnet-kan0csgez5lwh5ancl0"]
security_group      = "kong-ct-sg"
lambda_env = {
  KONG_ADMIN_API   = "https://ec2-5-531-26-7.compute-1.amazonaws.com:8444"
  KONG_SUPERADMIN  = true
  KONG_ADMIN_TOKEN = "test"
  KONG_ROOT_CA     = "-----BEGIN CERTIFICATE-----content-----END CERTIFICATE-----"
  REDIS_DB         = 0
  CHANNEL_ARN      = "arn:aws:cloudtrail:us-east-1:123456789651:channel/07441ab6-c4a1-4c8a-943d-a2f0c50c8a76"
}
channel_arn   = "arn:aws:cloudtrail:us-east-1:123456789651:channel/07441ab6-c4a1-4c8a-943d-a2f0c50c8a76"
image         = "kong/cloudtrails-integration:1.0.0"
resource_name = "kong-ct-integration"
With the tfvars file ready, the Terraform execution plan can be created and applied:
Step 1 – Export AWS variables:
export AWS_ACCESS_KEY_ID=
export AWS_SECRET_ACCESS_KEY=
Step 2 – Navigate to terraform/ in this repo and spin up the additional infrastructure:
terraform init
terraform plan -out=plan.out -var-file 'my-vars.tfvars'
terraform apply "plan.out"
Below is a quick review of how to validate that the infrastructure installed successfully and how to view the Kong audit logs in CloudTrail Lake.
With the infrastructure validated, the Lambda function will be triggered hourly by CloudWatch. Once it has run, logs should be present in the CloudTrail event data store. To view them, navigate in the AWS console to CloudTrail → Lake, and use the Editor tab to query the event data store.
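Such a query can also be run programmatically. Below is a hypothetical boto3 example; the event data store ID in the FROM clause is a placeholder, and the column references follow the eventData sample shown further below.

# Hypothetical example of querying the event data store with boto3.
# The event data store ID in the FROM clause is a placeholder.
import time

import boto3

cloudtrail = boto3.client("cloudtrail")

query = """
SELECT eventData.eventname, eventData.sourceipaddress, eventData.eventtime
FROM a1b2c3d4-5678-90ab-cdef-EXAMPLE11111
WHERE eventData.eventsource = 'KongGatewayEnterprise'
ORDER BY eventData.eventtime DESC
"""

query_id = cloudtrail.start_query(QueryStatement=query)["QueryId"]

# Poll until the query finishes, then print each matching row.
while cloudtrail.describe_query(QueryId=query_id)["QueryStatus"] in ("QUEUED", "RUNNING"):
    time.sleep(2)

for row in cloudtrail.get_query_results(QueryId=query_id)["QueryResultRows"]:
    print(row)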
Let’s review the mapping of an audit log entry to eventData in the data store.
The Kong Request Audit Log entries provide the following information:
{ "client_ip": "12.34.56.789", "signature": null, "removed_from_payload": null, "status": 200, "ttl": 2511498, "rbac_user_id": null, "path": "/event-hooks/?size=100", "payload": null, "request_timestamp": 1663014672, "workspace": "cca82c73-2365-441b-8860-9e074d93b205", "method": "GET", "request_id": "8siS4h7qIzhn9op2XNUx4WnnFO1nxJj5" }
In the event store, the eventData field is populated with the data retrieved from Kong. Below is a sample of the Kong audit log transformed into the event data object. Importantly, no information from the original audit log entry is lost; it is all mapped to the eventData object.
{
  version=2.8.1.1-enterprise-edition,
  useridentity={type=, principalid=anonymous, details={RBAC=Anonymous User on Kong Gateway: Please Enable RBAC on Kong Gateway}},
  useragent=null,
  eventsource=KongGatewayEnterprise,
  eventname=GET/event-hooks/,
  eventtime=2022-09-15 13:59:38.000,
  uid=8siS4h7qIzhn9op2XNUx4WnnFO1nxJj5,
  requestparameters={queryParameters=size=100},
  responseelements=null,
  errorcode=null,
  errormessage=null,
  sourceipaddress=12.34.56.789,
  recipientaccountid=123456789,
  additionaleventdata={
    workspace=cca82c73-2365-441b-8860-9e074d93b205,
    konghostname=http://mykong-gateway.com:8001,
    method=GET,
    signature=,
    ttl=2511498,
    status=200
  },
  ...
}
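For completeness, here is a hypothetical sketch of the reshaping and delivery step, continuing the de-duplication sketch earlier in this post. The field names follow the sample above, but the exact casing reflects my reading of CloudTrail Lake's open audit events schema; this is not the integration's actual code.

# Hypothetical sketch of reshaping Kong audit entries into the eventData payload
# and delivering them to the channel via PutAuditEvents. Not the integration's
# actual code; field casing is an assumption.
import json
import os
from datetime import datetime, timezone

import boto3

def publish(entries):
    ct = boto3.client("cloudtrail-data")  # CloudTrail Lake ingestion API

    audit_events = []
    for e in entries:
        event_data = {
            "version": "2.8.1.1-enterprise-edition",  # Kong Gateway version, hardcoded for the sketch
            "userIdentity": {"type": "", "principalId": e["rbac_user_id"] or "anonymous"},
            "eventSource": "KongGatewayEnterprise",
            "eventName": e["method"] + e["path"].split("?")[0],
            "eventTime": datetime.fromtimestamp(
                e["request_timestamp"], tz=timezone.utc
            ).strftime("%Y-%m-%d %H:%M:%S"),
            "UID": e["request_id"],
            "sourceIPAddress": e["client_ip"],
            "requestParameters": {"queryParameters": e["path"].partition("?")[2]},
            "additionalEventData": {
                "workspace": e["workspace"],
                "konghostname": os.environ["KONG_ADMIN_API"],
                "method": e["method"],
                "ttl": e["ttl"],
                "status": e["status"],
            },
        }
        audit_events.append({"id": e["request_id"], "eventData": json.dumps(event_data)})

    # Submit the reshaped entries to the CloudTrail Lake channel.
    if audit_events:
        ct.put_audit_events(auditEvents=audit_events, channelArn=os.environ["CHANNEL_ARN"])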
With the Kong CloudTrail Lake Integration, the objective is to simplify compliance efforts by hosting the Kong Gateway audit logs in AWS, alongside the rest of your AWS infrastructure and event activity. You can learn more about this integration at the GitHub repo, Kong CloudTrail Integration, and at AWS CloudTrail.