Engineering
January 15, 2021
5 min read

Building Kong Clusters in AWS with the Terraform API Gateway Module


We created the Terraform API gateway module to help you follow DevOps best practices while implementing Kong using infrastructure as code (IaC). Terraform is an open source tool that lets you implement IaC through declarative configuration files. This Terraform module is the reference platform maintained by Kong for prospective and existing customers to quickly set up both Kong Gateway and Kong Enterprise for demo and PoC environments. You can extend this open source module to fit your production and enterprise needs.

In this article, we'll walk you through the entire process of setting up the Terraform module, including:

  • Provisioned AWS resources
  • Cloud and security best practices
  • Tuning your environment with variables

Set Up Prerequisites

The prerequisites are pretty minimal.

  • AWS VPC
  • Tagged private and public subnets
  • Database Subnet Group
  • Cache Subnet Group (if enabling Redis)
  • An SSH Key
  • A managed SSL certificate

Beyond that, we've listed all the setup variables and how you can customize your Kong cluster in GitHub.
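
If you manage the VPC with Terraform yourself, tagging the subnets and creating the database subnet group might look something like the sketch below. The tag key and names are placeholders; check the module's variables and documentation for the exact tags it expects.

resource "aws_subnet" "private_a" {
  vpc_id            = aws_vpc.main.id   # assumes the VPC is defined elsewhere in your configuration
  cidr_block        = "10.0.1.0/24"
  availability_zone = "us-west-2a"

  tags = {
    Name = "kong-private-us-west-2a"
    Type = "private"   # placeholder: the tag the module's subnet lookup keys off of
  }
}

resource "aws_db_subnet_group" "kong" {
  name       = "kong"
  subnet_ids = [aws_subnet.private_a.id, aws_subnet.private_b.id]   # private_b defined like private_a
}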

Instantiate the Terraform API Gateway Module

The only Terraform file you need is a main.tf.

First, define a Terraform provider, which needs to be AWS. The provider will rely on your AWS CLI configuration. With the snippet shown after this list, you're telling the provider that you want to:

  1. Build this API gateway cluster in the us-west-2 region.
  2. Use the AWS CLI profile called "dev." That profile's access key and secret access key are used to call Amazon's API for provisioning services.
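
A minimal provider block along those lines might look like this (a sketch; swap in your own region and profile):

provider "aws" {
  region  = "us-west-2"   # build the API gateway cluster in US West (Oregon)
  profile = "dev"         # the AWS CLI profile whose credentials Terraform will use
}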

Next, call the Kong module. In the snippet below, we're calling it from GitHub and referencing a specific version. We recommend pinning to a particular version to avoid issues from changes between releases.

The module call also takes a minimal set of variables that you need to define. These come from the prerequisites we listed earlier. And even though you may not use the Enterprise version, you still need to set those certificate variables to something.

Lastly, supply some tags to identify the resources that are going to be created.
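
Putting it all together, the module call might look something like the sketch below. The version ref, VPC name, key name, certificate names, and tags are illustrative placeholders, and the variables shown are only a subset; check the module's GitHub documentation for the exact inputs.

module "kong" {
  # Pin to a specific release of the module (the ref shown is illustrative)
  source = "github.com/Kong/kong-terraform-aws?ref=v3.3"

  # Minimal inputs drawn from the prerequisites (placeholder values)
  vpc          = "my-vpc"
  environment  = "dev"
  ec2_key_name = "my-ssh-key"

  # Certificates still need to be set to something, even without Enterprise
  ssl_cert_external = "api.example.com"
  ssl_cert_internal = "api.internal.example.com"
  ssl_cert_admin    = "admin.example.com"
  ssl_cert_manager  = "manager.example.com"
  ssl_cert_portal   = "portal.example.com"

  # Tags to note the resources being created
  tags = {
    Owner = "platform-team"
    Team  = "api-gateway"
  }
}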

Create the Terraform Module Resource in AWS

To create the resources, run terraform init. Doing so will download a local copy of the module to your system and set up the environment.

Then, terraform plan -out kong.plan will take a look at what’s in your AWS environment in terms of the resources it’s going to use, like subnets in the subnet groups for databases and caching.

Next, terraform apply kong.plan will provision all necessary items. Terraform is stateful in that sense. If Terraform errors out or times out, it can refresh that state with a new plan and apply only the necessary changes. Likewise, if you start with a minimal configuration and then want to tweak it later on, you can apply those specific changes without destroying and rebuilding everything from scratch.

It takes about 10 minutes to provision all the resources because the database, RDS, takes a while to create, configure, and run the database engine.

Terraform can completely manage everything. Instead of continuing to pay for resources you no longer need, you can run terraform destroy. It will take your AWS account back to where it was before you ran the Kong module.

By default, Terraform stores state as local files. (These are Terraform state files and are not related to Kong's enterprise offering.) In production, move them to remote state.
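
A sketch of what remote state in an encrypted S3 bucket can look like (the bucket must already exist; the names here are placeholders):

terraform {
  backend "s3" {
    bucket  = "my-terraform-state"          # pre-existing, encrypted S3 bucket
    key     = "kong/dev/terraform.tfstate"  # path to the state file within the bucket
    region  = "us-west-2"
    encrypt = true                          # encrypt the state object at rest
  }
}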

Options for Additional Security with Enterprise Edition

Suppose you don't want to store your Enterprise license key or Bintray credentials in a Git repo. In that case, you can go into the Systems Manager Parameter Store and update the values (Systems Manager -> Parameter Store and /[service]/[environment]/ee/license).

We highly recommend using an encrypted AWS S3 bucket to store your state because there could be sensitive information in it. You don't want that in your GitHub repository, and S3-backed state is far more shareable in an enterprise, multi-user environment. It can also be backed up to avoid problems.

The Parameter Store is a secure key-value service in AWS. At rest, your license and Bintray credentials are encrypted. We use an IAM instance profile, so the Kong nodes are only permitted to read those values. Once the module provisions everything, you can use your EC2 instance key to SSH into an instance and begin setting up APIs in Kong.

ssh -i [/path/to/key/specified/in/ec2_key_name] ubuntu@[ec2-instance]

Using EC2 Terraform to Provision Kong Instances

ec2.tf contains the launch configuration and the auto-scaling group for Kong. The launch configuration defines the instance type, the image ID we use to provision Kong, and the security groups. The auto-scaling group defines how many Kong instances we're going to run and manage and which load balancers we associate with them. We're also adding tags that get propagated at launch time.
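
As a rough illustration of that pattern (a generic sketch, not the module's exact code; the AMI, instance type, counts, and references to variables and resources defined elsewhere are placeholders):

resource "aws_launch_configuration" "kong" {
  name_prefix     = "kong-"
  image_id        = var.ami_id                              # AMI used to provision Kong
  instance_type   = "t3.medium"                             # illustrative instance type
  key_name        = var.ec2_key_name                        # the SSH key from the prerequisites
  security_groups = [aws_security_group.kong.id]            # Kong security group defined elsewhere
  user_data       = data.template_file.cloud_init.rendered  # cloud-init config, covered below

  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_autoscaling_group" "kong" {
  name_prefix          = "kong-"
  launch_configuration = aws_launch_configuration.kong.name
  vpc_zone_identifier  = data.aws_subnet_ids.private.ids    # run the nodes in the private subnets
  min_size             = 2                                  # how many Kong instances to run and manage
  max_size             = 4
  desired_capacity     = 2
  target_group_arns    = [aws_lb_target_group.kong.arn]     # load balancer association

  tag {
    key                 = "Name"
    value               = "kong"
    propagate_at_launch = true                              # tags propagated at launch time
  }
}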

Using Cloud Init to Configure Kong Instances

The cloud-init.tf file passes in a template and applies a cloud-init configuration at the nodes' boot time. It also passes in a set of variables that dynamically configure the actual host.
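
A simplified sketch of that wiring (again, not the module's exact code; the template path and variable names are illustrative):

data "template_file" "cloud_init" {
  # Render the cloud-init template with values for this environment
  template = file("${path.module}/templates/cloud-init.cfg")

  vars = {
    environment = var.environment
    region      = var.region
  }
}

The rendered output is what the launch configuration hands to each node as user data, so every instance the auto-scaling group brings up boots with the same configuration.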

The cloud-init.cfg file is very minimal. It creates a user and group for running Kong itself, and it doesn't add packages you don't need: just a minimal set of dependencies for Kong and a few debugging tools you can use.

The cloud-init.sh script uses the AWS CLI to get the secure values from the Parameter Store. That way, nothing sensitive gets written into the module, or your instance of the module, in cleartext. Here's what cloud-init.sh will do:

  1. Enable auto-updates to make sure that the hosts are secure.
  2. Install decK and Kong.
  3. Configure the database.
  4. Set up the Kong configuration file to dynamically pull values for you.
  5. Configure log rotation.
  6. Provide Kong’s health checks.
  7. Expose the admin API status via Kong on the proxy port.
  8. Restrict the endpoint to the VPC CIDR block so that the load balancers can check the Kong nodes' health without requiring authentication.
  9. Restart Kong and then your nodes are up and running successfully.

Leveraging Data from the Terraform API Gateway Module

data.tf defines the data the module pulls dynamically: your public and private subnet IDs, based on the subnet tags, and a default security group that you can associate with your Kong nodes. On top of that, you can use a security group to allow access to the Kong-specific ports, which can be useful for a remote monitoring or alerting tool sitting in your default security group.
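
For example, a tag-based subnet lookup looks something like this (the tag keys and values are placeholders for whatever you tagged your subnets with):

data "aws_vpc" "vpc" {
  # Find the existing VPC by its Name tag
  tags = {
    Name = "my-vpc"
  }
}

data "aws_subnet_ids" "private" {
  # Pull the private subnet IDs dynamically, based on the subnet tag
  vpc_id = data.aws_vpc.vpc.id

  tags = {
    Type = "private"
  }
}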

Access Parameter Store Values With IAM.tf

IAM.tf gives Kong access to any secured Parameter Store values. The Kong nodes use it to get parameters from the Parameter Store and decrypt them using the key you set up earlier. Each node has an associated IAM role policy, so it doesn't require a username and password embedded somewhere. It leverages AWS's instance role profiles with the EC2 metadata to get a temporary token and securely access those resources.
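
In broad strokes, the role policy grants read access to the parameter path and decrypt access to the key. The sketch below is not the module's exact policy; it assumes a Kong IAM role and a KMS key ARN defined elsewhere, and the parameter path is a placeholder.

data "aws_iam_policy_document" "parameter_store" {
  statement {
    # Read only this service/environment's parameters
    actions   = ["ssm:GetParameter", "ssm:GetParameters"]
    resources = ["arn:aws:ssm:us-west-2:*:parameter/kong/dev/*"]
  }

  statement {
    # Decrypt the SecureString values with the key you set up earlier
    actions   = ["kms:Decrypt"]
    resources = [var.kms_key_arn]
  }
}

resource "aws_iam_role_policy" "parameter_store" {
  name   = "parameter-store-read"
  role   = aws_iam_role.kong.id   # the role behind the Kong nodes' instance profile
  policy = data.aws_iam_policy_document.parameter_store.json
}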