# Building Kong Clusters in AWS with the Terraform API Gateway Module
We created the Terraform API gateway module to help you follow DevOps best practices while implementing Kong using infrastructure as code (IaC). Terraform is an open source tool that lets you implement IaC through declarative configuration files. This Terraform module is the reference platform maintained by Kong for potential and existing customers to quickly set up both [Kong Gateway](https://konghq.com/kong) and [Kong Enterprise](https://konghq.com/products/kong-enterprise) for demo and PoC environments. You can extend this open source module to fit your production and enterprise needs.
In this article (or, if you prefer, the video below), we'll walk you through the entire process of setting up the Terraform module.
First, define a Terraform provider, which in this case is AWS. The provider relies on your local AWS CLI configuration. In the provider block, you tell Terraform to:
- Use the AWS CLI profile called "dev." Terraform will use that profile's API key and secret key to access Amazon's API when provisioning services.
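A minimal provider block along these lines would do it (the profile name "dev" and the region are illustrative; use your own values):

```hcl
# Configure the AWS provider using the "dev" profile from your local
# AWS CLI configuration (~/.aws/credentials).
provider "aws" {
  profile = "dev"        # profile holding the API key and secret key
  region  = "us-west-2"  # region is an assumption; substitute your own
}
```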
Next, call the Kong module. We call the module directly from its GitHub repository and reference a specific version. We recommend pinning to a particular version to avoid issues from changes between releases.
Beyond that, there is a minimal set of variables you need to define. These come from the prerequisites we listed earlier. Even if you aren't using the Enterprise version, you still need to set the certificate variables to placeholder values.
Lastly, supply some tags so you can identify the resources the module is going to create.
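Putting those pieces together, a module call might look like the following sketch. The version tag, variable names, and values here are illustrative; check the module's variables.tf for the exact inputs your version expects.

```hcl
module "kong" {
  # Pin to a specific release tag to avoid breaking changes between versions
  # (the tag shown here is an example, not a recommendation)
  source = "github.com/Kong/kong-terraform-aws?ref=v3.3"

  # Minimal inputs from the prerequisites (names are illustrative)
  vpc          = "my-vpc"
  environment  = "dev"
  ec2_key_name = "my-ssh-key"

  # Even without Enterprise, the cert variables must be set to something
  ssl_cert_external = "placeholder"
  ssl_cert_internal = "placeholder"
  ssl_cert_admin    = "placeholder"

  # Tags to identify the resources the module creates
  tags = {
    Owner       = "platform-team"
    Environment = "dev"
  }
}
```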
## **Create the Terraform Module Resource in AWS**
To create the resources, run `terraform init`. Doing so will download a local copy of the module to your system and set up the environment.
Then, `terraform plan -out kong.plan` examines what's in your AWS environment in terms of the resources the module is going to use, such as the subnets in the subnet groups for databases and caching.
Next, `terraform apply kong.plan` will provision all the necessary resources. Terraform is stateful in that sense: if it errors out or times out, it can refresh its state with a new plan and apply only the necessary changes. Likewise, if you start with a minimal configuration and want to tweak it later, you can apply those specific changes without destroying and rebuilding everything from scratch.
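In full, the workflow is three commands, run from the directory containing your configuration:

```shell
terraform init                 # download the module and initialize the working directory
terraform plan -out kong.plan  # preview the changes and save them to a plan file
terraform apply kong.plan      # provision the resources described in the plan
```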
It takes about 10 minutes to provision all the resources, mostly because RDS takes a while to create, configure, and start the database engine.
Terraform can manage the full lifecycle. Instead of paying for resources you're no longer using, you can run `terraform destroy`, which takes your AWS account back to where it was before you ran the Kong module.
By default, Terraform stores state as local files. (These files are a Terraform mechanism and are unrelated to Kong's enterprise offering.) In production, move the state to a remote backend.
### ***Options for Additional Security with Enterprise Edition***
Suppose you don't want to store your Enterprise license key or Bintray authentication in a Git repo. In that case, you can go into the Systems Manager Parameter Store and update the values there (Systems Manager -> Parameter Store, under /[service]/[environment]/ee/license).
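For example, you could store the license as an encrypted parameter with the AWS CLI. The path segments and file name here are illustrative; substitute your own service and environment names:

```shell
aws ssm put-parameter \
  --name "/kong/dev/ee/license" \
  --type SecureString \
  --value "$(cat license.json)" \
  --overwrite
```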
We highly recommend storing your state in an encrypted AWS S3 bucket, because the state can contain sensitive information that you don't want in your GitHub repository. S3-backed state is also far more shareable in an enterprise, multiuser environment, and it gets backed up, which helps you avoid losing it.
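A remote backend configuration along these lines would do it (the bucket name, key, and region are assumptions; the bucket must already exist):

```hcl
terraform {
  backend "s3" {
    bucket  = "my-terraform-state"        # pre-existing, encrypted S3 bucket
    key     = "kong/dev/terraform.tfstate"
    region  = "us-west-2"
    encrypt = true                        # encrypt the state object at rest
  }
}
```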
The Parameter Store is a secure key-value service in AWS. At rest, your license and Bintray credentials are encrypted. We use an IAM instance profile, so the Kong nodes are only permitted to read those values. Once the module has provisioned everything, you can use your EC2 instance key to SSH into an instance and begin setting up APIs in Kong.
## **Using ec2.tf to Provision Kong Instances**
[ec2.tf](https://github.com/Kong/kong-terraform-aws/blob/main/ec2.tf) contains the Terraform launch configuration and the auto-scaling group for Kong. Together they define the instance type, the image ID used to provision Kong, and the security groups. The auto-scaling group defines how many Kong instances we're going to run and manage, and which load balancers we're going to associate with them. We also add tags that get propagated to the instances at launch time.
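Stripped down, those resources follow a shape like this. The names and values are illustrative, not the module's exact code:

```hcl
resource "aws_launch_configuration" "kong" {
  name_prefix     = "kong-"
  image_id        = var.ami_id          # image used to provision Kong
  instance_type   = var.instance_type
  security_groups = [aws_security_group.kong.id]

  lifecycle {
    create_before_destroy = true        # replace instances without downtime
  }
}

resource "aws_autoscaling_group" "kong" {
  name                 = "kong-asg"
  launch_configuration = aws_launch_configuration.kong.name
  min_size             = 2              # how many Kong instances to run
  max_size             = 4
  vpc_zone_identifier  = var.private_subnet_ids
  target_group_arns    = var.target_group_arns  # load balancers to associate

  # Tags propagated to each instance at launch
  tag {
    key                 = "Environment"
    value               = "dev"
    propagate_at_launch = true
  }
}
```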
## **Using Cloud Init to Configure Kong Instances**
On boot, the cloud-init script will:
- Enable auto-updates to make sure that the hosts are secure.
- Install decK and Kong.
- Configure the database.
- Set up the Kong configuration file to dynamically pull values for you.
- Configure log rotation.
- Provide Kong’s health checks.
- Expose the admin API status via Kong on the proxy port.
- Restrict the endpoint to the VPC CIDR block so that the load balancers can check the Kong nodes' health without requiring authentication.
- Restart Kong, after which your nodes are up and running.
## **Leveraging Data from the Terraform API Gateway Module**
In [data.tf](https://github.com/Kong/kong-terraform-aws/blob/main/data.tf), the module dynamically pulls data from your environment: your public and private subnet IDs, looked up by subnet tag, and a default security group that you can associate with your Kong nodes. On top of that, you can use that security group to allow access to the Kong-specific ports, which is useful, for example, for a remote monitoring or alerting tool in your default security group.
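As a sketch, a tag-based lookup of those subnets might look like this (the tag keys and values are assumptions):

```hcl
# Find the VPC by its Name tag
data "aws_vpc" "vpc" {
  tags = {
    Name = var.vpc
  }
}

# Look up the private subnets in that VPC by tag
data "aws_subnet_ids" "private" {
  vpc_id = data.aws_vpc.vpc.id

  tags = {
    Type = "private"  # tag key/value are illustrative
  }
}
```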
## **Access Parameter Store Values with iam.tf**
[iam.tf](https://github.com/Kong/kong-terraform-aws/blob/main/iam.tf) grants Kong access to the secured Parameter Store values. Each node can use this to fetch parameters from the Parameter Store and decrypt them with the key you set up earlier. Each node has an associated IAM role policy, so no username and password need to be embedded anywhere. Instead, it leverages AWS's instance role profiles, via the EC2 metadata service, to get a temporary token that securely accesses those resources.
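The policy attached to the instance role might resemble this sketch (the parameter path, role reference, and key variable are illustrative):

```hcl
# Allow Kong nodes to read and decrypt their parameters, and nothing more
resource "aws_iam_role_policy" "parameter_store" {
  name = "kong-parameter-store"
  role = aws_iam_role.kong.id  # illustrative role reference

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect   = "Allow"
        Action   = ["ssm:GetParameter", "ssm:GetParameters"]
        Resource = "arn:aws:ssm:*:*:parameter/kong/dev/*"
      },
      {
        Effect   = "Allow"
        Action   = ["kms:Decrypt"]
        Resource = var.kms_key_arn  # key used to encrypt the parameters
      }
    ]
  })
}
```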