How an API Gateway Secures APIs

By Kat Morgan on March 15, 2022

Kong API Gateway on Kubernetes with Pulumi

The Kong Laboratory – Kong API Gateway

The quest for resilience and agility has driven us into the modern age of microservices. Bringing services to market on a microservice architecture means navigating a sprawling landscape of technologies and tooling. While daunting at first glance, the process breaks down into three major categories:

  • Infrastructure Orchestration
  • Application Deployment
  • Service Publishing

In this hands-on series, we will use:

  • Kubernetes-in-Docker (Kind) as our infrastructure platform.
  • Pulumi to orchestrate our supporting infrastructure and deploy our applications.
  • And finally, Kong API Gateway for publishing the services that we have deployed.

Key Concepts

Kong API Gateway is an API Gateway and Ingress Controller. At its core, Kong is a reverse proxy that allows an organization to offer APIs as a product to internal and external clients via a centralized ingress point. An API Gateway truly begins to shine when leveraged to consolidate capabilities such as authentication, RBAC, session handling, rate limiting, request & response transformation, redirection, load balancing, traffic monitoring, and logging. These advanced routing features offload enforcement, maintenance, and visibility from the application teams, improving their agility and consolidating this functional ownership into a central location for improved global consistency and visibility.
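As a concrete example of that consolidation, the sketch below attaches a rate-limiting policy to a single service through Kong's Admin API, so the application itself never has to implement throttling. The `podinfo` service name and the Admin API address are assumptions for illustration; adjust them to your deployment.

```shell
# Sketch: attach Kong's rate-limiting plugin to one service via the Admin API.
# KONG_ADMIN and the "podinfo" service name are assumptions for this example.
KONG_ADMIN="${KONG_ADMIN:-http://localhost:8001}"
SERVICE="podinfo"

# Show the request, then attempt it (it only succeeds against a live Admin API).
echo "POST ${KONG_ADMIN}/services/${SERVICE}/plugins name=rate-limiting config.minute=5"
curl -sS -X POST "${KONG_ADMIN}/services/${SERVICE}/plugins" \
  --data "name=rate-limiting" \
  --data "config.minute=5" 2>/dev/null || true
```

With this policy in place, Kong rejects requests beyond five per minute before they ever reach the upstream service, which is exactly the kind of cross-cutting concern a gateway is meant to own.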

Pulumi is an Infrastructure as Code (IaC) or Infrastructure as Software (IaS) cloud engineering platform. Pulumi supports IaC/IaS patterns in popular programming languages including Python, JavaScript, TypeScript, Go, and .NET/C#. At its heart, the Pulumi ecosystem is a cloud engineering platform and SDK that brings developer, operations, and security teams together in a unified software engineering process, backed by a suite of OpenGitOps-compliant tools.


Host Setup

This article is designed for you to follow along on your macOS or Linux laptop. Before starting, please check that you have installed all of the dependencies listed in the Appendix.

Okay, now that you have your dependencies, let’s grab the code and get your system ready to build the lab platform.

  1. Write hosts file entries to resolve your lab domain names locally.
  2. Create Docker volumes for persistent local container image caching.
  3. Clone TheKongLaboratory git repo.
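The three preparation steps can be sketched as follows. The hostnames match the URLs used later in this article, but the Docker volume names and the repository URL are placeholders; substitute the exact values from TheKongLaboratory README.

```shell
# Sketch of the three host-setup steps. Volume names and REPO_URL are
# placeholders; substitute the values from TheKongLaboratory README.
LAB_IP="127.0.0.1"

# 1. Hosts entries so the lab domains resolve locally (the append needs sudo).
for host in manager.kong.kind.home.arpa podinfo.apps.kind.home.arpa; do
  entry="${LAB_IP} ${host}"
  echo "${entry}"
  # grep -qF "${entry}" /etc/hosts || echo "${entry}" | sudo tee -a /etc/hosts
done

# 2. Docker volumes for persistent local container image caching.
for vol in kind-root kind-containerd; do
  docker volume create "${vol}" 2>/dev/null || echo "would create volume: ${vol}"
done

# 3. Clone the lab repository (REPO_URL is a placeholder).
REPO_URL="${REPO_URL:-https://github.com/<org>/thekonglaboratory.git}"
git clone "${REPO_URL}" 2>/dev/null || echo "would clone: ${REPO_URL}"
```

The commented-out `/etc/hosts` write is deliberate: review the entries before appending them with sudo.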

Pulumi Infrastructure as Code

Great! Reviewing our checklist, we now have:

  • ✓ Installed all dependencies.
  • ✓ Configured /etc/hosts to resolve our domain names to our local IP.
  • ✓ Created local cache volumes for kind node images.
  • ✓ Cloned the demo repo codebase.

Your system is ready to run the lab, and you have the code! Next, before we can deploy the Kong API Gateway, we need to initialize the Pulumi codebase and configure a Stack.

  1. Configure the Pulumi local state provider.
  2. Initialize & select a Pulumi Stack.
  3. Set the Pulumi Stack configuration variables.
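In CLI terms, those three steps look roughly like the sketch below. The stack name `dev` and the example config key are illustrative assumptions; the repo defines the actual stack name and keys.

```shell
# Sketch of the Pulumi stack setup. Stack name and config keys are
# illustrative assumptions; use the values from the repo's README.
export PULUMI_CONFIG_PASSPHRASE="${PULUMI_CONFIG_PASSPHRASE:-}"  # local-backend secret store
run() { echo "+ $*"; "$@" 2>/dev/null || true; }  # print each command, tolerate missing tools

# 1. Use the local filesystem as the state backend (no Pulumi SaaS account needed).
run pulumi login --local

# 2. Initialize and select a stack.
run pulumi stack init dev
run pulumi stack select dev

# 3. Set stack configuration variables (example key only; real keys live in the repo).
run pulumi config set kubernetes:context kind-kind
```

The `run` wrapper just echoes each command before attempting it, so the sketch doubles as a dry run on a machine where Pulumi is not yet installed.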

Deploy Kong API Gateway

Reviewing our checklist again, we now have:

  • ✓ Installed all dependencies.
  • ✓ Configured /etc/hosts to resolve our domain names to our local IP.
  • ✓ Created local cache volumes for kind node images.
  • ✓ Cloned the demo repo codebase.
  • ✓ Initialized & configured our Pulumi Stack.

Now, it is time to start your Kind cluster and deploy Kong to it!

  1. Deploy the Kong Gateway Stack.
  2. Go ahead and open up the Kong Manager UI!
    >> https://manager.kong.kind.home.arpa/
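The deploy step boils down to a single `pulumi up`, followed by a quick smoke test of the Manager UI. This is a sketch: the preview output is elided, and the `-k` flag is there because the lab terminates TLS with a self-signed certificate.

```shell
# Sketch: deploy the stack, then smoke-test the Kong Manager UI.
run() { echo "+ $*"; "$@" 2>/dev/null || true; }  # print each command, tolerate missing tools

# 1. Deploy the Kong Gateway stack (--yes skips the interactive preview prompt).
run pulumi up --yes

# 2. Smoke test: -k accepts the lab's self-signed TLS certificate.
run curl -ksS -o /dev/null -w "%{http_code}\n" https://manager.kong.kind.home.arpa/
```

Against a running cluster you would expect an HTTP 200 from the Manager UI; anything else points at the Ingress or the /etc/hosts entries.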

Deploy a Sample App

Let’s go ahead and test our new Kong API Gateway by deploying Podinfo as a sample application to experiment with.

  1. Deploy a simple Podinfo sample application.
  2. Now go check out your Podinfo app at:
    >> https://podinfo.apps.kind.home.arpa/
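Podinfo answers with a small JSON document describing the pod that served the request, which makes it a handy end-to-end check that traffic is actually flowing through Kong. A sketch of that check:

```shell
# Sketch: request Podinfo through the gateway; -k accepts the self-signed cert.
URL="https://podinfo.apps.kind.home.arpa/"
echo "GET ${URL}"
curl -ksS "${URL}" 2>/dev/null || echo "(cluster not running: no response)"
# Against a live cluster, the response is JSON with fields such as the pod
# hostname and the Podinfo version.
```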

Conclusion

Congratulations! In roughly 1,000 lines of TypeScript, we have deployed a working Kong API Gateway and all supporting services with Pulumi! For transparency, that scope covers the Kind cluster itself, the Kong Gateway with its supporting services, and the Podinfo sample application.

Now that you have Kong installed and ready to use, this environment will serve as the foundation for future posts in the DevMyOps series, and it is also a great way to get started with Kong for evaluation and local development purposes.

From here you can continue with configuring Kong Manager and Kong plugins, or you can start using the Kong Ingress Controller to publish services on your kind cluster via Kong.

Appendix

Dependencies

Each dependency below links to its installation docs (Linux / Mac):

  • kubectl: Linux / Mac
  • Docker: Linux / Mac
  • Kind: Linux / Mac
  • Helm: Linux / Mac
  • Pulumi: Linux / Mac
  • npm: Linux / Mac
  • git client: Linux / Mac
  • curl client: Linux / Mac


Cleanup

When you are finished with your local deployment you can clean up all lab artifacts in this order:

  1. Unlock your local secret store.
  2. Destroy the Kong Pulumi Stack.
  3. Delete the Kind Cluster.
  4. Remove the Docker Volumes.
  5. Remove TheKongLaboratory Git Repo.
  6. Manually remove the lab's entries from /etc/hosts.
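Expressed as commands, the teardown might look like the sketch below. The stack, cluster, volume, and directory names are assumptions and should match whatever you used during setup; the final /etc/hosts cleanup stays manual on purpose.

```shell
# Sketch of the teardown; names are assumptions matching the setup sketches.
export PULUMI_CONFIG_PASSPHRASE="${PULUMI_CONFIG_PASSPHRASE:-}"  # 1. unlock the local secret store
run() { echo "+ $*"; "$@" 2>/dev/null || true; }  # print each command, tolerate missing tools

run pulumi destroy --yes                          # 2. destroy the Kong Pulumi Stack
run kind delete cluster                           # 3. delete the Kind cluster
run docker volume rm kind-root kind-containerd    # 4. remove the cache volumes
run rm -rf ./thekonglaboratory                    # 5. remove the cloned repo (path assumption)
# 6. Finally, hand-edit /etc/hosts and delete the lab's *.kind.home.arpa entries.
```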
