How to Automate Deployment of Microservices With an API Gateway to a Multi-Cloud Environment
Mike Bilodeau
In today’s enterprise computing landscape, multi-cloud organizations are quickly becoming the norm rather than the exception. By pairing an API-first strategy with a microservice-based architecture, companies can achieve significant speed to market across multiple clouds. Container orchestration and a well-designed CI/CD strategy are essential components of that journey.
In this article, we will demonstrate how to create an automated workflow for deploying microservices and configuring an API gateway in front of those services. We will use Kong Gateway as our API gateway. All of these components will run inside Kubernetes and be deployed using GitHub Actions. We assume that you already have Kubernetes clusters available and that you can connect to each cluster from your local development machine. You will also need a Docker Hub account so you can build and push the images for our microservices, as well as the Kubernetes package manager, Helm.
We will show you how to do the following:
Create the environment for an automated workflow
Modify deployment scripts
Automatically trigger the build and deploy process which will run on your computer
Verify Kong is running
Verify the upstream service is running
Secure the upstream service
Make changes to the upstream service
Create the Environment for an Automated Workflow
Create a new, blank GitHub repository from the template repository by opening it and clicking the “Use this template” button.
Clone the repository.
Under your GitHub project Settings tab, click on "Secrets." Then add two secrets, DOCKER_USERNAME and DOCKER_PASSWORD, containing your Docker Hub account credentials.
Under your GitHub project Settings tab, click on "Actions," press the “Add runner” button, and follow the instructions for creating a self-hosted GitHub Actions runner. A self-hosted runner is a program that runs on your machine and listens for repository events such as push; when it receives an event, the workflow runs on your machine. Note: Make sure you run the commands from the GitHub instructions inside your your-github-repo directory.
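The registration commands GitHub shows you look roughly like the following sketch; the URL and token below are placeholders, so copy the exact commands from your repository’s “Add runner” page rather than these:

```shell
# Configure the runner against your repository (placeholder URL and token)
./config.sh --url https://github.com/your-account/your-github-repo --token <REGISTRATION-TOKEN>

# Start the runner so it listens for workflow jobs
./run.sh
```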
Once the runner starts successfully, it will report that it is connected to GitHub and listening for jobs.
Now that we have everything running, we can modify some code.
Modify Deployment Scripts
For this exercise, we are going to use a JavaScript action. In a new terminal window, cd into your-github-repo and run the following commands, which download the libraries needed for running JavaScript actions.
npm init -y
npm install @actions/core
npm install @actions/github
Note: You will need Node.js version 12.x or greater.
After the dependencies are installed, open the following files in your favorite text editor. Look for "TODO" and edit appropriately.
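As an illustration of the kind of edit to expect (the exact file names and keys depend on the template repository), one common TODO is pointing the service’s Helm values at your own Docker Hub account:

```yaml
# Hypothetical values.yaml snippet: replace the placeholder with your
# Docker Hub username so Kubernetes can pull the image you push.
image:
  repository: your-dockerhub-username/startrek   # TODO: set your username
  tag: latest
```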
Automatically Trigger a Build and Deploy That Runs on Your Computer
Commit and push your changes to GitHub. This should trigger a build.
Monitor results under your Actions tab.
After you commit and push your changes, GitHub will start the workflow by running through the steps in your main workflow file (see your-github-repo/.github/workflows/main.yml). The entire workflow runs on your local machine. The following are the main steps that our workflow performs.
Main Workflow Steps
Login to your Docker Hub account
Build a Docker image of the upstream service and push to Docker Hub
Deploy and configure Kong inside Kubernetes
Pull your upstream service from Docker Hub and deploy your service to Kubernetes
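The steps above can be sketched as a GitHub Actions workflow like the one below. This is an illustrative outline, not the template repository’s actual main.yml: the job names, chart paths, and image name are assumptions, and the real file in your-github-repo/.github/workflows/main.yml is what actually runs.

```yaml
name: build-and-deploy
on: [push]

jobs:
  deploy:
    # Run on the self-hosted runner registered earlier
    runs-on: self-hosted
    steps:
      - uses: actions/checkout@v2

      # 1. Log in to Docker Hub using the repository secrets
      - name: Docker login
        run: echo "${{ secrets.DOCKER_PASSWORD }}" | docker login -u "${{ secrets.DOCKER_USERNAME }}" --password-stdin

      # 2. Build the upstream service image and push it to Docker Hub
      - name: Build and push image
        run: |
          docker build -t ${{ secrets.DOCKER_USERNAME }}/startrek:${{ github.sha }} services/startrek
          docker push ${{ secrets.DOCKER_USERNAME }}/startrek:${{ github.sha }}

      # 3. Deploy and configure Kong inside Kubernetes via Helm
      - name: Deploy Kong
        run: helm upgrade --install blog kong/kong -n kong-ce

      # 4. Deploy the upstream service to Kubernetes
      - name: Deploy startrek service
        run: helm upgrade --install startrek ./startrek -n kong-ce
```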
Upon successful completion, you should see similar output in the terminal window that is running the self-hosted runner.
If you encounter an error while deploying, check the Actions tab in your GitHub repository.
Verify Kong is Running
kubectl get pods -n kong-ce
You should see output similar to the following:
NAME READY STATUS RESTARTS AGE
blog-kong-85d9dfc685-b72fz 2/2 Running 0 4d5h
startrek-7775df87bf-z6szr 1/1 Running 0 2d6h
Now that our project has been deployed successfully, we are free to make changes to both our services as well as the Kong configuration. First, we need the external host of your Kubernetes cluster.
Execute this command:
kubectl get svc -n kong-ce
You should see output similar to the following:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
blog-kong-proxy LoadBalancer 10.100.103.159 some-external-ip 80:30474/TCP,443:30084/TCP 46h
startrek ClusterIP 10.100.133.50 <none> 5001/TCP 46h
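If you prefer to grab the external address non-interactively, a jsonpath query along these lines should work; note that depending on your cloud provider the field may be `ip` or `hostname`:

```shell
# Print only the load balancer address of the Kong proxy service
kubectl get svc blog-kong-proxy -n kong-ce \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
```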
Verify the Upstream Service Is Running
Copy the EXTERNAL-IP from the blog-kong-proxy record and execute the following:
http http://your-external-host/startrek/ships host:startrek.com
Note: We are using the httpie command-line client. See https://httpie.org for installation instructions.
You should see similar output:
HTTP/1.1 200 OK
Connection: keep-alive
Content-Length: 173
Content-Type: application/json
Date: Fri, 13 Mar 2020 14:18:33 GMT
Server: Werkzeug/1.0.0 Python/3.6.2
Via: kong/2.0.2
X-Kong-Proxy-Latency: 0
X-Kong-Upstream-Latency: 3

{"ships":[{"id":"NCC-1701","name":"USS Enterprise"},{"id":"NCC-1764","name":"USS Defiant"},{"id":"NCC-1031","name":"USS Discovery"},{"id":"NCC-1864","name":"USS Reliant"}]}
Secure the Upstream Service
Now, let’s make some changes to the Kong gateway to enable authentication so we can secure our startrek service. Create a new file called security.yaml inside your your-github-repo/startrek/templates directory and paste the below contents. Then, uncomment line 44, # plugins.konghq.com: startrek-auth, in your-github-repo/startrek/values.yaml. Save your changes, and then commit and push.
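The exact contents of security.yaml aren’t reproduced here. As a sketch, a typical key-auth setup with the Kong Ingress Controller looks like the following; the plugin name startrek-auth matches the annotation referenced in values.yaml, while the consumer name and key value are illustrative placeholders:

```yaml
# Illustrative sketch of security.yaml: enable key-auth and define
# a consumer with an API key credential. Names other than
# startrek-auth are hypothetical.
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: startrek-auth
plugin: key-auth
---
apiVersion: configuration.konghq.com/v1
kind: KongConsumer
metadata:
  name: my-consumer
  annotations:
    kubernetes.io/ingress.class: kong
username: my-consumer
credentials:
  - my-consumer-apikey
---
apiVersion: v1
kind: Secret
metadata:
  name: my-consumer-apikey
stringData:
  kongCredType: key-auth
  key: my-api-key   # placeholder key value
```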
After that is finished deploying, execute http http://your-external-host/startrek/ships host:startrek.com again, and you should see the below output.
HTTP/1.1 401 Unauthorized
Connection: keep-alive
Content-Length: 41
Content-Type: application/json; charset=utf-8
Date: Fri, 13 Mar 2020 14:31:17 GMT
Server: kong/2.0.2
WWW-Authenticate: Key realm="kong"
X-Kong-Response-Latency: 1

{"message":"No API key found in request"}
Add the API key as a request header (the key-auth plugin reads an apikey header by default; substitute the key you configured), and you should see successful results:
http http://your-external-host/startrek/ships host:startrek.com apikey:your-api-key
Feel free to make changes to the startrek service code in your-github-repo/services/startrek/app.py. Commit and push, and your application should reflect your changes. When you make changes to your application, the GitHub Action builds a Docker image and pushes it to your Docker Hub account (see your-github-repo/.github/workflows/main.yml). Log in to your account to see the versioned images.
For this exercise, we used the Kong Community Edition. Kong Enterprise provides additional management and security benefits for enterprise organizations, such as support for OIDC authentication, mutual TLS, Vault integration, and more. It also includes an out-of-the-box Dev Portal for making your APIs discoverable throughout your organization.
Thank you for taking the time to read through this post. Hopefully, you have found this exercise useful. By no means is this a complete CI/CD solution, but it is a starting point and hopefully gets the creativity flowing for some good ideas within your organization.