Engineering
May 20, 2020
4 min read

How to Automate Deployment of Microservices With an API Gateway to a Multi-Cloud Environment

Mike Bilodeau

In today’s enterprise computing landscape, multi-cloud organizations are quickly becoming the norm rather than the exception. By pairing an API-first strategy with a microservice-based architecture, companies can significantly accelerate time to market across multiple clouds. Achieving this requires container orchestration and a well-designed CI/CD strategy.

In this article, we will demonstrate how to create an automated workflow for deploying microservices, as well as configuring an API gateway in front of those services. We will use Kong Gateway as our API gateway. All of these components will run inside Kubernetes and be deployed using GitHub Actions. We assume that you already have Kubernetes clusters available and that you can connect to each cluster from your local development machine. You will also need a Docker Hub account so you can build and push the images for our microservices, and the Kubernetes package manager, Helm, installed.

We will show you how to do the following:

  1. Create the environment for an automated workflow
  2. Modify deployment scripts
  3. Automatically trigger the build and deploy process which will run on your computer
  4. Verify Kong is running
  5. Verify the upstream service is running
  6. Secure the upstream service
  7. Make changes to the upstream service

Create the Environment for an Automated Workflow

  1. Create a new, blank GitHub repository from the template repository by clicking the “Use this template” button.
  2. Clone the repository.
  3. Under your GitHub project Settings tab, click on “Secrets.” Then add two secrets, DOCKER_USERNAME and DOCKER_PASSWORD, containing your Docker Hub account credentials.


  4. Under your GitHub project Settings tab, click on “Actions,” then press the “Add runner” button and follow the instructions for creating a self-hosted GitHub Actions runner. A self-hosted runner is a program that runs on your machine and listens for repository events such as push; when it receives an event, the workflow runs on your machine. Note: Make sure you run the commands from the GitHub instructions inside the your-github-repo directory.


Upon successful startup, the action runner will report that it is listening for jobs.
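The runner setup from the GitHub instructions boils down to two commands once the runner package is downloaded and extracted. The URL and token below are placeholders; the real values come from the “Add runner” page in your repository settings:

```shell
# From inside the your-github-repo directory, after extracting the runner package:

# Register the runner against your repository (the token comes from the Add runner page)
./config.sh --url https://github.com/your-user/your-github-repo --token YOUR_RUNNER_TOKEN

# Start listening for workflow jobs
./run.sh
```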

Now that we have everything running, we can modify some code.

Modify Deployment Scripts

For this exercise, we are going to use a JavaScript action. In a new terminal window, cd into your-github-repo and run the following commands, which download the libraries needed for running JavaScript actions.

  1. npm init -y
  2. npm install @actions/core
  3. npm install @actions/github

Note: You will need Node.js version 12.x or greater.

After the dependencies are installed, open the following files in your favorite text editor. Look for "TODO" and edit appropriately.

  1. your-github-repo/.github/actions/multi-cloud-deploy-action/helm_deploy.sh
  2. your-github-repo/.github/workflows/main.yml
  3. your-github-repo/startrek/values.yaml
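As a rough sketch of what ties these files together (the actual file in the template repository is authoritative; the step names and image tag here are illustrative), a workflow that targets a self-hosted runner and uses the Docker Hub secrets added earlier looks something like this:

```yaml
# .github/workflows/main.yml — illustrative sketch, not the template's exact contents
name: multi-cloud-deploy
on: [push]

jobs:
  build-and-deploy:
    runs-on: self-hosted        # runs on the local runner registered above
    steps:
      - uses: actions/checkout@v2
      - name: Log in to Docker Hub
        run: echo "${{ secrets.DOCKER_PASSWORD }}" | docker login -u "${{ secrets.DOCKER_USERNAME }}" --password-stdin
      - name: Build and push the upstream service image
        run: |
          docker build -t "${{ secrets.DOCKER_USERNAME }}/startrek:${GITHUB_SHA}" services/startrek
          docker push "${{ secrets.DOCKER_USERNAME }}/startrek:${GITHUB_SHA}"
      - name: Deploy Kong and the service with Helm
        run: .github/actions/multi-cloud-deploy-action/helm_deploy.sh
```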

Automatically Trigger a Build and Deploy That Runs on Your Computer

  1. Commit and push your changes to GitHub. This should trigger a build.
  2. Monitor the results under your Actions tab.

After you commit and push your changes, GitHub starts the workflow by running through the steps in your main workflow file (see your-github-repo/.github/workflows/main.yml). The entire workflow runs on your local machine. The main steps the workflow performs are:

Main Workflow Steps
  1. Log in to your Docker Hub account
  2. Build a Docker image of the upstream service and push it to Docker Hub
  3. Deploy and configure Kong inside Kubernetes
  4. Pull your upstream service from Docker Hub and deploy your service to Kubernetes
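The Helm portion of those steps is roughly equivalent to running the following against each cluster. The release names and the kong-ce namespace match the kubectl output shown later in this post, but the exact flags live in helm_deploy.sh, so treat this as a sketch:

```shell
# Add Kong's Helm chart repository
helm repo add kong https://charts.konghq.com
helm repo update

# Install or upgrade the Kong gateway (release "blog-kong") into the kong-ce namespace
helm upgrade --install blog-kong kong/kong --namespace kong-ce

# Install or upgrade the upstream service from the chart in the repository
helm upgrade --install startrek ./startrek --namespace kong-ce
```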

Upon successful completion, you should see output similar to the following in the terminal window that is running the self-hosted runner.


If you encounter an error while deploying, check the Actions tab in your GitHub repository.

Verify Kong is Running

kubectl get pods -n kong-ce

You should see output similar to the following:

NAME                         READY   STATUS    RESTARTS   AGE
blog-kong-85d9dfc685-b72fz   2/2     Running   0          4d5h
startrek-7775df87bf-z6szr    1/1     Running   0          2d6h

Now that our project has been deployed successfully, we are free to make changes to both our services as well as the Kong configuration. First, we need the external host of your Kubernetes cluster.

Execute this command:

kubectl get svc -n kong-ce

You should see output similar to the following:

NAME              TYPE           CLUSTER-IP       EXTERNAL-IP        PORT(S)                      AGE
blog-kong-proxy   LoadBalancer   10.100.103.159   some-external-ip   80:30474/TCP,443:30084/TCP   46h
startrek          ClusterIP      10.100.133.50    <none>             5001/TCP                     46h
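Rather than copying the EXTERNAL-IP by hand, you can pull it out of the service with a jsonpath query. Use .hostname instead of .ip if your provider hands out DNS names for load balancers, as AWS ELBs do:

```shell
# Grab the external address of the Kong proxy service
EXTERNAL_HOST=$(kubectl get svc blog-kong-proxy -n kong-ce \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo "$EXTERNAL_HOST"
```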

Verify the Upstream Service Is Running

Copy the EXTERNAL-IP from the blog-kong-proxy record and execute the following:

http http://your-external-host/startrek/ships host:startrek.com

Note: We are using the HTTPie command-line client. See https://httpie.org for installation instructions.
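If you prefer not to install HTTPie, an equivalent curl invocation is shown below; the Host header matches the hostname Kong routes on:

```shell
# Same request with curl instead of HTTPie
curl -H 'Host: startrek.com' http://your-external-host/startrek/ships
```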

You should see similar output:

HTTP/1.1 200 OK
Connection: keep-alive
Content-Length: 173
Content-Type: application/json
Date: Fri, 13 Mar 2020 14:18:33 GMT
Server: Werkzeug/1.0.0 Python/3.6.2
Via: kong/2.0.2
X-Kong-Proxy-Latency: 0
X-Kong-Upstream-Latency: 3
{
    "ships": [
        {
            "id": "NCC-1701",
            "name": "USS Enterprise"
        },
        {
            "id": "NCC-1764",
            "name": "USS Defiant"
        },
        {
            "id": "NCC-1031",
            "name": "USS Discovery"
        },
        {
            "id": "NCC-1864",
            "name": "USS Reliant"
        }
    ]
}

Secure the Upstream Service

Now, let’s make some changes to the Kong gateway to enable key authentication so we can secure our startrek service. Create a new file called security.yaml inside your your-github-repo/startrek/templates directory and paste in the contents below. Then uncomment line 44, # plugins.konghq.com: startrek-auth, in your-github-repo/startrek/values.yaml. Save your changes, then commit and push.

# security.yaml contents here

apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: startrek-auth
  annotations:
    kubernetes.io/ingress.class: kong-public
plugin: key-auth
---
apiVersion: v1
data:
  key: MTIzNDU=
  kongCredType: a2V5LWF1dGg=
kind: Secret
metadata:
  name: myapp-apikey
  annotations:
    kubernetes.io/ingress.class: kong-public
type: Opaque
---
apiVersion: configuration.konghq.com/v1
kind: KongConsumer
metadata:
  name: myapp
  annotations:
    kubernetes.io/ingress.class: kong-public
username: myapp
credentials:
- myapp-apikey
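The key and kongCredType values in the Secret are just base64-encoded strings. Decoding them shows the plaintext API key and the credential type that the key-auth plugin expects:

```shell
# Decode the Secret's values to see what Kong will consume
echo -n 'MTIzNDU=' | base64 --decode       # prints the API key: 12345
echo -n 'a2V5LWF1dGg=' | base64 --decode   # prints the credential type: key-auth
```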

After the deployment finishes, execute http http://your-external-host/startrek/ships host:startrek.com again, and you should see the output below.

HTTP/1.1 401 Unauthorized
Connection: keep-alive
Content-Length: 41
Content-Type: application/json; charset=utf-8
Date: Fri, 13 Mar 2020 14:31:17 GMT
Server: kong/2.0.2
WWW-Authenticate: Key realm="kong"
X-Kong-Response-Latency: 1
{ "message": "No API key found in request" }

Add the API key like this, and you should see successful results.

http http://your-external-host/startrek/ships host:startrek.com apikey:12345

Make Changes to Upstream Service

Feel free to make changes to the startrek service code in your-github-repo/services/startrek/app.py. Commit and push, and the deployed application should reflect your changes. Whenever you change the application, the GitHub Action builds a new Docker image and pushes it to your Docker Hub account (see your-github-repo/.github/workflows/main.yml). Log in to your account to see the versioned images.


For this exercise, we used the Kong Community Edition. Kong Enterprise provides additional management and security capabilities for enterprise organizations, such as support for OIDC authentication, mutual TLS, Vault integration, and more. It also includes an out-of-the-box Dev Portal for making your APIs discoverable throughout your organization.

Thank you for taking the time to read through this post. Hopefully, you have found this exercise useful. By no means is this a complete CI/CD solution, but it is a starting point and hopefully gets the creativity flowing for some good ideas within your organization.

Topics: API Gateway, Automation, Multi-Cloud
