Engineering
May 19, 2021
7 min read

Moving an Application from VM to Kubernetes

Michael Heap
Sr Director Developer Experience, Kong

Containerization and orchestration are becoming increasingly popular. According to a Market Watch report, the global container market is projected to exceed $5 billion by 2026, up from under $1 billion in 2019. The industry is clearly moving toward containers and orchestration, and one of the most common first steps is moving an application from a VM to Kubernetes.

For companies like Papa John's, Kong Gateway supports connectivity across any infrastructure. With Kong, you can keep your apps connected throughout a transition from VM to Kubernetes. After moving the app to Kubernetes, you can manage the Kubernetes ingress with the Kong Ingress Controller. The Kong Ingress Controller delivers API management, ingress security and service mesh configurability.

In this tutorial, you'll walk through a common scenario: moving an app from a VM into a container and running that container on Kubernetes. We'll cover how to containerize your app, no matter which platform hosts it—even if it's on-prem.


Building the Application on VM

For this scenario, you will build a small server using Python's Flask framework. When this server receives a request, it will return a response that says, “Hello World!” The focus of this post isn't on Python or the application. We'll be using it as an example of how to make an existing VM-based app work in a container.

First, within your project's directory, create a subfolder called app. Inside this subfolder, create a file called app.py which:

  • Imports the Flask library
  • Initializes the Flask constructor and stores the app in a variable called app
  • Configures a single route that maps /hello to a function called hello()
  • Defines hello() to respond with a greeting
  • Calls run() to start the server listening on port 5000

The server code looks like this:

from flask import Flask

app = Flask(__name__)

@app.route("/hello")
def hello():
    return 'Hello World!'

app.run(host='0.0.0.0', port=5000)
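Before wrapping the server in a container, it's worth a quick sanity check that the route behaves as expected. One lightweight way to do that (a sketch using Flask's built-in test client, so nothing needs to be listening on a port, and with the run() call left out) is:

```python
from flask import Flask

app = Flask(__name__)

@app.route("/hello")
def hello():
    return 'Hello World!'

# Flask's test client exercises the route without starting a real server
client = app.test_client()
resp = client.get("/hello")
print(resp.status_code)               # 200
print(resp.get_data(as_text=True))    # Hello World!
```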

Your container will also need to install the server's dependencies, which in this case is just Flask. To do that, we list Flask as a dependency in a file called requirements.txt, placed in the same directory as app.py:

Flask
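As an aside, an unpinned dependency like this will pull whatever Flask version is newest at build time. In a real project you'd typically pin a known-good version in requirements.txt so builds are reproducible (the version number below is purely illustrative):

```text
Flask==2.0.1
```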

Creating a Dockerfile and Docker Image for the Application

Docker is one of the most popular tools for building and running containerized images. A Dockerfile is a list of commands that defines how your container image should be initialized, configured and run; Docker reads the Dockerfile to build a container image.

First, create an empty text file in your project directory called Dockerfile (exactly that, with no extension). The folder structure should look like this:

Dockerfile
app/
    app.py
    requirements.txt

Every Dockerfile starts with a line that imports a base image. Base images come pre-installed with various niceties. You can probably find an image to begin with on the Docker Hub, regardless of your programming language or preferred operating system.

The Dockerfile to containerize the Flask server will use the latest version of Python. The line to define that looks like this:

# Use the latest Python image 
FROM python:latest

That's all we need for the initialization step; we can move on to the configuration. Next, we want to create a directory where our server files can go. We also need to install the dependencies from requirements.txt, just as we did on our local machine. Adding the following lines will perform these steps:

# Create and switch to a working directory for the application
WORKDIR /build

# Copy the "app" directory (including requirements.txt) into the working directory
COPY app/ /build

# Install the requirements
RUN pip3 install -r requirements.txt

These instructions copy the files from the app subfolder into a directory in the Docker image called /build, then install the dependencies using pip, Python's package manager.

Last, we need to run the server. We can do so by ending the file with the following lines:

# Run the Flask web API at start-up of the container
CMD ["python", "app.py"]
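Putting the snippets together, the whole Dockerfile is only a handful of lines (condensed slightly: WORKDIR creates the directory if it doesn't already exist, and copying app/ brings requirements.txt along with it):

```dockerfile
# Use the latest Python image
FROM python:latest

# Create and switch to a working directory for the application
WORKDIR /build

# Copy the "app" directory (including requirements.txt) into the working directory
COPY app/ /build

# Install the requirements
RUN pip3 install -r requirements.txt

# Run the Flask web API at start-up of the container
CMD ["python", "app.py"]
```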

Building the Docker Image

To build the Docker image, open a terminal to the directory that contains the “app” directory and the Dockerfile. Run the following command to build the Docker image:

docker build -t flaskwebapi .

The -t option provides a name, which we can use to refer to the image in the future. After you run the command above, you should see an output in the terminal similar to the screenshot below.

Docker image command output

To confirm that the Docker image was created successfully, run:

docker image ls

The output should be a table similar to the one below, showing information about the image like name and creation date:

Docker image run output table

Finally, let's run the image locally to verify that the server works. In the terminal, enter the following command:

docker run -p 5000:5000 flaskwebapi

Doing so runs the flaskwebapi Docker image we just created and exposes its port 5000 to be accessible from the host machine. When the image is running, open a browser window to http://localhost:5000/hello and you should see the classic “Hello World!”

Getting the Application Onto a Kubernetes Cluster

Now that we've created the Docker image and verified that the server works, it's time to start thinking about getting the application running on Kubernetes.

The first thing you will need to do is create a Kubernetes cluster. Many platforms offer managed Kubernetes, such as Amazon Elastic Kubernetes Service (EKS) or Google Kubernetes Engine (GKE). These services provide all sorts of configurability around regions, resource scaling and more. However, those environments are a bit advanced for a first attempt at containerizing an app. For this post, we'll use minikube, which runs Kubernetes locally.

Installing minikube is a breeze. When it's finished, start the cluster by typing minikube start in the terminal, then point your shell's Docker client at minikube's Docker daemon by running eval $(minikube docker-env). The start step takes some time to download and set up Kubernetes, so feel free to hydrate in the meantime. At the end of it all, you should receive this message:

kubectl is now configured to use "minikube" cluster and "default" namespace by default

Now, all we should need to do is deploy our image to minikube.

However, before doing that, we need to rebuild our image. Why? Because the first build went to your host machine's Docker daemon. After running eval $(minikube docker-env), the docker command talks to the daemon inside minikube instead, so the image needs to be rebuilt where the cluster can see it. You can do that with the same command as before:

docker build -t flaskwebapi .

There's one more addition we need to make. Every resource in Kubernetes requires a manifest, and a deployment counts as a resource because it describes how an image should run. The manifest format is a thoroughly documented set of key-value pairs, which we won't cover in depth here. For this tutorial, you can copy and paste the manifest below, and kubectl will create a Service and a Deployment:

echo "kind: Service
apiVersion: v1
metadata:
  name: flaskdemo
spec:
  selector:
    app: flaskdemo
  ports:
  - port: 5000
    targetPort: 5000
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: flaskdemo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: flaskdemo
  template:
    metadata:
      labels:
        app: flaskdemo
    spec:
      containers:
      - image: flaskwebapi
        name: flaskwebapi
        imagePullPolicy: Never
        ports:
        - containerPort: 5000
        resources: {}
" | kubectl apply -f -

Did it work? minikube comes with a dashboard to show you how your container fleet is performing. Of course, it's a bit much for this little app, but it can be helpful to get acquainted with just how much Kubernetes can do. Run minikube dashboard, and check out the accompanying metrics.

minikube metrics dashboard

Setting Up the Kong Ingress Controller

With the Flask web app now deployed to minikube, we can continue exploring more Kubernetes best practices locally. That way, we’ll prepare ourselves for when we need to move from VM to Kubernetes in production.

Our next exploration will involve security via ingress. Ingress is like a souped-up firewall: it lets you control precisely which HTTP calls are allowed into your cluster and how they're routed.

Raw Kubernetes primitives are relatively low-level, and that includes setting up an ingress controller by hand. Tools like Kong exist to make managing these services much easier.

To get Kong's Ingress Controller running on minikube, we'll need to deploy it to our cluster, just as we did with our image. Conveniently, kubectl can create resources directly from a manifest hosted on the web, which simplifies the deployment process. Run the following command locally to get started:

kubectl create -f https://bit.ly/k4k8s

Let's verify that this all went through:

kubectl get services -n kong

You should see some metadata, including names, ports and IP addresses like this:

NAME                      TYPE           CLUSTER-IP      EXTERNAL-IP
kong-proxy                LoadBalancer   10.100.78.79    <pending>
kong-validation-webhook   ClusterIP      10.111.65.213   <none>

The connection information is specific to the cluster itself. If your external IP is pending, as it is here, that means we need to expose it to our local machine so that we can connect to the ingress service. We can do that with the tunnel (https://minikube.sigs.k8s.io/docs/handbook/accessing/#using-minikube-tunnel) command. In a separate terminal window, run minikube tunnel.

Finally, we can get the hostname and port to connect to this ingress controller using the service (https://minikube.sigs.k8s.io/docs/commands/service/) command:

minikube service -n kong kong-proxy --url

You should get back a URL like http://127.0.0.1:64035/. The Kong Ingress Controller is now set up and ready to use with your environment.

The final thing to do is create an ingress configuration for our Flask API. Much like with our deployment, we need to define a manifest here. In our example, we'll just expose the Flask server and make it available for external requests:

echo "
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: flask
  annotations:
    kubernetes.io/ingress.class: kong
spec:
  rules:
  - http:
      paths:
      - path: /hello
        backend:
          serviceName: flaskdemo
          servicePort: 5000
" | kubectl apply -f -

If you call curl http://127.0.0.1:<port>/hello (using the port from the previous step), you should see it return "Hello World!"

The ingress manifest is the final arbiter of the traffic that flows into your Kubernetes cluster. The rules key, in particular, defines permitted protocols and paths, and it determines which backend service should handle each request. This is handy if you've designed your app as a set of microservices but want to expose a single gateway to clients outside the network.
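To illustrate that fan-out, a single ingress can route different paths to different backends. The manifest below is a hypothetical sketch, not part of this tutorial's deployment: orders and users are placeholder Service names.

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: fanout-example
  annotations:
    kubernetes.io/ingress.class: kong
spec:
  rules:
  - http:
      paths:
      - path: /orders
        backend:
          serviceName: orders   # hypothetical Service
          servicePort: 5000
      - path: /users
        backend:
          serviceName: users    # hypothetical Service
          servicePort: 5000
```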

The Path to Becoming Kubernetes Native

Getting an application to run is rarely an easy task, and deploying it onto a platform like Kubernetes for the first time can seem like a lot of effort. Once you've gone through the process a few times, though, the sequence stays largely the same, whether you're running Kubernetes locally, on bare metal or in the cloud.

After moving from VM to Kubernetes, managing your Kubernetes environment comes with its own challenges. That’s where services like Kong come in to make management easier, more reliable and faster with a single operating environment for containers, microservices and APIs.


If you have any additional questions, post them on Kong Nation. To stay in touch, join the Kong Community.

Now that you've successfully moved an application from VM to Kubernetes, you may find these other tutorials helpful:

  • Using Kong Kubernetes Ingress Controller as an API Gateway
  • Implement a Canary Release with Kong for Kubernetes and Consul
  • Observability for Your Kubernetes Microservices Using Kuma and Prometheus