Enterprise
September 7, 2021
6 min read

How to Develop a Cloud Native Infrastructure

Garen Torikian

More and more companies are eager to move their operations to the cloud. Yet, there's quite a bit of ambiguity about what moving to the cloud actually means. Is your business running in the cloud if you host your database on another platform, or if you rely on a third-party service to handle your payments? That's a good start, but there are many other aspects to consider when building a cloud native infrastructure.

Embracing a cloud native infrastructure requires you to rethink how to build, deploy and run your software.


The Cloud Native Computing Foundation (CNCF) defines a cloud native infrastructure as one with "[c]ontainers, service meshes, microservices, immutable infrastructure, and declarative APIs" as typical characteristics.

This begins with isolating services within your application and moving toward decoupled, secure microservices. A cloud native infrastructure also requires automation through continuous delivery. Finally, your ability to leverage containerization will go a long way toward moving you to the cloud. In this post, we'll take a closer look at each of these key features.

Beginning With Service Isolation

When it comes to architecting an app, there are often two competing organizational methods: monoliths and microservices. A monolith is a form of organizing software in a single codebase; every feature and aspect of your app is self-contained. Designing microservices involves setting up separate, smaller applications. Each small app controls a single aspect of your app's functionality.

Transitioning From a Monolith to Microservices

Cloud native infrastructure doesn't need to be a collection of microservices. But sooner or later, you might find yourself needing to scale out one part of your application. Monoliths are easy to get started with, but after a certain growth stage, microservices become easier to administer and configure.

If you change one microservice, the worst-case scenario would be taking down a single feature. In a monolith, a change could impact the entire application.


This almost forces you to think about application design in different terms—redundancy, message queuing, anticipating and recovering from errors—aspects that a monolithic application obscures. In other words, a failure in one service doesn't cascade to the others that make up your application.
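To make the failure-isolation idea concrete, here's a minimal sketch in Python. The service names are hypothetical; the point is that a call to a failing microservice retries and then degrades gracefully instead of taking the whole request down:

```python
def call_with_fallback(fetch, fallback, retries=3):
    """Call a possibly-failing service function, retrying a few times
    before returning a degraded default, so one failing microservice
    doesn't take the whole request down with it."""
    for _ in range(retries):
        try:
            return fetch()
        except ConnectionError:
            continue  # in real code: back off, log, emit a metric
    return fallback

# Simulate a (hypothetical) recommendations microservice that is down.
def broken_recommendations():
    raise ConnectionError("recommendations service unreachable")

# The page still renders -- just without personalized results.
page = {
    "cart": ["widget"],  # the core checkout service still works
    "recommended": call_with_fallback(broken_recommendations, fallback=[]),
}
print(page["recommended"])  # -> []
```

In a monolith, the equivalent of that `ConnectionError` is often an unhandled exception that brings down the whole process.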


Learn more about the process for transitioning from a monolithic to a microservices-based architecture. Download this eBook >>

Facilitating Communication Between Microservices

Microservices communicate by passing messages and data through APIs. These APIs aren't typically exposed to the public. However, you can set up an API gateway—through Kong Konnect, for example—to route external requests to internal services.

An API gateway provides a "big picture" overview of how services interact with one another and the outside world.


The communication protocol used in a cloud native infrastructure doesn't matter much. You could use REST, gRPC, GraphQL or anything else. What does matter is picking a consistent standard and ensuring all commands pass through these channels.
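As a sketch, routing a public path to an internal microservice might look like this in Kong's declarative configuration format (the service name, URL and path here are hypothetical):

```yaml
_format_version: "2.1"
services:
  - name: orders-service            # hypothetical internal microservice
    url: http://orders.internal:8080
    routes:
      - name: orders-route
        paths:
          - /orders                 # public path the gateway exposes
```

The gateway becomes the single, consistent channel: external clients only ever see `/orders`, while the internal service address stays private.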

Keeping Security in Mind

Isolating your app's functionality into microservices provides other security benefits as well. Earlier, I mentioned how buggy code would only affect one microservice. The same is true for larger problems, such as security issues. If an external agent gains control of one server, they're less likely to hop to another server due to network isolation rules. You can apply these rules across your entire infrastructure and manage them from one location.
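Network isolation rules of this kind can themselves be expressed declaratively. As an illustration (service and label names are hypothetical), a Kubernetes NetworkPolicy that only allows the gateway to reach a sensitive service might look like:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: payments-isolation
spec:
  podSelector:
    matchLabels:
      app: payments               # the service being protected
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: api-gateway    # only the gateway may connect
```

Because the rule lives in a versioned file rather than in ad hoc firewall settings, you can review, test and apply it consistently across your infrastructure.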

In a true cloud native infrastructure, even your database acts as a microservice, which keeps services from storing user data locally. Kong Mesh and the CNCF's open source Kuma provide observability into traffic rules, logs and permissions, helping you maintain the health of your cluster.


How can a service mesh help you achieve bank-grade security for microservices? Find out in this eBook >>

Automating With Continuous Delivery

Deployment and delivery for a monolithic application can be cumbersome, brittle and time-consuming. Moving toward a cloud native infrastructure means moving toward a DevOps approach. That means automating all aspects of application delivery.

Independently Updated Microservices

With your services separated, you can now update each one independently—another advantage of microservices over the monolith. A monolithic app might have thousands of tests that take time to complete, and deployment can also take a long time as services restart and dependencies update. With microservices, your application separates into individual code repositories. Updating one aspect of your app is much faster because everything is scoped and isolated: your test suites are smaller, and language dependencies are fewer.
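Each repository can then carry its own small pipeline. A minimal sketch in GitHub Actions syntax (the workflow name, `make` targets and branch policy are hypothetical):

```yaml
name: orders-service-ci
on: [push]
jobs:
  test-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: make test      # only this service's small test suite
      - run: make deploy    # ships just this service, not the whole app
        if: github.ref == 'refs/heads/main'
```

Because the pipeline covers one service, a green build ships in minutes rather than waiting on a monolith-wide suite.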

Infrastructure-as-Code

You've likely already embraced a DevOps mindset, which is a requirement for a cloud native application. In this setup, all aspects of infrastructure management—deploying the code, load balancing your servers, scaling your resources, etc.—are performed with scripts. This turns your processes into testable, versionable pieces of code, and it helps ensure that applications are modified and upgraded consistently.
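For instance, in an infrastructure-as-code tool such as Terraform, a pool of servers becomes a reviewable, versionable text file rather than a manual process (the provider, AMI ID and instance size below are placeholders):

```hcl
resource "aws_instance" "api" {
  count         = 3                        # scale by editing one number
  ami           = "ami-0123456789abcdef0"  # placeholder image id
  instance_type = "t3.micro"

  tags = {
    Name = "api-${count.index}"
  }
}
```

A change to capacity is now a code review and a diff, applied the same way every time.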

Interconnected Scaling and Monitoring

For a cloud native app, all of your datastores, servers and other resources should be able to expand (or shrink) at any moment. If you run out of disk space on one microservice, the process of enlarging the disk should be repeatable for any other microservice, too. Taken far enough, infrastructure management ties directly into your monitoring software: as your monitoring solution detects the need for more resources, it alerts your infrastructure management system to scale up accordingly.

For example, imagine your monitoring system sees increasing traffic for a certain service. With automation scripts in place, the monitoring alert can trigger a deployment of that service to additional nodes to handle the uptick in activity. Or suppose a service starts throwing errors: the monitoring system can lock down that service for troubleshooting while spinning up a replacement to handle subsequent requests.

All of this is possible because each of your microservices focuses on one task. You can modify the behavior of each microservice through API calls.
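This monitor-then-scale loop is what a Kubernetes HorizontalPodAutoscaler implements: the metrics pipeline observes load, and the autoscaler adjusts the replica count. A minimal sketch (the deployment name and thresholds are illustrative):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: orders-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: orders              # hypothetical microservice deployment
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add pods above 70% average CPU
```

Because the service does one task, "scale it" has an unambiguous meaning: run more copies of it.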

Leveraging Containerization

With these requirements in mind, containerization becomes a reliable approach to setting up and running your app. Running your application in containers lets you build software with greater consistency, elasticity and predictability. The operating system and any system packages are explicitly defined and managed as code through Docker: a container is built from a Dockerfile, and Docker Compose coordinates starting multiple containers.
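As a minimal sketch (the base image, service names and ports are hypothetical), a Dockerfile pins the environment, and a Compose file starts the app alongside its database:

```dockerfile
# Dockerfile: the runtime environment, versioned with the code
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
CMD ["python", "app.py"]
```

```yaml
# docker-compose.yml: start the app and its database together
services:
  web:
    build: .              # built from the Dockerfile above
    ports:
      - "8000:8000"
    depends_on:
      - db
  db:
    image: postgres:16    # the database runs as its own container
```

Every developer and every environment builds from the same definition, which is where the consistency and predictability come from.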

In production, however, orchestrating your microservice containers will lead you to Kubernetes as a way to manage all of your nodes. It comes with many advantages out of the box. For example, if one node fails, Kubernetes can automatically replace it with a healthy one; Kubernetes can handle load balancing across your network, and it can scale out resources based on usage.
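Self-healing follows from declaring desired state rather than issuing commands. A sketch of a Deployment (image, names and probe endpoint are hypothetical) that Kubernetes keeps at three healthy replicas:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders
spec:
  replicas: 3                       # Kubernetes replaces any pod that dies
  selector:
    matchLabels:
      app: orders
  template:
    metadata:
      labels:
        app: orders
    spec:
      containers:
        - name: orders
          image: example/orders:1.0   # placeholder image
          livenessProbe:              # how Kubernetes decides "healthy"
            httpGet:
              path: /healthz
              port: 8080
```

If a node fails, the scheduler recreates the lost pods elsewhere to get back to three replicas—no operator intervention required.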

The interesting thing about Kubernetes is that it's driven by APIs and CLI tools. This is immensely helpful for automation, but less useful if you're a human trying to get an overall picture of your network's health and operation. This is a gap that Kong Konnect fills. Kong Konnect (and most Kong services!) keeps Kubernetes' open protocols in mind; rather than replacing Kubernetes, Kong works alongside it. The main focus of Kong Konnect is ensuring services are reliable and performant, which it does by providing observability into services—insights into uptime and traffic patterns, as well as the devices your users connect with.

Learn More

By 2022, more than 90% of enterprise companies will rely on cloud infrastructures. Several platform-as-a-service (PaaS) companies—like Azure and Google Compute—encourage developers to take a cloud native approach when designing their applications. Moving to a cloud native architecture should be a priority for your teams.

While it may seem intimidating to make the move, you can stand on the shoulders of giants. We have an article that provides more conceptual information on what a cloud native lifecycle looks like. When you're finished with that, you can build a demo app in under ten minutes using our Kong Konnect quickstart to see how advantageous a cloud native infrastructure can be.

Topics: Cloud, API Development
