Enterprise
September 7, 2021
6 min read

How to Develop a Cloud Native Infrastructure

Garen Torikian

More and more companies are eager to move their operations to the cloud. Yet there's quite a bit of ambiguity about what moving to the cloud actually means. Is your business running in the cloud when you host your database on another platform, or when you rely on a third-party service to handle your payments? That's a good start, but there are many other aspects to consider when building a cloud native infrastructure.

Embracing a cloud native infrastructure requires you to rethink how to build, deploy and run your software.


The Cloud Native Computing Foundation (CNCF) defines cloud native infrastructure as typically characterized by "[c]ontainers, service meshes, microservices, immutable infrastructure, and declarative APIs."

This begins with isolating services within your application and moving towards decoupled and secure microservices. Further, a cloud native infrastructure requires automation through continuous delivery. Lastly, your ability to leverage containerization will go a long way toward moving you to the cloud. In this post, we'll take a closer look at these key features of a cloud native infrastructure.

Beginning With Service Isolation

When it comes to architecting an app, there are often two competing approaches: the monolith and microservices. A monolith organizes software in a single codebase; every feature and aspect of your app is self-contained. Designing microservices involves setting up separate, smaller applications, each controlling a single aspect of your app's functionality.

Transitioning From a Monolith to Microservices

Cloud native infrastructure doesn't need to be a collection of microservices. But sooner or later, you might find yourself needing to scale out one part of your application. Monoliths are easy to get started with, but after a certain growth stage, microservices become easier to administer and configure.

If you change one microservice, the worst-case scenario would be taking down a single feature. In a monolith, a change could impact the entire application.


This almost forces you to think about application design in different terms—redundancy, message queuing, anticipating and recovering from errors—aspects that a monolithic application obscures. In other words, a failure in one service doesn't take down the others that make up your application.


Learn more about the process for transitioning from a monolithic to a microservices-based architecture. Download this eBook >>

Facilitating Communication Between Microservices

Microservices communicate by passing messages and data through APIs. Those APIs typically aren't exposed to the public; when you do need to accept external requests, you can set up an API gateway through Kong Konnect to route them to your internal services.

An API gateway provides a "big picture" overview of how services interact with one another and the outside world.
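As a concrete illustration, here's a minimal sketch that registers an internal service with a Kong Gateway and exposes it on a public path through the gateway's Admin API. The service name, upstream address, and Admin API URL are all hypothetical.

```python
import requests

ADMIN_URL = "http://localhost:8001"  # assumption: a local Kong Admin API

# Register an internal microservice as an upstream service.
requests.post(f"{ADMIN_URL}/services", json={
    "name": "orders",                      # hypothetical service name
    "url": "http://orders.internal:8080",  # hypothetical upstream address
}).raise_for_status()

# Expose it to external clients under a public path.
requests.post(f"{ADMIN_URL}/services/orders/routes", json={
    "name": "orders-route",
    "paths": ["/orders"],
}).raise_for_status()
```

External clients now reach the service through the gateway at /orders, while the orders service itself stays on the private network.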


The communication protocol used in a cloud native infrastructure doesn't matter much. You could use REST, gRPC, GraphQL or anything else. What does matter is using a consistent standard and ensuring all commands pass through these channels.

Keeping Security in Mind

Isolating your app's functionality into microservices provides other security benefits as well. Earlier, I mentioned how buggy code would only affect one microservice. The same is true for larger problems, such as security issues. If an external agent gains control of one server, they're less likely to hop to another server due to network isolation rules. You can apply these rules across your entire infrastructure and manage them from one location.
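On Kubernetes, for instance, you can express those network isolation rules declaratively and manage them from one place. Below is a minimal sketch using the official Python client to create a NetworkPolicy that only lets gateway pods reach the orders service; the namespace, names, and labels are all assumptions.

```python
from kubernetes import client, config

config.load_kube_config()  # assumption: a kubeconfig is available locally

# Allow ingress to pods labeled app=orders only from pods labeled
# app=orders-gateway; other in-cluster traffic to them is denied.
policy = client.V1NetworkPolicy(
    metadata=client.V1ObjectMeta(name="orders-isolation", namespace="default"),
    spec=client.V1NetworkPolicySpec(
        pod_selector=client.V1LabelSelector(match_labels={"app": "orders"}),
        policy_types=["Ingress"],
        ingress=[client.V1NetworkPolicyIngressRule(
            _from=[client.V1NetworkPolicyPeer(
                pod_selector=client.V1LabelSelector(
                    match_labels={"app": "orders-gateway"}
                )
            )]
        )],
    ),
)
client.NetworkingV1Api().create_namespaced_network_policy("default", policy)
```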

In a true cloud native infrastructure, even your database acts as a microservice, which keeps individual services from storing user data locally. Kong Mesh and the CNCF's open source Kuma provide observability into traffic rules, logs and permissions to help keep your cluster secure.


How can a service mesh help you achieve bank-grade security for microservices? Find out in this eBook >>

Automating With Continuous Delivery

Deployment and delivery for a monolithic application can be cumbersome, brittle and time-consuming. Moving toward a cloud native infrastructure means moving toward a DevOps approach. That means automating all aspects of application delivery.

Independently Updated Microservices

With your services separated, you can now update each one independently. Here again is another advantage of microservices over the monolith: a monolithic app might have thousands of tests that take a long time to run, and its deployments can be slow as services restart and dependencies update. With microservices, your application is split into individual code repositories, so updating one aspect of your app is much faster: everything is scoped and isolated, your test suites are smaller, and language dependencies are fewer.

Infrastructure-as-Code

You've likely already embraced a DevOps mindset, which is a requirement for a cloud native application. In this setup, all aspects of infrastructure management—deploying the code, load balancing your servers, scaling your resources, etc.—are performed with scripts. This turns your processes into testable, versionable pieces of code, and it helps ensure that applications are modified and upgraded consistently.
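Any IaC tooling fits this model. As a minimal sketch, here's a Python wrapper that applies a versioned Terraform configuration non-interactively; the ./infra directory and the choice of Terraform are assumptions, not a prescription.

```python
import subprocess

def apply_infrastructure(workdir: str) -> None:
    """Initialize, plan, and apply a Terraform configuration as one
    scripted, repeatable step, failing fast if any stage errors out."""
    for args in (["init"], ["plan", "-out=tfplan"], ["apply", "tfplan"]):
        subprocess.run(["terraform", *args], cwd=workdir, check=True)

if __name__ == "__main__":
    apply_infrastructure("./infra")  # assumption: ./infra holds your .tf files
```

Because the script itself lives in version control, every infrastructure change is reviewable and reproducible, just like application code.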

Interconnected Scaling and Monitoring

For a cloud native app, all of your datastores, servers and other resources should be able to expand (or shrink) at any moment. If you run out of disk space on one microservice, the process of enlarging the disk should be repeatable for any other microservice, too. Infrastructure management can even tie into your monitoring software: as your monitoring solution detects the need for more resources, it alerts your infrastructure management system to scale up the necessary resources.

For example, imagine your monitoring system sees increasing traffic for a certain service. With scripts in place for automated processes, the alert can trigger a deployment of that service to additional nodes to handle the uptick in activity. Or suppose a service starts throwing errors: the monitoring system can trigger the process for locking that service down for troubleshooting while spinning up a separate instance to handle subsequent requests.

All of this is possible because each of your microservices focuses on one task. You can modify the behavior of each microservice through API calls.
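To make that concrete, here's a minimal sketch of a webhook receiver that scales a Kubernetes deployment when a monitoring alert fires. The alert payload shape, endpoint, service name, and namespace are all assumptions.

```python
from flask import Flask, request
from kubernetes import client, config

app = Flask(__name__)
config.load_incluster_config()  # assumption: this runs inside the cluster
apps = client.AppsV1Api()

@app.route("/alerts", methods=["POST"])
def scale_on_alert():
    # Hypothetical alert payload: {"service": "orders", "replicas": 5}
    alert = request.get_json()
    apps.patch_namespaced_deployment_scale(
        name=alert["service"],
        namespace="default",
        body={"spec": {"replicas": alert["replicas"]}},
    )
    return "", 204

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```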

Leveraging Containerization

With the above requirements in mind, containerization becomes a more reliable approach for setting up and running your app. Running your application in a container lets you build software with greater consistency, elasticity and predictability. With Docker, the operating system and any system packages are explicitly defined and managed as code: a container image is built from a Dockerfile, and docker-compose coordinates starting multiple containers together.
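You can drive this workflow from a script, too. Here's a minimal sketch using the Docker SDK for Python to build an image from a local Dockerfile and run it detached; the image tag and port are hypothetical.

```python
import docker  # assumption: the Docker SDK for Python is installed

client = docker.from_env()

# Build an image from the Dockerfile in the current directory.
image, _ = client.images.build(path=".", tag="myapp:latest")

# Run it detached, publishing the app's port to the host.
container = client.containers.run(
    "myapp:latest", detach=True, ports={"8000/tcp": 8000}
)
print(f"started container {container.short_id}")
```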

In production, however, orchestrating your microservice containers will lead you to Kubernetes as a way to manage all of your nodes. It comes with many advantages out of the box: if one node fails, Kubernetes can automatically replace it with a healthy one; it can handle load balancing across your network; and it can scale resources out based on usage.

The interesting thing about Kubernetes is that it's driven by APIs and CLI tools. This is immensely helpful for automation, but less useful if you're a human trying to get an overall picture of your network's health and operation. This is a gap that Kong Konnect fills. Kong Konnect (and most Kong services!) keeps Kubernetes' open protocols in mind: rather than replacing Kubernetes, Kong works alongside it. The main focus of Kong Konnect is to ensure services are reliable and performant, and it does this by providing observability into services: insights on uptime, traffic patterns, and the devices your users connect with.

Learn More

More than 90% of enterprises are expected to rely on cloud infrastructure by 2022. Major cloud providers—like Microsoft Azure and Google Cloud—encourage developers to take a cloud native approach when designing their applications. Moving to a cloud native architecture should be a priority for your teams.

While it may seem intimidating to make the move, you can stand on the shoulders of giants. We have an article that provides more conceptual information on what a cloud native lifecycle looks like. When you're finished with that, you can build a demo app in under ten minutes using our Kong Konnect quickstart to see how advantageous a cloud native infrastructure can be.

Topics: Cloud | API Development