Enterprise
June 3, 2021
4 min read

Containerization in a Cloud Native World: An Interview With Reza Shafii

Josh Molina

Multi-cloud infrastructure is changing the way companies approach their software architecture. What started solely as gateway traffic management has evolved into full lifecycle API management.

I recently sat down with Reza Shafii, Kong's VP of Product, for a three-part blog series exploring how full lifecycle service management ties into the concept of cloud native. Each part focuses on one trend: containerized infrastructure, microservices and multi-cloud. In this post, we will focus on containerization in a cloud native world.

Why are organizations migrating to containers?

Reza: As a company, you likely have many existing applications. How do you know if you should move them or not? There is a saying in the enterprise software world: "We don’t throw out the garbage - we just layer on top of it."

So while a good chunk of applications are going to stay put, some are naturally going to migrate. That is because, for those applications, the benefits of a containerized infrastructure - saving costs and moving faster - outweigh the benefits of leaving them alone.

These days, almost every organization is choosing to containerize infrastructure by default. Over time, there are going to be even more containerized applications. Whatever lives in the container needs to be fully configurable through an API-driven lifecycle management model built for containerized infrastructure.

What are the benefits of moving to containers?

Reza: The reasons are many. For one, containers are lightweight compared to virtual machines. They are also platform agnostic, meaning you can deploy them anywhere - in the cloud or on-prem. This gives enterprises the flexibility to choose a vendor freely (no vendor lock-in) based on geographic location, budget needs and so on.

Containerized apps can easily scale up and down since spinning up a new instance of your app is fast and cheap. That elasticity fully supports your microservices strategy, and Kubernetes simplifies the management of all these containerized applications.
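To make that concrete, here is a minimal sketch of what scaling looks like on Kubernetes. The name my-app and its image are hypothetical placeholders, not anything from the interview - the point is simply that the instance count is one declared number:

    # deployment.yaml - a minimal Kubernetes Deployment sketch;
    # "my-app" and "my-app:1.0" are placeholders for illustration.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-app
    spec:
      replicas: 3          # scale up or down by changing this number
      selector:
        matchLabels:
          app: my-app
      template:
        metadata:
          labels:
            app: my-app
        spec:
          containers:
            - name: my-app
              image: my-app:1.0

Alternatively, a single command like "kubectl scale deployment my-app --replicas=10" spins up seven more instances on the fly.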

Containers also enable the transition to microservices. With containerization, development teams can break up existing monoliths and build new applications as microservices from the start. This gives them the flexibility to change faster and to save costs.

Why does containerization save costs?

Reza: The best analogy I have heard on this is that it's like a Tetris game. Think of the full rectangle as your total compute capacity. Kubernetes plays the perfect Tetris game with the pieces that are falling to make sure you’re always packed just right. That way you take full advantage of your overall compute capacity.

Whereas if you just carve up virtual machines - you put application "A" on five VMs and application "B" on the others - nothing is playing the Tetris game for you. That can cause your cloud provider bill to go through the roof. So containerization saves costs while allowing you to be more agile, and it provides a more consistent surface area across multiple cloud providers as a bonus.
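In Kubernetes terms, the "Tetris pieces" are the CPU and memory requests each pod declares; the scheduler uses them to pack pods onto nodes. A minimal sketch, with a made-up pod name and illustrative numbers:

    # Each pod declares the resources it needs; the Kubernetes
    # scheduler packs pods onto nodes based on these requests -
    # the "Tetris" move. "app-a" is a hypothetical name.
    apiVersion: v1
    kind: Pod
    metadata:
      name: app-a
    spec:
      containers:
        - name: app-a
          image: app-a:1.0   # placeholder image
          resources:
            requests:
              cpu: "250m"    # a quarter of a CPU core
              memory: "256Mi"
            limits:
              cpu: "500m"
              memory: "512Mi"

Four such pods fit on a single one-core node, whereas a VM-per-application model would leave that spare capacity stranded.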

What are the limitations of legacy technology when it comes to containerized infrastructure?

Reza: Legacy technology solutions expect you to change the state of the container for every change. This is done either by going through a web UI - a manual, click-heavy approach that cannot be automated - or by going through the actual container generation process itself and redeploying.

This doesn’t work in the containerized infrastructure world.

As API gateways expand and contract in a dynamic, cloud native environment like Kubernetes, the question remains: Can they adjust themselves? How do they reconfigure themselves? This is where declarative, configuration-driven change management - what we call APIOps for the service lifecycle aspect - comes in, and most previous-generation gateways don't do that well.

In contrast, with Kong Gateway, a simple scale command in Kubernetes can expand one gateway into five or ten. That elastic behavior happens without a hitch because Kong is container native and fully driven by declarative configuration, allowing new pods on Kubernetes to pick up the right configuration on the fly.
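For example, a DB-less Kong Gateway deployment reads its entire configuration from one declarative file, so every new pod comes up fully configured. A minimal sketch - the service and route names below are made up for illustration:

    # kong.yml - Kong declarative configuration (DB-less mode);
    # every new gateway pod loads this file and is immediately
    # ready to serve. "orders-service" is a hypothetical upstream.
    _format_version: "2.1"
    services:
      - name: orders-service
        url: http://orders.internal:8080
        routes:
          - name: orders-route
            paths:
              - /orders

Scaling is then just "kubectl scale deployment kong --replicas=10"; each new pod picks up the same declarative configuration on the fly.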

How does this tie into the concept of Cloud Connectivity?

Reza: Organizations have more and more pieces of their overall applications that need to talk to each other over remote interfaces. These pieces need to talk securely, reliably and at scale. That’s what we call connectivity logic.

So how do you enable the creation of this connectivity logic so that retries are handled correctly and security is injected consistently and reliably? That's where full lifecycle API management solutions come in.
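As a rough sketch of what that connectivity logic can look like in declarative form, a Kong service entry can set retry behavior while a plugin injects authentication - the service name and URL below are hypothetical:

    # Connectivity logic as configuration: retries set on the
    # service, security injected via a plugin, applied the same
    # way everywhere. "payments-service" is a made-up example.
    _format_version: "2.1"
    services:
      - name: payments-service
        url: http://payments.internal:8080
        retries: 5             # retry failed upstream attempts
        plugins:
          - name: key-auth     # built-in Kong authentication plugin

Because the same configuration travels with every gateway instance, retries and authentication behave identically across clusters and clouds.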

You need to do this in a containerized world, a microservices world and a multi-cloud world. Kong is the only player that can address the full lifecycle of your API management needs in a cloud native way while also acting as a strategic long-term partner.

In the next blog post in this series, we will explore the second trend in cloud native: why organizations are creating more and more microservices and how that is changing application infrastructure by creating more APIs and more connectivity.