March 7, 2022

Kong vs. Apigee: Flexible Is the New Strong

Nishikant Singh

The API management space is changing - fast. In the past couple of years alone, we've seen huge changes in the deployment patterns that our customers are adopting. In the past, when the use cases were fairly simple, organizations would deploy an API gateway as a SaaS monolith in the cloud, sitting at the edge of the network. They did this because it was the best option available at the time, and the first wave of API management vendors like Apigee had a solution that could support it.

However, the world has changed considerably since then. The number of APIs has grown exponentially (over 100% year-over-year growth, according to Gartner Research), and APIs are no longer exposed only at the edge of the network but internally too. The result is a need to deploy your API gateway in the cloud AND fully on-premises AND anywhere in between, supporting all of the traditional, new and emerging API interaction patterns, such as hybrid deployments.

In this blog post, I'll walk through why it's important that your API gateway provides the flexibility and agility to grow with your infrastructure and business as things change. This is one of the key reasons why we built Kong - to fill the gaps left by first-wave vendors like Apigee. Where your API gateways are deployed, how they are deployed and how they are configured might be obvious now, but in 12 or 18 months these are likely to be very, very different!

Check out the entire Kong vs. Apigee Report for a complete overview of all features and capabilities.

Your Gateway Needs to Run Where Your APIs Do

The first thing I think about when working with our customers to plan an API management platform deployment is where we are going to deploy it. Every customer utilizes different infrastructure, whether that be bare metal, virtual machines, containers on Kubernetes or serverless platforms. It is critical that the same API gateway runtime can be deployed on all of these platforms - it is no longer acceptable to run one gateway runtime for air-gapped networks or countries with strict data residency laws and a different one on your public cloud infrastructure!

An estate of completely different API gateway runtimes fragments your API program: separate developer portals, inconsistent observability and duplicated CI/CD pipelines, and you often have to accept the lowest common denominator of functionality across the different gateways.

Many customers are on a journey with their infrastructure. They might start by deploying their API gateways on virtual machines with a goal of moving them to Kubernetes in 18 months. Most commonly, this means starting on-prem, then moving to the cloud, and then on towards multi-cloud. Kong provides unbeatable flexibility to support this journey.

We allow customers to first deploy onto bare metal, virtual machines or Docker, and then take that same runtime and deploy it on their preferred distribution of Kubernetes, on-prem and across one or more clouds. With traditional API management platforms like Apigee, we've seen customers who simply couldn't achieve this degree of deployment control, severely jeopardizing their infrastructure journey.

Unlike many traditional API management platforms, Kong is independent of the public cloud vendors, which allows it to be truly cloud-agnostic. Cloud-agnostic means that you have the flexibility to deploy the Kong platform on any cloud of your choosing - your private cloud, GCP, AWS or Azure - leveraging their container management, managed Kubernetes or serverless platforms. More importantly, it means that you can deploy in all of these environments without compromising on functionality, performance or security.

No matter which public cloud vendor you work with, Kong can seamlessly integrate with the native services they offer - something you simply will not get from an API management platform tied to one of the major public cloud vendors, as Apigee is to GCP. For example, API management platforms typically need a state store deployed alongside the gateways to hold your API configuration, consumer credentials, configuration metadata and so on.

Kong relies on PostgreSQL as its state store (although it can also run without a database!) and can leverage any of the managed PostgreSQL offerings, such as AWS RDS Aurora, GCP's Cloud SQL or Azure Database for PostgreSQL. Alternative solutions like Apigee require you to run and manage your own Cassandra cluster just because you want to distribute your hybrid API gateway away from GCP - you should be able to leverage managed database offerings to reduce the overall maintenance overhead.
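
And if you do go database-less, Kong's entire configuration lives in one declarative file that is loaded at startup. Here is a minimal sketch - the service name and upstream URL are placeholders, and the format version assumes a Kong Gateway 2.x release:

```yaml
# kong.yml - a minimal declarative configuration for DB-less mode.
# Enable it by setting database = off and declarative_config = kong.yml
# in kong.conf (or the equivalent KONG_DATABASE / KONG_DECLARATIVE_CONFIG
# environment variables).
_format_version: "2.1"

services:
  - name: orders-service               # hypothetical upstream service
    url: http://orders.internal:8080
    routes:
      - name: orders-route
        paths:
          - /orders
```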

We also keep things simple. Kong runs on any CNCF-conformant version of Kubernetes (v1.19+ at the time of writing), meaning that you are free to use GKE, VMware Tanzu, AKS, EKS, DigitalOcean Kubernetes, etc.

Unlike Apigee hybrid, there is no need for any GCP Anthos abstraction layer, which adds unnecessary bloat and complexity when you simply want to run your API gateways on Kubernetes. If you do already have GCP Anthos deployed, then Kong can of course run on your Anthos cluster without issue!
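
As a sketch of how lightweight this can be, here is a minimal values file for the official kong/kong Helm chart - the image tag is illustrative, and you would tune replicas, resources and so on for your environment:

```yaml
# values.yaml - a minimal sketch for installing Kong on any
# CNCF-conformant cluster via `helm install kong kong/kong -f values.yaml`.
image:
  repository: kong
  tag: "2.8"                  # illustrative tag - pin to your tested version

# Run the bundled ingress controller so Kong can act as the cluster Ingress.
ingressController:
  enabled: true

# DB-less mode: no Postgres to operate inside the cluster.
env:
  database: "off"

# Expose the proxy; switch to NodePort or ClusterIP to suit your setup.
proxy:
  type: LoadBalancer
```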

Deployment, your way

Once you have selected the infrastructure you will deploy to, the next decision is how to deploy your API management platform on that infrastructure. Unlike Apigee, Kong offers a huge array of options, meaning the gateway can be deployed in a manner that makes sense for your particular use case and environment.

"The team appreciated Kong's deployment flexibility, performing well across cloud, on-premise and hybrid scenarios. In the past, we had tried a cloud-based API management solution. We sunsetted that because it was costly and not aligned with our requirements," said Mike Shade, DevOps engineer. "Most of our environment is on-premise, but due to the size of its configuration file, we saw a lot of latency when deploying changes. After a short POC, the team moved into implementing Kong as its API management solution."

You may start with your gateway deployed on a single node but then plan to move to different patterns as your API program gains traction. Here are some of the patterns you may then consider moving towards:

  • Distributed Microgateways: Many traditional API management platforms give you the option of deploying their full solution or a completely different microgateway runtime. Kong easily supports the microgateway pattern because Kong IS a microgateway (the Kong binary is ~30MB), meaning that the configuration is portable and you can leverage the same CI/CD pipelines for pushing it.
  • Hybrid: With the complete separation of the control plane (CP) and data plane (DP), Kong Gateway fully supports hybrid deployments. In this context, the control plane is responsible for administration tasks, while the data plane is exclusively used by your API consumers. Kong gives you the flexibility to self-host your control plane OR to have Kong host this for you as SaaS. This is especially useful in tightly regulated environments where even analytics or configuration metadata can't leave the data center or the country.
  • Kubernetes Ingress: As organizations move towards running their microservices on Kubernetes, there is usually a lot of excitement around the fact that Kong can be deployed as a Kubernetes Ingress, routing and managing the API traffic into your cluster. Unlike many of the traditional API management platforms, which require Istio or another Kubernetes ingress to front their API gateway, Kong Gateway can simplify the architecture significantly by providing both the Ingress and API gateway functionality, as sketched after this list.

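To make that last pattern concrete, here is a sketch of a standard Kubernetes Ingress resource served by Kong - it assumes the Kong Ingress Controller is installed with an ingress class named kong, and the hostname and backend service are placeholders:

```yaml
# A plain Kubernetes Ingress fronted by Kong - no extra proxy layer needed.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: orders-ingress
spec:
  ingressClassName: kong             # matches the Kong Ingress Controller
  rules:
    - host: api.example.com          # placeholder hostname
      http:
        paths:
          - path: /orders
            pathType: Prefix
            backend:
              service:
                name: orders-service # hypothetical in-cluster service
                port:
                  number: 8080
```
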
The great thing about all of these different deployment patterns is that they can all leverage the same API runtime binary, meaning that the configuration that you create can be deployed to any of these - no more having to refactor your API proxy code just because you are changing the deployment pattern. The same configuration will function whether you have configured the platform as a microgateway, Kubernetes ingress or centralized gateway in the cloud.

This is a major shift away from how you configure Google's API offerings - you will need to create and manage completely different configurations, with differing functionality, across Apigee X, Apigee OPDK, Apigee Adapter for Envoy, Apigee Microgateway, GCP Cloud Endpoints or GCP API Gateway! This nicely segues into how we go about configuring the Kong platform.

Tailor configurations to your needs

All API management platforms on the market offer a base set of functionality through policies or plugins, which can be implemented to deliver capabilities like API key authentication, caching, etc. Kong provides over 60 plugins out of the box, so the majority of API management requirements are catered for. The list of bundled plugins also includes a number of plugins supporting modern capabilities such as gRPC, GraphQL and Kafka.

The inclusion of these supported plugins means that, unlike with Apigee, you do not need to develop such capabilities from scratch with custom code just to extend the platform to support modern requirements.

My particular favorite is the OIDC plugin, which allows you to provide a fully OIDC-compliant API interface to your consumers by simply filling in four fields (although the plugin is extensively configurable, with 200+ options, giving you the flexibility to integrate with any OIDC-compliant identity server in your organization!). No more wrestling with OAuth 2.0 policies and trying to trick them into supporting OIDC, as is often the case with some of the legacy API management vendors.
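
As a sketch of just how little is involved, this is roughly what attaching the plugin looks like in declarative configuration - the issuer URL and client credentials are placeholders for your own identity provider, and exact option names can vary by plugin version:

```yaml
# Protecting a service with the OpenID Connect plugin - a handful of
# fields is often all it takes.
services:
  - name: orders-service                 # hypothetical service
    url: http://orders.internal:8080
    plugins:
      - name: openid-connect
        config:
          issuer: https://idp.example.com/.well-known/openid-configuration
          client_id:
            - my-client-id               # placeholder
          client_secret:
            - my-client-secret           # placeholder
          auth_methods:
            - authorization_code
```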

We recognize that many organizations have their own edge cases, so we provide tooling to quickly and easily build your own Kong plugins if required. The Kong open source community is extremely active, and hundreds of additional plugins written by the community are available as open source. If you want to build your own, you can use the provided plugin development kits, which allow you to implement your own functionality in JavaScript, Python, Go and Lua.

Watch this space for more extensibility through WasmX, where we will be bringing the power of WebAssembly standards into Kong Gateway - enabling Kong to support any Envoy filter as a plugin out of the box, plus the ability to write new plugins in even more programming languages!

This level of flexibility in extending the functionality of your API gateways is only possible when open source is at the heart of the product. An active community of contributors is simply not available when leveraging products like Apigee, which are built on proprietary code.

Often when configuring API gateways, it's not really about what you configure them to do but how you actually deliver that configuration to the platform. One of the really nice things about Kong is that we allow you to configure your gateway in a declarative fashion, in the same way that you configure your applications running on Kubernetes.

We made the design decision to use YAML to represent the gateway configuration, meaning that the config can easily be managed as code vs. the heavy XML and Maven-centric experience that many of the API management 1.0 platforms like Apigee offer.
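
In practice, that means a change like enabling rate limiting is a short, reviewable YAML diff rather than an XML policy bundle. A sketch, with a placeholder service and illustrative limits:

```yaml
# Declarative config as code: version it in Git, review it as a diff.
_format_version: "2.1"

services:
  - name: payments-service            # hypothetical service
    url: http://payments.internal:8080
    plugins:
      - name: rate-limiting           # bundled plugin, no custom code
        config:
          minute: 60                  # illustrative: 60 requests per minute
          policy: local
```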

One of the biggest challenges when developing against traditional API management solutions such as Apigee is that you often have to develop and test your API proxy logic in the cloud or against a local emulator that is not a like-for-like match for what you will actually be pushing your code to.

As a developer myself, I love the fact that I have the flexibility to deploy the entire Kong platform locally on my laptop, on Docker Desktop, on minikube, on Vagrant boxes, etc. That ensures that I am developing against the exact same runtime that I will be deploying my code to.

No more "but it worked on my machine!" This is especially important when you have deployed Kong on many different types of infrastructure - you might have Kong running on AKS as a Kubernetes ingress controller, on virtual machines as a centralized gateway and on serverless platforms such as GCP Cloud Run. You can develop the configuration locally, knowing that you will be deploying to the exact same runtime.
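
As a sketch, spinning up that same runtime locally is a few lines of Docker Compose - the image tag and file paths are illustrative:

```yaml
# docker-compose.yml - the same Kong runtime, running on your laptop.
version: "3"
services:
  kong:
    image: kong:2.8                   # illustrative tag - match your deployment
    environment:
      KONG_DATABASE: "off"            # DB-less: load config from the file below
      KONG_DECLARATIVE_CONFIG: /kong/declarative/kong.yml
    volumes:
      - ./kong.yml:/kong/declarative/kong.yml
    ports:
      - "8000:8000"                   # proxy
      - "8001:8001"                   # Admin API
```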

In summary

To wrap up, I love that Kong gives our customers the flexibility to deploy their API gateways across any infrastructure without compromise; to support any architecture, whether that means running the platform as an ingress controller, a centralized gateway or distributed gateways; and, when actually required, to extend the platform using community-provided plugins or even build their own extensions in modern programming languages like Go.

But don't take my word for it. Come see a demo of Kong in action today!

READ MORE:

Developer agility meets compliance and security. Discover how Kong can help you become an API-first company.