Product Releases
June 18, 2025
18 min read

Kong's Dedicated Cloud Gateways: A Deep Dive

Michael Field
Principal, Technical Product Marketing Manager, Kong

In case you missed it, we recently made a big announcement around beta GCP support for Kong’s Dedicated Cloud Gateways (DCGWs). There’s a lot of good stuff in there, but the TL;DR is that DCGWs now support all three major cloud service providers (CSPs): AWS, Azure, and GCP, with a 99.95% SLA and support for over 25 regions around the globe.

Being the first API management vendor to support managed gateway deployments with all three CSPs has a lot of folks excited, for obvious reasons. But inevitably, this excitement leads to questions as organizations are trying to evaluate whether DCGWs can work for their specific use case and meet all their compliance and regulatory requirements.

In recent weeks, we’ve made DCGWs fit for an even wider array of use cases with major improvements like custom plugin support and adding VPC peering as an option in addition to transit gateways. We’ll get into all that in a minute.

But first, why even consider making the switch?

Why switch to Dedicated Cloud Gateways?

Well, the drivers for moving to managed cloud gateways are simple enough, as they mirror all the reasons behind why you would want to move any workload to the cloud. By choosing a DCGW, you benefit from faster time to market, easier scalability, and reduced operational overhead — allowing you to avoid the complexity of managing on-prem hardware and/or your own production-grade cloud deployment.

It shifts costs from large upfront investments to predictable operational expenses, while providing things like rapid and flexible deployments (e.g., multi-region support with all three CSPs), robust security (e.g., SOC 2 Type II compliance), and seamless global reach (e.g., global active-active, latency-based load balancing by default). This approach lets you focus on innovation and service delivery rather than the maintenance challenges inherent in hosting a gateway yourself.

Sounds great, right? So why do hybrid deployments still dominate production API gateway deployments?

Historical limitations and a quick history of hybrid deployments

API management largely began as a fully on-prem affair, but vendors started offering native cloud services as the broader cloud computing era took off in the early 2010s. Organizations responded to the cloud trend quickly, but selectively: they were happy to offload management of the control plane, yet much less keen to move the actual flow of their gateway traffic into the cloud. There are a number of reasons for this:

  • Cloud vendor lock: Most API management vendors only offered cloud deployments with a single CSP
  • Security and regulatory concerns: There are always concerns with moving data through a managed cloud solution and limited deployment options could cause issues with data residency
  • Configuration limitations: While cloud gateways simplified deployments, they often came at the expense of the flexibility you get with a self-hosted deployment
  • Performance degradation: Traditional deployment options for cloud gateways often meant adding extra hops between the fully managed gateways and the customer’s backend APIs and services

Now, admittedly, many of these concerns with fully managed gateways can be addressed by using a CSP’s gateway offerings instead of a dedicated API management vendor. So, for example, if you’re a full GCP shop, then Apigee could be a compelling offering as they provide simple but fully customizable and secure cloud deployments with private connections to all your backend GCP APIs and services.

But the truth is, many organizations don't want to be locked into one cloud vendor. Of course, this is why Apigee released its own hybrid offering back in 2019. The image below is actually from their announcement video where they declare “multi-cloud is the reality.”

Source: Apigee announcing hybrid support in 2019

Here at Kong, we happen to agree! And while we fully support dead-simple hybrid deployments, we don’t think it should be the only way an organization can have a multi-cloud experience with their API gateway.

So, what have we changed with DCGWs to make this a reality?

DCGWs: A new era for API gateways

DCGWs offer a range of functionality that makes them a true gamechanger in the space. The rest of this blog will break down why we built each of these features, along with the nitty-gritty technical details to help you determine whether this solution fully addresses the needs of your organization.

  1. Cloud-agnostic deployments with CSP-grade SLAs
  2. CSP ease of deployment with self-hosted flexibility
  3. Smart global DNS
  4. Securing backend traffic
  5. Custom plugin streaming
  6. Observability: Advanced Analytics, DCGW logs, active tracing, and external tooling
  7. APIOps and migrations

Cloud-agnostic with CSP-grade SLAs

Of course, we have to start here. Kong is the only provider of API gateways that can be deployed as a fully managed solution on all three major CSPs and provide an SLA of 99.95%. This is huge as it matches or exceeds the SLA of all major CSPs for single-region deployments.

Why Kong built this is clear: multi-cloud isn't the future, it's the reality for the vast majority of organizations. And with Kong, you don’t need to sacrifice the benefits of a production-ready, fully managed gateway solution to achieve your multi-cloud strategy.

The details

Assuming a 30-day month, an SLA of 99.95% allows for 21.6 minutes of downtime per month. Not too shabby for a production-grade deployment you can have up and running in about the same amount of time.

Still find that too much? Kong plans to support an SLA of 99.99% for multi-region deployments. This only allows for 4.32 minutes of downtime a month.
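The downtime math is easy to sanity-check yourself. Here's a quick sketch (the 30-day month and the SLA figures come straight from the numbers above):

```python
# Convert an SLA percentage into an allowed-downtime budget.
def downtime_minutes(sla_percent: float, days: int = 30) -> float:
    """Minutes of allowed downtime for a given SLA over `days` days."""
    total_minutes = days * 24 * 60
    return total_minutes * (1 - sla_percent / 100)

print(round(downtime_minutes(99.95), 2))  # 21.6 minutes/month (single region)
print(round(downtime_minutes(99.99), 2))  # 4.32 minutes/month (planned multi-region)
```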

You can reference the supported regions for each CSP here in the docs. It should also be noted that we’re regularly expanding the regions on offer and are always open to adding new regions at the request of a customer.

CSP ease of deployment with self-hosted flexibility

Traditionally, deployment simplicity came at the expense of deployment flexibility. And deployment flexibility doesn’t just mean where you could deploy (again, we now have full coverage here), but also how you could go about configuring your individual deployments.

Kong’s DCGWs allow you to configure all aspects of your gateway deployment, from the performance to modifying the behavior of the gateway itself.

The details

Deploying a DCGW is incredibly simple — it only takes a few clicks in Kong Konnect to get your DCGW up and running.

The initial deployment of your DCGW network can take up to 30 minutes, which includes everything powering your DCGWs under the hood — like the VPC, NLB, EKS cluster, and supporting orchestration and management nodes. After the initial deployment, additional data planes take only a minute or two to deploy on an existing network.

The only configuration required is selecting your CSP, region, and setting the pre-warm requests per second (RPS) for the DCGW cluster. This effectively sets a minimum size for your DCGW infrastructure and ensures the cluster can handle that RPS at all times without requiring additional scaling.

DCGWs then operate in autopilot mode, automatically scaling the underlying infrastructure in real time to meet the demands of your dynamic API traffic volume and maximizing utilization of the underlying nodes to save on costs. Kong can provide performance benchmarks for DCGW autoscaling on request; results vary by cloud provider.

Configuring Kong’s DCGWs goes beyond just the underlying infrastructure and includes the gateway itself. Of course, you can set the actual gateway version and choose to always auto-update to the latest version. But, perhaps more importantly, you can modify the actual gateway configuration to do things like enable OTEL and modify the trace sample rate, change the logging level, update default headers, and much more by setting environment variables.
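For a flavor of what this looks like, a few of the behaviors mentioned above map to standard `KONG_`-prefixed environment variables. This is a sketch; verify the exact variable names and supported values against the Kong Gateway configuration reference:

```
KONG_LOG_LEVEL=debug               # change the logging level
KONG_TRACING_INSTRUMENTATIONS=all  # enable OTEL instrumentation
KONG_TRACING_SAMPLING_RATE=0.25    # trace roughly 25% of requests
```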

This is a massive improvement: managed gateways are often simple to deploy, but other APIM vendors leave you with a one-size-fits-all gateway.

Smart global DNS

Managed gateways often demo quite well. You quickly deploy a single gateway, apply a couple of policies, and pretty soon you’re sending your first requests to fully secured and monitored backend APIs. However, this rarely mirrors a real production deployment. What about high availability? How are you load balancing? How do you add your custom domain name?

These are all things Kong knew needed to be just as simple to manage as that first gateway deployment, with the ability to customize for advanced setups as needed. Kong automatically deploys a global and region-based smart DNS that chooses the best region to use for each API request based on real-time performance and latency affinity. And if you want a custom domain? It’s as easy as adding two CNAMEs generated by Kong to your domain’s DNS record.

The details

To understand how the global edge DNS works, let's first outline the core primitives at play:

  • Data Plane Node: an instance of a cloud gateway
  • Data Plane Group: one or more cloud gateways deployed in a specific region of a specific cloud provider (a.k.a. a network)
  • Network: an AWS, Azure, GCP, or other cloud provider region supported by Kong in which one or more Cloud Gateway Nodes are deployed — a customer may have one or more Cloud Gateway Networks in any one region
    • When creating a network, you can select the underlying availability zones (AZs) for that region, which allows you to avoid inter-zone traffic between the DCGW and backend services. If you select multiple AZs, the gateways can span across AZs for a single data plane group. There are several factors involved when deploying to multiple AZs, and the data planes are spread on a best-effort basis to ensure high availability.

For each data plane group (a collection of DCGWs deployed in the same network), we provision a network load balancer (NLB) and then expose a regional DNS in the Konnect UI. The NLB takes an active-active approach to load balancing.

Kong then provisions a Public Edge DNS that can communicate with every data plane group in the control plane. The Public Edge DNS will choose the best region to use for each API request based on real-time performance and latency affinity. This works by leveraging Route 53’s latency-based routing.

In short, this means that implementing multi-cloud and multi-region connectivity with Kong Konnect is as easy as sending requests to the Public Edge DNS.

Custom domains

As mentioned above, Kong provisions a domain for you automatically. But of course, many organizations will want to use their own. This is fully supported and as easy as providing your fully qualified domain name and then adding the generated CNAMEs to your domain’s DNS records. These entries will allow Kong to handle domain ownership validation and automatically provision certificates via Let’s Encrypt.

NOTE: If your domain has a Certificate Authority Authorization (CAA) record, you'll need to add an entry for Google Cloud Public CA, which Kong uses to provision certificates. Without this, Kong’s automated TLS certificate issuance may fail.

Once the DNS records are correctly configured, Kong will attempt to validate the domain ownership via the ACME challenge. This process may take some time depending on DNS propagation speeds. During this period, Kong will automatically request and provision a TLS certificate for your custom domain.
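Pulling the custom-domain pieces together, the DNS entries end up looking roughly like this in zone-file syntax. The domain and CNAME targets are placeholders; Konnect generates the real target values for your domain:

```
; api.example.com is a hypothetical custom domain
api.example.com.                  IN CNAME  <konnect-generated-edge-target>        ; routes traffic to Kong
_acme-challenge.api.example.com.  IN CNAME  <konnect-generated-validation-target>  ; ownership validation
example.com.                      IN CAA    0 issue "pki.goog"  ; permit Google Cloud Public CA (only needed if you already use CAA records)
```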

Securing backend traffic

A big driver behind the push for multi-region and multi-cloud deployments is regulatory compliance. This is especially critical for industries like finance, healthcare, or government, where data sovereignty laws (e.g., GDPR, HIPAA, CLOUD Act) mandate that customer data must not leave certain jurisdictions. DCGWs make it dead simple for, say, a healthcare provider operating in the U.S. and Canada to deploy separate DCGWs in us-east-1 and ca-central-1, ensuring sensitive patient data remains within each country.

Some of these regulations and general security best practices mandate that sensitive data should not traverse the public internet, even if the connection is encrypted with something like mTLS. In the DCGW world, the managed gateway instances run within the Kong-managed network stack (Kong-managed compute and network layer) whereas all the customer services/upstreams are running on the customer-managed infrastructure. DCGWs offer customers the flexibility to configure how traffic flows between the DCGWs in Kong VPCs and the backend services in customer VPCs: encrypted over the public internet or through an encrypted private connection.

The details

For both public and private DCGWs, Kong provisions a network load balancer (NLB) with static IP addresses. Public gateways will automatically receive a regional and global DNS tied to the IP addresses as discussed in the previous section. However, private gateways, as the name suggests, will not be available on the public internet. Instead, the IP addresses will only be accessible through a CSP-dependent private connection.

The method of establishing this private connection varies by CSP. The docs go into details on how to configure each of the available private networking options:

  • AWS: Transit Gateway and VPC peering
  • Azure: Azure VNET peering
  • GCP: DCGWs will not support private connections until the upcoming GA release in Q2 2025

It’s important to note that both private and public gateways use a NAT gateway to assign static egress IPs. For public gateways, all egress traffic is routed through the NAT gateway. For private gateways, traffic targeting the CIDR range defined during the setup of a private network is routed directly to the respective private networking components (e.g., Transit Gateway for AWS); all other traffic goes through the NAT gateway as egress traffic. Optionally, if an external API or service the DCGW communicates with wants to restrict incoming traffic to an allowlist of IPs, it can use these static egress IPs.

Transit gateway vs VPC peering

For AWS DCGW deployments, Kong initially supported private networking via Transit Gateways but has recently added support for VPC Peering. So why might you choose one over the other?

Transit Gateways implement a hub-and-spoke model. A single Transit Gateway can connect to many VPCs, making it ideal for larger or more complex network topologies, especially when you want to avoid creating and managing a full mesh of peering connections between VPCs. In Kong’s case, each DCGW network can be associated with a single Transit Gateway, and that gateway becomes the central routing point between Kong and your private infrastructure.

However, setting up a Transit Gateway is more involved as it requires additional IAM permissions, route table updates, and coordination with your AWS infrastructure team. It's also typically more expensive, with charges for each VPC attachment and all traffic processed through the gateway.

VPC Peering, on the other hand, uses a point-to-point model. It's simpler and cheaper to set up when connecting Kong to a small number of VPCs. With the latest updates, each Kong Dedicated Cloud Gateway network can support multiple VPC peering connections, allowing for flexible integration with customer environments without the overhead of managing a Transit Gateway.

When you're only dealing with one or two VPCs, VPC Peering often makes more sense as you’re only charged for cross-AZ or cross-region data transfer. By contrast, Transit Gateways always incur per-GB processing charges, even within the same AZ, and cost more as you add additional attachments.
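To make the tradeoff concrete, here's a rough cost sketch. The rates below are illustrative assumptions only; actual AWS pricing varies by region and changes over time, so check the current pricing pages before deciding:

```python
# Illustrative monthly cost comparison: Transit Gateway vs. VPC Peering.
# All rates are hypothetical assumptions, not quoted AWS prices.
HOURS_PER_MONTH = 730

def transit_gateway_cost(attachments: int, gb_processed: float,
                         hourly_per_attachment: float = 0.05,
                         per_gb: float = 0.02) -> float:
    # TGW charges per attachment-hour plus per GB processed, even within one AZ
    return attachments * hourly_per_attachment * HOURS_PER_MONTH + gb_processed * per_gb

def vpc_peering_cost(cross_az_gb: float, per_gb: float = 0.01) -> float:
    # Peering charges only for cross-AZ/cross-region transfer; same-AZ is free
    return cross_az_gb * per_gb

# Two VPCs exchanging 1 TB/month of traffic, all of it cross-AZ:
print(round(transit_gateway_cost(attachments=2, gb_processed=1000), 2))  # 93.0
print(round(vpc_peering_cost(cross_az_gb=1000), 2))                      # 10.0
```

Under these assumed rates, peering is roughly an order of magnitude cheaper for a small topology; the calculus shifts as the number of VPCs (and peering connections to manage) grows.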

Managing private DNS

When connecting Kong to your VPC, DNS resolution becomes critical, especially if your upstream services are referenced by private domain names (e.g., internal-api.company.local). Kong DCGWs support two main DNS strategies for resolving these internal names.

Private Hosted Zones

If you’re using Amazon Route 53 Private Hosted Zones, Kong can be configured to resolve DNS records directly from your private DNS zones. Support for the equivalent in Azure and GCP will come later this year.

Kong’s private DNS support allows the gateway to query your hosted zone as if it were inside your VPC with no special changes needed to your DNS setup. This works well with both VPC peering and Transit Gateway, assuming the proper VPC association and DNS resolution settings are in place.

You can learn more about how to set this up here.

Outbound DNS Resolver (Forwarding to Your DNS Servers)

For more complex environments, such as hybrid clouds, multi-account setups, or when you’re not using Route 53, Kong also supports outbound DNS resolvers.

This allows Kong to forward DNS queries to a custom DNS server, such as:

  • A corporate DNS server running on-prem
  • A third-party DNS appliance like Infoblox
  • A centralized DNS service shared across VPCs or regions

This configuration is useful when:

  • Your internal DNS is managed outside Route 53
  • You need control over DNS forwarding paths
  • You want Kong to use a single entry point for all name resolution

You can learn more about how to set this up here.

Custom plugin streaming

For the uninitiated, Kong custom plugins are powerful extensions that allow you to apply your own custom rules or logic on the Gateway. After creating a custom plugin, you need to deploy it to the Kong Gateway. Traditionally, this meant building a custom image that contains the custom plugin code, mounting the plugin to your container/pod, or overriding the default plugin path for bare metal or VM deployments. None of these are great options for a fully managed DCGW.

Enter custom plugin streaming.

Custom plugin streaming allows you to quickly distribute custom plugins through Konnect and manage each plugin how you see fit — through the GUI or declaratively with decK. This provides a number of major advantages. It allows for easy and immediate distribution to all the data planes connected to the target control plane. It also improves your security posture: the control plane becomes the source of truth for custom plugins, which can be managed centrally in Konnect, removing the security risks and challenges of managing different plugin versions across different data planes.

Custom plugin streaming is the biggest change in plugin architecture since Kong was released. And now you get it in Konnect with DCGWs.

The details

Every custom plugin contains two key files: schema.lua and handler.lua. The schema file defines the plugin's configuration structure and outlines the parameters the plugin will expect. The handler file contains the core logic of your plugin and how it interacts with the request/response flow (e.g., modifying headers, body, interacting with other services, etc.).

Currently, custom plugin streaming has some additional requirements:

  • Unique name per plugin
  • One handler.lua and one schema.lua file
  • Cannot run in the init_worker phase or create timers
  • Must be written in Lua

You can follow the docs here to learn how to actually build a custom plugin. Then you simply need to upload the schema and handler file to Konnect through the GUI or Admin API to create your custom plugin on the target control plane and distribute it to any connected Gateways.

Once created, the custom plugin can continue to be managed through the Konnect UI, or, as is common in production deployments, you can manage your custom plugins declaratively through Kong’s decK CLI tool or Terraform provider. This is quite powerful, as we’ll discuss further in the APIOps section.

Observability: Advanced Analytics, DCGW logs, active tracing, and external tooling

Oftentimes, people think of the API gateway as the enforcer — ensuring APIs are protected from bad actors and only authorized traffic is allowed through. But modern API management solutions go far beyond enforcement. They need to provide observability into all aspects of API traffic by providing detailed metrics, logs, and traces. Moving to a cloud gateway should not require compromise in any of these observability pillars.

Kong DCGWs provide first-class support for metrics, logs (API request and gateway logs), and traces directly inside Konnect with the ability to easily integrate with third-party observability tooling.

The details

Most cloud gateways will still provide some form of API metrics and logs (we’ll get to traces in a minute). Kong’s DCGWs are no different here. Konnect Advanced Analytics provides detailed logs and metrics for both API- and AI-specific traffic on your DCGWs. It also allows you to build report templates and custom dashboards tailored to your needs that can give insight into everything from performance to API health to long-term trends around consumption.

However, while other vendors may offer basic API metrics and logs, they often run into two major roadblocks.

Firstly, if you wish to export API and AI metrics, logs, and traces to a third-party observability provider (e.g., Splunk, Datadog, OTEL Collector, etc.), this is either not possible (see most dedicated APIM vendors) or requires first exporting the logs to a proprietary analytics tool resulting in extremely high costs (see most CSPs). Kong supports direct integration of DCGWs with third-party observability providers through the OTEL plugin.

Secondly, observability is often limited to API traffic for dedicated APIM providers’ cloud gateways. This means any issue with the actual gateway requires reaching out to customer support and waiting for an export and analysis of the gateway logs, which can dramatically slow down troubleshooting and time to resolution. That's perhaps a minor annoyance when deploying a custom plugin in a dev environment, but it could be a serious issue in production. Thankfully, Kong DCGWs also have you covered here and provide gateway logs directly in Konnect.

The last observability item we need to touch on is traces. A trace provides a detailed record of a request's journey as it travels through various services and components. It typically includes a series of spans, which represent individual operations or steps within that request's lifecycle. Traces help identify performance bottlenecks, latency issues, and dependencies between services.

As previously mentioned, Kong DCGWs fully support exporting traces through the OTEL plugin. However, this can come with some tradeoffs.

First off, to collect and analyze traces, you need an OTEL collector and a visualization tool (e.g., Grafana, Datadog, etc.), each with its own associated costs and complexity. Secondly, even if your organization is already deeply invested in this tooling, it's unlikely that you're collecting detailed traces for every API request across every component in your network stack; capturing that level of detail is often cost prohibitive. Finally, the team managing the overall observability stack is often different from the team managing your APIs. While this separation of concerns certainly has benefits, it can lead to delays when trying to resolve an issue occurring at the gateway level.

Thankfully, active tracing provides a powerful solution for DCGWs. Here’s the overview from the active tracing announcement blog:

“With Active Tracing, infrastructure administrators can initiate targeted 'deep tracing' sessions in specific API gateway nodes. During an Active Tracing session, the selected gateway generates detailed, OpenTelemetry-compatible traces for all requests matching the sampling criteria. The detailed spans are captured for the entire request/response lifecycle. These traces can be visualized with Konnect’s built-in span viewer with no additional instrumentation or telemetry tools. Konnect’s Active Tracing capability offers exclusive, in-depth insights that can't be replicated by third-party telemetry tools.”

This is an essential tool for reducing mean time to resolution as teams in Konnect can quickly pinpoint whether an issue is due to anything from a custom plugin to an upstream service to DNS resolution.

APIOps and migrations

The last, but certainly not least, piece of core DCGW functionality we’ll be looking at today is APIOps and automation. Now, when many people hear automation, they immediately think of increased efficiency and faster development cycles. And while this is certainly a benefit, the core driver behind APIOps is stronger governance.

APIOps reduces the human error evident in any click-ops approach through consistent, repeatable, and auditable automation pipelines, and Kong DCGWs' support for APIOps is best in class across dedicated APIM vendors and CSPs.

APIOps with Kong also have the added benefit of dramatically simplifying migrations. We’ll touch on that a bit at the end.

There are four main ways to build out your APIOps in Kong:

  1. Admin API
  2. decK
  3. Terraform provider
  4. Kong Operator

The details

Let’s first break down the two main approaches to APIOps: imperative and declarative.

The imperative approach involves defining step-by-step instructions to modify the API configuration directly. This method, often seen in CLI commands, scripts, or manual updates, provides more granular control but can be harder to maintain in complex environments. This approach can be enabled through Kong’s Admin API and it’s the approach to APIOps most gateway vendors provide.

In contrast, the declarative approach focuses on defining the desired state of the API configuration, where you specify what the end result should look like — such as your routes, services, and policies — and the system automatically handles the steps to achieve that state. While both approaches have their use cases, the declarative model is generally favored for its efficiency and alignment with modern DevOps practices, thereby offering better predictability, version control, and scalability.

While many gateway vendors offer little to nothing to support declarative APIOps, Kong supports it in three distinct flavors: the decK CLI tool (short for declarative configuration), an officially supported Kong Terraform provider, and the Kong Operator with support for managing all Konnect entities as CRDs. This means that regardless of your preferred deployment environment, CI/CD infrastructure, and broad approach to APIOps, Kong’s DCGWs can seamlessly integrate and be managed as code.

Finally, let's quickly touch on how Kong’s APIOps tooling makes migrations dead simple. If you’re already using Kong, decK makes it a breeze to dump the configuration from an existing control plane and immediately sync it to your new control plane running DCGWs. If you’re coming from another solution, decK can convert your OpenAPI specifications to Kong’s declarative configuration format, Terraform manifests, or Kubernetes CRDs. This gives you the power to build out a workflow to quickly start using DCGWs with your existing APIs.

Conclusion

Now, it’s important to remember that DCGWs are just one offering within Kong Konnect’s federated API gateway model. It’s dead simple to manage a mixture of gateway deployment options so each team, business unit, or development environment can use what’s best for them.

But for any team looking to benefit from faster time to market, easier scalability, reduced operational overhead, and all the associated cost savings by switching to DCGWs, the time is clearly now.

Developer agility meets compliance and security. Discover how Kong can help you become an API-first company.

Topics: API Gateway | AWS | Multi Cloud