Engineering
March 12, 2026
7 min read

Configuring Kong Dedicated Cloud Gateways with Managed Redis in a Multi-Cloud Environment

Hugo Guerrero
Principal Tech PMM, Kong

A persistent challenge arises as businesses adopt multicloud architectures and agentic AI: the need for state synchronization. API and AI gateways require a robust persistence layer to synchronize data, whether it's for governing AI token usage, facilitating agent-to-agent communication, or boosting performance through caching.

The Managed Redis cache add-on for Kong Dedicated Cloud Gateways (DCGW) is now generally available (GA) in AWS and Azure. This offering provides enterprises with a fully managed, high-performance shared state layer directly within Kong Konnect, facilitating intelligent, stateful governance across multi-cloud deployments. It functions as a turnkey solution, delivering the speed of an in-memory data store with the simplicity of a Software as a Service (SaaS) product. Crucially, customers gain all these benefits without the burden of infrastructure management.

Modern architectures rarely live in a single cloud. You might deploy an AWS region for primary production traffic, an Azure region for European workloads, and a Google Cloud region for your analytics and AI services. Kong Dedicated Cloud Gateways (DCGW) provide a fully managed data plane that runs in your cloud account while remaining controlled through Kong Konnect. This model enables enterprises to deploy gateways across providers while maintaining centralized governance.

This post walks through configuring Dedicated Cloud Gateways across multiple clouds and regions. It also shows how to enable shared state capabilities using managed caching powered by Redis.

Architecture Overview

A multicloud DCGW architecture typically contains three main layers.

1. Konnect Control Plane

The SaaS control plane manages configuration, plugins, and policies. All gateways connect securely to this layer.

2. Dedicated Cloud Gateway Data Planes

Each cloud environment hosts its own gateway cluster. These run inside the customer's cloud account and process traffic locally.

3. Regional Shared State

Optional managed Redis instances provide caching, rate limiting counters, or AI token governance close to the gateway.

A typical deployment might include:

  • AWS region for primary production traffic
  • Azure region for European workloads
  • Google Cloud region for analytics and AI services

Each environment runs its own DCGW cluster while sharing governance from Konnect.

Fig. Multicloud Dedicated Cloud Gateways Architecture Overview

Prerequisites

Before deploying Dedicated Cloud Gateways in multiple clouds, ensure the following prerequisites are met.

Konnect Setup

  • A Konnect organization and control plane
  • Permissions to create Dedicated Cloud Gateways
  • Personal access token (PAT): Create a new personal access token by opening the Konnect PAT page and selecting Generate Token.
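The curl commands later in this post assume the token is exported in your shell, for example:

```shell
# Export the Konnect personal access token for the curl commands below.
# The value shown is a placeholder; use the token generated in Konnect.
export KONNECT_TOKEN='kpat_XXXXXXXXXXXX'
```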

Cloud Accounts

Accounts in each provider where gateways will run:

  • Amazon Web Services
  • Microsoft Azure
  • Google Cloud

Networking

Each environment should include:

  • A VPC or virtual network
  • Subnets for the gateway data plane
  • Internet or private connectivity for upstream services

Infrastructure Automation (recommended)

Multicloud deployments are easier to manage using the following:

  • Terraform
  • CI/CD pipelines
  • Infrastructure modules per provider

Step 1: Create a Dedicated Cloud Gateway in Konnect

Kong Dedicated Cloud Gateways provide fully managed, single-tenant infrastructure that abstracts away the operational overhead while giving you control over size and location. Behind the scenes, every DCGW runs in an individual single-tenant cloud environment, ensuring consistent performance, tenant isolation, and strong security boundaries, while the Konnect Control Plane remains multi-tenant.

Start by creating a gateway deployment from the Konnect control plane.

  1. Log in to Konnect.
  2. Navigate to Gateway Manager in the Konnect UI.
  3. Click New API Gateway in the New dropdown.
  4. Select Dedicated Cloud and give it a descriptive name.

Example configuration:

Gateway Name: prod-billing-svcs
Description: Gateway for Billing services in production
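The same control plane can also be created via the Konnect API instead of the UI. The sketch below is hypothetical: the request fields are inferred from the list response shown in Step 4, so verify them against the Konnect control-planes API reference before using it.

```shell
# Hypothetical sketch: create the Dedicated Cloud Gateway control plane
# through the Konnect API (field names inferred from the list response
# later in this post; verify against the API reference).
PAYLOAD='{
  "name": "prod-billing-svcs",
  "description": "Gateway for Billing services in production",
  "cluster_type": "CLUSTER_TYPE_CONTROL_PLANE",
  "cloud_gateway": true
}'

curl -X POST "https://global.api.konghq.com/v2/control-planes" \
     --no-progress-meter --fail-with-body \
     -H "Authorization: Bearer $KONNECT_TOKEN" \
     --json "$PAYLOAD"
```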

Step 2: Deploy Gateways Across Multiple Clouds

  1. Choose your cloud provider and region.
  2. Select the type of access.

Here is an example topology covering a couple of common scenarios:

Fig. Example Topology

Once created, Konnect provisions the infrastructure inside your cloud environment.

Even though each gateway runs in a different cloud, they share configuration through Konnect.

This means:

  • APIs are defined once
  • Policies apply globally
  • Traffic enforcement happens locally

Note: Managed caches can currently be created only on AWS and Azure.

Step 3: Configure Networking and Service Connectivity

Each gateway must connect to upstream services running in its cloud environment.

Typical patterns include:

Local Service Routing

Each gateway routes to services deployed in the same cloud.

Example:

AWS Gateway → AWS microservices
Azure Gateway → Azure workloads
GCP Gateway → AI inference services

Benefits:

  • Lower latency
  • Reduced cross-cloud traffic
  • Simpler networking
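As a sketch of local service routing, a minimal decK declarative snippet for the AWS gateway might route a path to an upstream in the same VPC. The service name and upstream URL below are hypothetical:

```yaml
# decK declarative configuration (hypothetical names): the AWS gateway
# proxies /billing to a service running in the same AWS VPC.
_format_version: "3.0"
services:
  - name: billing-api
    url: http://billing.internal.example.com:8080
    routes:
      - name: billing-route
        paths:
          - /billing
```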

Cross-Cloud Routing

Sometimes services live in different clouds.

Example:

Azure Gateway → AWS backend

This can be achieved through:

  • Private connectivity
  • VPN or peering
  • Secure service endpoints

Step 4: Enable Managed Redis for Shared State

Multicloud systems often require shared state across gateway instances.

Examples include:

  • Global rate limiting
  • AI token accounting
  • API response and AI semantic caching
  • Session management

Dedicated Cloud Gateways can enable managed Redis directly from Konnect.

Enable Redis in Konnect

Managed caches are configured at the control plane level and must be associated with a control plane when they are created. Currently, the size available is small (approximately 1 GiB), but larger sizes are planned for future support. If you require a larger cache immediately, please contact your account team. Managed caches are available across all regions, which facilitates multi-region configurations.

  1. List your existing Dedicated Cloud Gateway control planes:
curl -X GET "https://global.api.konghq.com/v2/control-planes?filter%5Bcloud_gateway%5D=true" \
     --no-progress-meter --fail-with-body  \
     -H "Authorization: Bearer $KONNECT_TOKEN"

You should get a response similar to the following:

{
  "meta": {
    "page": {
      "total": 3,
      "size": 100,
      "number": 1
    }
  },
  "data": [
    {
      "id": "uuuidnumber",
      "name": "prod-billing-svcs",
      "description": "Gateway for billing services in production",
      "labels": {},
      "config": {
        "control_plane_endpoint": "https://uuuidnumber.us.cp0.konghq.com",
        "telemetry_endpoint": "https://uuuidnumber.us.tp0.konghq.com",
        "cluster_type": "CLUSTER_TYPE_CONTROL_PLANE",
        "auth_type": "pinned_client_certs",
        "cloud_gateway": true,
        "proxy_urls": []
      },
      "created_at": "2026-03-11T22:44:08.466Z",
      "updated_at": "2026-03-11T22:44:08.466Z"
    }
  ]
}

  2. Copy the ID of the control plane you want to attach the managed cache to, and export it:
export CONTROL_PLANE_ID='YOUR CONTROL PLANE ID'
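Instead of copying the ID by hand, you can script the lookup by gateway name. A small sketch (it re-runs the list call and saves the response to cps.json; python3 is used for JSON parsing, though jq works equally well):

```shell
# Save the control-plane list, then extract the gateway's ID by name.
curl -X GET "https://global.api.konghq.com/v2/control-planes?filter%5Bcloud_gateway%5D=true" \
     --no-progress-meter --fail-with-body \
     -H "Authorization: Bearer $KONNECT_TOKEN" > cps.json

export CONTROL_PLANE_ID=$(python3 -c '
import json
data = json.load(open("cps.json"))
print(next(cp["id"] for cp in data["data"] if cp["name"] == "prod-billing-svcs"))
')
echo "$CONTROL_PLANE_ID"
```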
  3. Create a managed cache using the Cloud Gateways add-ons API:
curl -X POST "https://global.api.konghq.com/v2/cloud-gateways/add-ons" \
     --no-progress-meter --fail-with-body  \
     -H "Authorization: Bearer $KONNECT_TOKEN" \
     --json '{
       "name": "managed-cache",
       "owner": {
         "kind": "control-plane",
         "control_plane_id": "'$CONTROL_PLANE_ID'",
         "control_plane_geo": "us"
       },
       "config": {
         "kind": "managed-cache.v0",
         "capacity_config": {
           "kind": "tiered",
           "tier": "small"
         }
       }
     }'

You should get a response similar to the following:

{
  "id": "uuuidnumber",
  "name": "managed-cache",
  "owner": {
    "kind": "control-plane",
    "control_plane_id": "uuuidnumber",
    "control_plane_geo": "us"
  },
  "config": {
    "kind": "managed-cache.v0",
    "capacity_config": {
      "kind": "tiered",
      "tier": "small"
    },
    "data_plane_groups": [
      {
        "id": "uuuidnumber",
        "cloud_gateway_network_id": "uuuidnumber",
        "provider": "aws",
        "region": "us-east-1",
        "state": "initializing",
        "state_metadata": {}
      },
      {
        "id": "uuuidnumber",
        "cloud_gateway_network_id": "uuuidnumber9",
        "provider": "azure",
        "region": "germanywestcentral",
        "state": "initializing",
        "state_metadata": {}
      },
      {
        "id": "uuuidnumber",
        "cloud_gateway_network_id": "uuuidnumber",
        "provider": "aws",
        "region": "us-west-1",
        "state": "initializing",
        "state_metadata": {}
      }
    ],
    "state_metadata": {
      "cache_host": "{vault://env/ADDON_MANAGED_CACHE_HOST}",
      "cache_port": "{vault://env/ADDON_MANAGED_CACHE_PORT}",
      "cache_username": "{vault://env/ADDON_MANAGED_CACHE_USERNAME}",
      "cache_config_id": "uuuidnumber",
      "cache_server_name": "{vault://env/ADDON_MANAGED_CACHE_SERVER_NAME}",
      "cloud_authentication": {
        "aws_region": "{vault://env/ADDON_MANAGED_CACHE_AWS_REGION}",
        "auth_provider": "{vault://env/ADDON_MANAGED_CACHE_AUTH_PROVIDER}",
        "aws_cache_name": "{vault://env/ADDON_MANAGED_CACHE_AWS_CACHE_NAME}",
        "azure_tenant_id": "{vault://env/ADDON_MANAGED_CACHE_AZURE_TENANT_ID}",
        "aws_assume_role_arn": "{vault://env/ADDON_MANAGED_CACHE_AWS_ASSUME_ROLE_ARN}"
      }
    }
  },
  "state": "ready",
  "entity_version": 1,
  "created_at": "2026-03-12T02:11:46.797Z",
  "updated_at": "2026-03-12T02:11:46.797Z"
}

Konnect automatically:

  • Provisions the Redis instance
  • Co-locates it with the gateway
  • Handles scaling and maintenance

Because Redis is a regional service, each cloud environment is provisioned with its own dedicated, high-performance cache. Note that deploying these resources can take up to 15 minutes.
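While waiting, you can poll the add-on until it reports ready. This is a hypothetical sketch: the per-add-on GET path is assumed from the collection URL above, and $ADDON_ID is the "id" field returned by the create call, so verify both against the Cloud Gateways API reference.

```shell
# Hypothetical readiness check: poll the add-on's "state" field until
# it becomes "ready". The per-add-on GET path is an assumption.
cache_state() {
  curl -s -H "Authorization: Bearer $KONNECT_TOKEN" \
    "https://global.api.konghq.com/v2/cloud-gateways/add-ons/$1" |
    python3 -c 'import sys, json; print(json.load(sys.stdin)["state"])'
}

until [ "$(cache_state "$ADDON_ID")" = "ready" ]; do
  echo "managed cache still provisioning, retrying in 30s"
  sleep 30
done
```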

After the managed cache is ready, Konnect automatically creates a Redis partial configuration for you. When setting up Redis-backed plugins, select this Konnect-managed Redis configuration.

You can verify it under Redis Configurations in the Konnect UI.

Fig. Redis Configuration with Konnect managed cache configuration

Step 5: Configure Plugins That Use Redis

Once Redis is enabled, it can support several Kong plugins. Some plugins in Kong Gateway share common Redis configuration settings that often need to be repeated. Partials, like the one created in the previous step, allow you to reuse shared Redis configurations across plugins.

Fig. Redis Configuration in Kong Konnect

Common examples include:

Rate Limiting

Example configuration (the limit, window_size, and sync_rate values shown are illustrative; limit and window_size are required by the plugin, and sync_rate must be set when the strategy is redis):

name: rate-limiting-advanced
partials:
  - id: a07fb159-162c-430d-b4ff-fb09f22c8c3b
config:
  strategy: redis
  limit:
    - 100
  window_size:
    - 60
  sync_rate: 1
  window_type: sliding
  throttling:
    enabled: false
    interval: 5
    queue_limit: 5
    retry_times: 3

This ensures traffic limits remain consistent across all gateway instances within that region.

Response Caching

Example configuration (note that the Redis strategy requires the proxy-cache-advanced plugin; the open-source proxy-cache plugin supports only in-memory caching):

name: proxy-cache-advanced
config:
  strategy: redis
  cache_ttl: 300

Benefits:

  • Reduced backend load
  • Faster response times
  • Consistent caching across nodes

AI Consumption Governance

AI gateways often enforce usage limits for models and agents.

Plugins such as AI rate limiting or token quotas can store counters in Redis to track usage across requests.

This is particularly useful when gateways front model providers such as:

  • LLM APIs
  • Internal inference services
  • Agent orchestration platforms

Note: For plugins that can't use a Redis partial configuration, you can set env-referenceable fields directly using the vault references from the managed cache's state_metadata.
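For instance, a plugin's Redis connection fields could pull those values through the env vault. This is a sketch only; the exact field names depend on the individual plugin's schema, so verify before applying:

```yaml
# Sketch (verify field names against the specific plugin's schema):
# reference managed-cache connection details via the env vault entries
# listed in the add-on's state_metadata.
config:
  strategy: redis
  redis:
    host: "{vault://env/ADDON_MANAGED_CACHE_HOST}"
    port: "{vault://env/ADDON_MANAGED_CACHE_PORT}"
    username: "{vault://env/ADDON_MANAGED_CACHE_USERNAME}"
```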

Automate Multicloud Deployments with Terraform

Infrastructure automation makes it easier to replicate gateway environments across clouds.

Example Terraform workflow:

  1. Define a gateway module
  2. Parameterize cloud provider and region
  3. Deploy instances per environment

Example structure:

modules/
  dcgw/
    main.tf
    variables.tf

environments/
  aws-prod/
  azure-eu/
  gcp-ai/

A pipeline can then deploy gateways across providers consistently.
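A minimal module sketch, assuming the Kong Konnect Terraform provider (resource and attribute names should be verified against the provider documentation for your version):

```hcl
# modules/dcgw/main.tf (sketch): one control plane per environment,
# parameterized by name. Attribute names assumed from the Konnect provider.
terraform {
  required_providers {
    konnect = {
      source = "Kong/konnect"
    }
  }
}

variable "gateway_name" {
  type = string
}

resource "konnect_gateway_control_plane" "dcgw" {
  name          = var.gateway_name
  description   = "Dedicated Cloud Gateway control plane"
  cluster_type  = "CLUSTER_TYPE_CONTROL_PLANE"
  cloud_gateway = true
}
```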

Benefits include:

  • Repeatable infrastructure
  • Version controlled architecture
  • Faster environment provisioning

Apply Global Governance Policies

With multiple gateways deployed, policies should be applied centrally.

Examples:

Security

  • Authentication plugins
  • OAuth or API keys
  • mTLS

Traffic Management

  • Rate limits
  • Circuit breakers
  • Request transformations

AI Governance

  • Token quotas
  • Model routing
  • Request filtering

All policies are defined once in Konnect and synchronized automatically to each gateway.

Observability Across Multicloud Gateways

Monitoring becomes critical in distributed environments.

Recommended telemetry includes:

  • Request latency
  • API error rates
  • Cache hit ratios
  • Rate limit violations

Konnect provides centralized analytics while gateways continue to process traffic locally.

This allows teams to identify issues across clouds without operating separate monitoring stacks.

Example Multicloud Deployment Pattern

A production architecture might pair a gateway cluster in each cloud with a co-located managed cache in each region.

Characteristics of this architecture:

  • Each gateway processes traffic locally
  • Redis provides regional shared state
  • Konnect ensures global configuration consistency

Best Practices for Multicloud Gateways

Keep Traffic Local

Route traffic to services within the same cloud whenever possible to reduce latency.

Use Regional Caches

Deploy Redis per region rather than a single global instance to maintain performance.

Standardize Gateway Configurations

Use infrastructure automation to avoid configuration drift across clouds.

Separate Environments

Use dedicated gateways for:

  • production
  • staging
  • development

This prevents configuration changes from impacting live traffic.

Getting Started

Dedicated Cloud Gateways allow organizations to combine the benefits of SaaS management with the performance and control of running gateways directly in their own cloud environments.

By deploying gateways across providers such as AWS, Azure, and Google Cloud, teams can create resilient multicloud architectures while maintaining consistent governance.

With the addition of managed Redis caching, gateways can evolve from stateless proxies into intelligent infrastructure capable of enforcing rate limits, caching responses, and governing AI workloads.

To get started:

  1. Create a Dedicated Cloud Gateway in Konnect
  2. Deploy additional gateways across cloud providers
  3. Enable managed Redis for shared state
  4. Apply global API and AI governance policies

From there, your API and AI infrastructure can scale across clouds while remaining centrally managed and operationally simple.

Topics: Multi Cloud, Kong Konnect, Governance, API Gateway, AI
