Multi-Cloud API and AI Infra Gets Smarter: Managed Redis for Kong DCGW
Reason #1 to attend API Summit 2025? Learn more about global, multi-cloud agentic infrastructure
Modern enterprises are embracing multi-cloud strategies to avoid vendor lock-in, optimize costs, and ensure resilience. Yet managing API infrastructure (which also happens to be AI infrastructure) across multiple cloud providers while maintaining performance and simplicity remains a significant challenge.
We began to solve this problem with the initial introduction of Konnect Dedicated Cloud Gateways (DCGW), which give organizations fully vendor-managed API and AI infrastructure along with the benefits of true multi-cloud deployment across AWS, Azure, and GCP.
We continue to improve the DCGW offering, and today we're excited to announce an enhancement: fully managed Redis instances that unlock powerful caching and rate-limiting capabilities without operational complexity.
We include more details below, but we highly recommend attending API Summit 2025 in mid-October, where our dedicated session on Konnect Dedicated Cloud Gateways covers the entire offering plus what's new in this release.
Reserve your spot at API Summit 2025 now!
The multi-cloud advantage
Kong's Dedicated Cloud Gateways are already the only API management solution on the market offering truly vendor-managed deployments across all three major cloud service providers: AWS, Azure, and GCP. With DCGW, organizations can deploy Kong Gateway infrastructure in over 25 regions globally while maintaining a 99.99% SLA for multi-region deployments, matching or exceeding what major CSPs offer for single-region deployments.
Unlike traditional cloud-native API gateways that lock you into a single provider, DCGWs deliver the best of both worlds: the operational simplicity of fully managed infrastructure with the strategic flexibility of multi-cloud deployment. This addresses the common platform team concern: "We really want to offload the infra side of this, but we're also on a multi-cloud journey, and vendor-managed solutions don't give us the flexibility to deploy in our CSP of choice."
Managed Redis for DCGW is coming: Removing the last infrastructure barriers
The next step for Konnect DCGWs eliminates one of the final friction points in adopting enterprise-grade API management capabilities. Many of Kong Gateway's most powerful plugins — including Rate Limiting, Proxy Caching, and AI Rate Limiting — require Redis for shared state management. Previously, customers needed to provision, manage, and maintain their own Redis infrastructure, creating significant barriers to adoption.
The new DCGW Managed Redis feature removes those barriers. With an end-of-October launch target, you can expect the following to be available very soon:
- Instant plugin activation: No more infrastructure tickets or Redis expertise requirements. Critical capabilities like caching and rate limiting can now be activated in minutes rather than weeks. The Redis instances are automatically co-located with your gateway data planes in your chosen cloud regions, ensuring optimal performance and minimal latency.
- Enterprise-grade performance: With dedicated Redis instances deployed directly within your gateway's cloud region, you get native performance with sub-millisecond response times. This co-location architecture eliminates the network hops and latency issues that plague external Redis deployments.
- Zero operational overhead: Kong handles all Redis lifecycle management: provisioning, monitoring, scaling, security patches, and maintenance. Configuration is seamlessly integrated into the Konnect APIs and UI, maintaining the same operational simplicity that makes DCGWs powerful.
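To make this concrete, here is a rough sketch of what a Redis-backed rate limit looks like in Kong's declarative (decK) configuration today. The service name, upstream URL, and Redis host below are placeholders for illustration; with Managed Redis, the connection details would be provisioned and wired up by Konnect rather than hand-configured, and exact field names vary by Gateway version (older releases use flat redis_host/redis_port fields instead of the nested redis block).

```yaml
_format_version: "3.0"
services:
  - name: orders-api                         # placeholder service
    url: https://upstream.example.internal   # placeholder upstream
    routes:
      - name: orders-route
        paths:
          - /orders
    plugins:
      - name: rate-limiting
        config:
          minute: 100          # 100 requests per consumer per minute
          policy: redis        # shared counters across all data planes
          redis:
            host: redis.example.internal   # provisioned for you with Managed Redis
            port: 6379
```

The key line is `policy: redis`: it moves the counters out of each node's memory into shared state, so limits hold consistently across every data plane in the deployment.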
The AI strategy connection: Why this matters now, and why we’re talking about it at API Summit
As organizations accelerate their AI initiatives, API infrastructure becomes the critical backbone for AI service delivery. The managed Redis capability is particularly crucial for several AI-specific use cases:
AI rate limiting and quota management
AI workloads are expensive and resource-intensive. With managed Redis, organizations can now implement sophisticated rate limiting and quota management across their AI APIs through dedicated cloud deployments of Kong AI Gateway, without added infrastructure complexity. This enables:
- Token-based billing and consumption tracking
- Model-specific rate limiting to prevent runaway costs
- Fair usage policies across different user tiers
- Burst protection for high-value AI services
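To illustrate the idea behind token-based consumption tracking, here is a minimal, hypothetical sketch in Python (not Kong's implementation). A plain dictionary stands in for Redis, and the tiers, models, and budgets are invented for the example; in a real deployment these counters would live in managed Redis so every gateway data plane sees the same totals.

```python
import time
from collections import defaultdict

# Hypothetical per-tier, per-model token budgets (tokens per 60-second window)
LIMITS = {
    ("free", "gpt-4o"): 1_000,
    ("pro", "gpt-4o"): 50_000,
}

class TokenQuota:
    """Fixed-window, token-based quota tracking keyed by (consumer, model)."""

    def __init__(self, window_seconds=60, clock=time.time):
        self.window = window_seconds
        self.clock = clock  # injectable clock for testing
        # (consumer, model, window_id) -> tokens consumed in that window;
        # a dict stands in for shared Redis counters here
        self.counters = defaultdict(int)

    def allow(self, consumer, tier, model, tokens):
        """Record `tokens` consumed; return False once the budget is exceeded."""
        limit = LIMITS.get((tier, model))
        if limit is None:
            return False  # unknown tier/model: reject by default
        window_id = int(self.clock()) // self.window
        key = (consumer, model, window_id)
        if self.counters[key] + tokens > limit:
            return False  # over budget for this window
        self.counters[key] += tokens
        return True
```

Because the budget is denominated in tokens rather than requests, one expensive completion counts for more than many cheap ones, which is what makes model-specific cost control possible.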
Intelligent caching for AI responses
Large Language Model (LLM) calls can be costly and slow. Managed Redis enables sophisticated caching strategies that can dramatically reduce AI infrastructure costs while improving response times:
- Semantic response caching for similar queries
- Model-specific cache policies
- Intelligent cache invalidation based on context freshness
- Cost optimization through reduced LLM API calls
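As a toy illustration of semantic response caching (again, not Kong's implementation), the sketch below reuses a cached answer when a new prompt's embedding is close enough to one already seen. The embeddings here are hand-made vectors; a real system would generate them with an embedding model and store them in Redis.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

class SemanticCache:
    """Return a cached response when a query embedding is similar enough."""

    def __init__(self, threshold=0.95):
        self.threshold = threshold
        self.entries = []  # list of (embedding, response) pairs

    def get(self, embedding):
        best, best_score = None, 0.0
        for vec, response in self.entries:
            score = cosine(embedding, vec)
            if score > best_score:
                best, best_score = response, score
        return best if best_score >= self.threshold else None

    def put(self, embedding, response):
        self.entries.append((embedding, response))
```

Tuning the similarity threshold is the central trade-off: too loose and dissimilar prompts get stale answers, too strict and the cache rarely hits and saves nothing.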
Multi-cloud AI resilience
AI services often require access to different cloud providers' AI capabilities — AWS Bedrock, Azure OpenAI, Google Vertex AI. With DCGW's multi-cloud architecture and managed Redis, organizations can build resilient AI platforms that fail over seamlessly between providers while maintaining consistent performance and state management.
Global AI service distribution
AI applications require low-latency access globally. The combination of DCGW's 30+ regional deployments with co-located Redis instances enables organizations to deploy AI services close to users worldwide while maintaining consistent caching and rate-limiting policies.
Real-world impact: From infrastructure burden to business value
This enhancement transforms how platform teams approach API infrastructure. Instead of dedicating engineering resources to Redis operations, teams can focus on building differentiated AI capabilities and business logic.
Before: Complex infrastructure setup requiring cross-team coordination
- Platform team provisions Redis clusters
- Security team implements connection protocols
- Operations team monitors and maintains Redis
- Development teams wait weeks for caching capabilities
After: One-click activation of enterprise capabilities
- Navigate to Konnect Gateway Manager
- Enable managed Redis with a single configuration
- Activate caching and rate-limiting plugins immediately
- Focus engineering resources on AI innovation
Looking forward: The foundation for advanced capabilities
Managed Redis for DCGWs represents more than just operational simplification — it establishes a centralized shared state management layer that enables future advanced capabilities. This foundation supports Kong's roadmap for intelligent, AI-powered API management features that require distributed state coordination.
As AI continues to reshape how organizations build and deliver services, having the right API infrastructure becomes increasingly critical. Kong's Dedicated Cloud Gateways with managed Redis provide the multi-cloud flexibility, enterprise performance, and operational simplicity that AI-first organizations need to succeed.
Want to learn more? Register for API Summit
We'll be talking about Konnect Dedicated Cloud Gateways and larger AI and API strategy topics at API Summit 2025, live in New York City this October. Reserve your spot now to learn more about how to introduce multi-cloud value into your API and AI programs. We look forward to seeing you there!
Unleash the power of APIs with Kong Konnect
