A PNW-based Fashion Retailer Tailors Customer Experience Across Channels with Kong
One of North America’s largest fashion retailers, with 350+ stores and a growing digital business, builds an API-first foundation to deliver connected customer experiences.
Scalability during peak retail surges
Developer adoption of self-service pipelines
Reliability across multi-region retail operations

An American upmarket department store chain headquartered in the Pacific Northwest, known for its exceptional high-end fashion, customer service, and omnichannel experience through modern, technology-driven innovation.
A century of customer service, reinvented for the digital era
Founded in 1901, this retailer has long been recognized for blending high-touch service with innovative retail experiences. Today, that legacy extends across 350+ stores and a thriving digital platform that drives millions of transactions daily across its website, apps, and fulfillment systems.
Behind every online order, recommendation, or checkout event sits a sophisticated API ecosystem: thousands of APIs powering everything from inventory visibility to personalized offers. For their platform engineering teams, maintaining speed, security, and reliability at that scale isn’t just a technical mandate — it’s part of the brand’s identity.
Five years ago, the company’s API infrastructure wasn’t built for that level of elasticity. The company relied on a SaaS-based API gateway that struggled under unpredictable, event-driven loads. Teams were forced into firefighting mode, waiting on vendors to scale infrastructure and managing incidents in real time.
“Anytime traffic spiked, we’d call the vendor asking, ‘Did you rescale like we requested?’” Pocholo Mangubat, Senior Technical Program Manager, said. “Sometimes they did, sometimes not fast enough — either way, it wasn’t sustainable.”
The results were predictable: latency under load, duplicated operational patterns, and inconsistent failover strategies across hundreds of teams. The company needed an operational model that could turn reactive scaling into proactive resilience.
“We process roughly 15,000 requests per second on a normal day. But during events like flash sales, a limited-edition sneaker drop, or a viral campaign, traffic can surge fivefold within seconds. Every millisecond of delay directly affects customer experience and sales.”
From fragmented operations to predictable reliability
The company’s engineering leadership outlined two parallel goals: fix the past and build the future.
Fixing the past meant addressing immediate pain points:
eliminating reliance on public gateways
implementing zero-trust security
reducing manual scaling and incident management
enabling platform-level resiliency so app teams didn’t have to rebuild it for themselves
Building the future meant rearchitecting around cloud native, containerized workloads that would be portable across AWS, GCP, and Azure, ensuring long-term flexibility and cost control.
“We didn’t want to be tied to a single cloud provider,” Mangubat said. “We needed our abstractions to work anywhere, and that meant standardizing on Kubernetes and automating everything around it.”
The technical debt of legacy systems also created blind spots in observability. Each team managed its own routing and monitoring, leading to inconsistent visibility and response times when APIs struggled.
“It wasn’t just scaling. There was operational complexity everywhere. Each team handled health checks differently. Some didn’t even have failovers. We needed a foundation that built reliability by default.”
Building a developer-first, zero-trust platform
The platform team overhauled its API architecture around a guiding principle: make developers’ lives easier.
“We wanted to be like a Chuck E. Cheese birthday party — all-inclusive, stress-free,” Mangubat said. “Developers just bring their code; the platform handles everything else.”
The first milestone was replacing the SaaS gateway with a self-hosted Kong Gateway. The shift immediately gave the team full control over scalability and latency.
The immediate results included:
Elastic scalability: Kong automatically scaled to match spikes in customer traffic.
Zero-trust enforcement: Internal traffic was fully secured inside the company’s private network, eliminating calls that traversed the public internet.
Native observability: Metrics and traces integrated seamlessly into their existing dashboards, allowing teams to visualize API and service behavior side by side.
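A self-hosted Kong Gateway deployment like this is typically driven by declarative configuration. The sketch below is illustrative only — the service names, paths, and plugin settings are hypothetical, not the retailer's actual setup:

```yaml
# kong.yaml — declarative config, applied with `deck gateway sync` (names hypothetical)
_format_version: "3.0"
services:
  - name: inventory-api
    url: http://inventory.internal.svc:8080   # stays on the private network
    routes:
      - name: inventory-route
        paths:
          - /inventory
    plugins:
      - name: rate-limiting   # a resiliency guardrail baked into the platform
        config:
          second: 100
          policy: local
```

Keeping this file in version control means the gateway itself can be scaled and rebuilt on demand, rather than waiting on a vendor to resize shared infrastructure.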
“Before, latency used to rise during flash sales. With Kong, that problem disappeared. We could handle Solstice-level spikes without blinking,” Mangubat said.
“Moving to self-hosted Kong gave us control over our destiny. Our business teams could finally run aggressive promotions without worrying about traffic spikes.”
Simplifying developer onboarding through templates
To make adoption frictionless, the team templatized configurations by pre-packaging authentication, routing, and resiliency patterns for developers.
“Developers just fill out 10 to 15 inputs and the pipeline does the rest,” Mangubat explained. “Security, observability, dashboards — it’s all built in.”
This self-service model helped shift their engineering culture from ticket-driven dependency to developer-led autonomy.
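The article doesn't show the template format, but a self-service input file of this kind often looks something like the following — every field name here is hypothetical, standing in for the 10 to 15 inputs the pipeline expands into routes, auth, and dashboards:

```yaml
# service-onboarding.yaml — hypothetical developer inputs; the pipeline
# generates gateway config, security policy, and observability from these
service_name: recommendations
team: personalization
upstream_port: 8080
path_prefix: /recommendations
auth: oauth2            # pre-packaged authentication pattern
rate_limit_rps: 200     # resiliency guardrail
canary_enabled: true
alert_channel: "#personalization-oncall"
```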
Adopting Kubernetes and Kong Ingress Controller
As the company standardized on Kubernetes for compute, they implemented Kong Ingress Controller to simplify networking and traffic management.
“Developers shouldn’t have to care about load balancers or TLS certificates,” said Sampath Narra, Senior Software Engineer. “With Kong Ingress and a lightweight NGINX sidecar, we automated all of it.”
Certificate management, once a major pain point, became fully automated, freeing application teams from manual renewals and complex setup.
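In a setup like this, routing and TLS are usually expressed as a standard Kubernetes Ingress handled by Kong, with certificate issuance delegated to a tool such as cert-manager. A minimal sketch, with hostnames and issuer names illustrative:

```yaml
# Ingress served by Kong Ingress Controller; TLS certificate issued and
# renewed automatically by cert-manager (issuer and host are hypothetical)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: checkout
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  ingressClassName: kong
  tls:
    - hosts:
        - checkout.example.com
      secretName: checkout-tls   # cert-manager populates this Secret
  rules:
    - host: checkout.example.com
      http:
        paths:
          - path: /checkout
            pathType: Prefix
            backend:
              service:
                name: checkout
                port:
                  number: 8080
```

Developers declare only the host and backend; load balancing and certificate lifecycle are handled by the platform, as described above.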
Safer deployments with Argo and canary releases
The team paired Argo CD with Kong Ingress Controller for progressive delivery. Every new deployment spins up a fresh set of pods tagged with a unique HTTP canary header, allowing teams to safely test new releases against production traffic.
“Developers can validate a deployment under real traffic,” Narra said. “If something goes wrong, rollback is instant, and users never notice.”
This fail-fast pattern boosted confidence across hundreds of application teams, significantly reducing production incidents.
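Header-based canary routing of this kind can be expressed directly in Kong's route matching: one route carries the canary header to the new pods, while unmatched traffic stays on the stable release. A sketch, with all service names and the header value hypothetical:

```yaml
# Illustrative route pair: requests carrying the canary header reach the new
# deployment; all other traffic continues to the stable one
_format_version: "3.0"
services:
  - name: orders-stable
    url: http://orders-stable.svc:8080
    routes:
      - name: orders
        paths:
          - /orders
  - name: orders-canary
    url: http://orders-canary.svc:8080
    routes:
      - name: orders-canary
        paths:
          - /orders
        headers:
          x-canary-release:   # unique header minted per deployment
            - "v2"
```

Rolling back is then a matter of deleting the canary route — in-flight stable traffic is never touched.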
Building multi-region resiliency
To achieve true enterprise reliability, the company built an active-active, multi-region deployment model with Kong Gateways in both East and West regions.
Traffic is routed through the nearest gateway using DNS-based load balancing and health-checked upstream targets. If a region degrades, traffic automatically shifts to the other, ensuring availability and minimal latency.
“A shopper in San Francisco hits the West region; someone in New York goes East,” Narra said. “If one region fails, Kong reroutes instantly. No one even notices.”
This architecture not only improved uptime but also reduced cross-region costs by keeping requests local whenever possible.
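The health-checked upstream targets described here map naturally onto Kong's upstream and active health check primitives. A minimal sketch, with target hostnames and thresholds illustrative:

```yaml
# Illustrative Kong upstream: active health checks probe each region, and a
# target that fails repeatedly is marked unhealthy so traffic shifts to the
# surviving region. A service with host "storefront" routes through this.
_format_version: "3.0"
upstreams:
  - name: storefront
    healthchecks:
      active:
        http_path: /healthz
        healthy:
          interval: 5        # probe every 5 seconds
          successes: 2       # 2 passes to mark healthy again
        unhealthy:
          interval: 5
          http_failures: 3   # 3 failures to mark unhealthy
    targets:
      - target: storefront.us-west.internal:8080
        weight: 100
      - target: storefront.us-east.internal:8080
        weight: 100
```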
Standardizing pipelines for continuous delivery
The team also built a unified CI/CD pipeline integrating Kong Gateway, Ingress Controller, TLS sidecars, and observability tooling. This pipeline:
Automates builds, tests, and deployments
Generates dashboards and alerts automatically
Enforces platform-wide engineering and security guardrails
With as few as 10–15 configuration inputs, developers now ship services confidently and consistently — from code to production in just hours.
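The article doesn't name the CI system, but the stages it describes might map onto a pipeline definition roughly like this sketch (syntax modeled on GitLab CI; every script and stage name is hypothetical):

```yaml
# Hypothetical stages mirroring the unified build/test/deploy/verify flow
stages: [build, test, deploy, verify]

build:
  stage: build
  script:
    - docker build -t "$IMAGE" .

test:
  stage: test
  script:
    - make test

deploy:
  stage: deploy
  script:
    - deck gateway sync kong.yaml      # gateway routes from template inputs
    - kubectl apply -f manifests/      # workload plus TLS sidecar
    - ./generate-dashboards.sh         # hypothetical: dashboards and alerts

verify:
  stage: verify
  script:
    - ./check-guardrails.sh            # hypothetical platform guardrails
```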
“We reduced public exposure, improved resilience, and built reliability directly into our DNA. That’s operational maturity.”
Operational maturity, delivered
The company’s transformation has redefined its reliability posture and developer experience. The tangible outcomes include:
5X scalability: Seamless API performance even during massive event-driven traffic surges
Zero public exposure: Full implementation of zero-trust principles — all internal traffic now private and authenticated
85% developer adoption: Platform automation embraced across nearly all application teams
Faster releases: Time to production reduced from days to hours, enabling faster experimentation
Improved reliability: Dramatic reductions in mean time to detect (MTTD) and mean time to recover (MTTR) through observability integration
Standardized excellence: Guardrails and best practices embedded at the platform level — not enforced by policy, but enabled by design
Beyond infrastructure, this transformation has elevated the company’s engineering culture. Platform teams are now enablers, not gatekeepers, delivering a developer experience that mirrors the company’s customer experience philosophy.
“It’s about developer happiness. If we make it easy and safe for them to innovate, they’ll create incredible things for our customers.”
Next up: From gateway to service mesh
The next phase is service mesh, evolving from centralized gateway governance to fully distributed service-to-service connectivity.
“Our future is mesh,” said Mangubat. “Kong will handle north-south traffic as the mesh entry point, and east-west communication will live within the mesh itself. That’s the next level of reliability and autonomy.”
The company’s journey from reactive scaling to predictable, automated resilience now serves as a model for modern retailers aiming to balance agility with reliability.