Engineering
February 24, 2022
7 min read

API Gateway Cache With Kong’s Proxy Cache Plugin

Viktor Gamov

In applications built on a system of microservices, developers should always be on the lookout for opportunities to eliminate unnecessary use of resources, such as database queries, network hops or service requests. API gateway cache (or response caching) is an excellent place to start.

For many microservices, identical requests sent within a window of time will yield identical responses. For example, consider a request to an Orders API for the list of orders submitted yesterday. The first request should be processed, and any necessary services or database queries should be called, but the final response should be cached. Any subsequent requests for the rest of the day should simply return the cached result, thereby saving resources.

If Kong Gateway fronts your upstream services, you can access a reverse proxy cache implementation through the Proxy Cache plugin. This post will walk through setting up and using the plugin, demonstrating response caching as Kong Gateway sits in front of a simple API server.

Let's start with a quick overview of some core tech concepts for this walkthrough.

Kong Gateway

Kong Gateway is a powerful and flexible API gateway optimized for microservices and distributed architectures. It sits in front of your upstream services and can handle authentication, load balancing, traffic control, transformations and other cross-cutting concerns through its rich library of plugins.


Reverse Caching

Reverse caching (also known as reverse proxy caching) is a caching implementation in which a dedicated caching application (the reverse proxy) sits in front of the service to be cached. Requests to the service first go through the reverse proxy, which then decides whether to forward the request to the service or respond to the request with a cached response. The decision to return a cached response or forward for a fresh response depends on cache settings, such as the time-to-live (TTL).

Proxy Cache Plugin

The Proxy Cache plugin for Kong Gateway is a built-in plugin that can be enabled by configuration, essentially giving Kong Gateway the role of the reverse proxy.

Overview of Our Mini-Project

To demonstrate the Proxy Cache plugin, we'll build a simple Node.js Express API server with a single endpoint. The endpoint serves up a random programming quote returned by the Programming Quotes API. The server also logs a statement to the console every time its endpoint is hit.

We will use Kong Gateway to sit in front of our service, but we'll set up two separate routes—one with caching and one without caching—which both forward to our API server.

When we send requests to the /quote path, which is our uncached route, Kong Gateway will simply forward those requests to our API server.

Requests to the /quote-of-the-minute path, which is our cached route, will also be forwarded to our API server when necessary. We'll enable the Proxy Cache plugin for this route, configuring Kong Gateway to cache the response for one minute. Subsequent requests to this path will return the cached response until a minute has passed, which is when the cache expires. Then, we will hit the server endpoint again to retrieve a fresh response.


Set Up Node.js Express Server

Let's start by building our API server, which fetches and returns a random programming quote. First, we create a project folder, initialize a project with yarn, and then add our dependencies:
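The setup commands might look like the following. The post names express (for the server) and axios (for outbound HTTP requests) as the dependencies; the exact package versions are left to yarn:

```shell
# Create and enter the project folder, then initialize it with yarn
mkdir quote-service && cd quote-service
yarn init -y

# Add the web framework and HTTP client used by our server
yarn add express axios
```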

With our project initialized, we create a new file called index.js. The contents of the file are as follows:
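A minimal sketch of index.js is shown below. The Programming Quotes API endpoint URL and its response field names (`author`, `en`) are assumptions; substitute the quote API of your choice:

```javascript
// index.js — minimal Express server that proxies a random programming quote
const express = require("express");
const axios = require("axios");

const app = express();
const port = 8080;

// Counter incremented on every hit to our endpoint
let requestCount = 0;

app.get("/", async (req, res) => {
  requestCount++;
  console.log(`Request ${requestCount}: endpoint hit, fetching a quote`);

  // Fetch a random quote; this URL is an assumption
  const { data } = await axios.get(
    "https://programming-quotes-api.herokuapp.com/quotes/random"
  );
  const { author, en: quote } = data;

  // Respond with plain text containing the quote and its author
  res.send(`"${quote}" - ${author}`);
});

app.listen(port, () => console.log(`API server listening on port ${port}`));
```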

Let's briefly explain what happens in this file:

  1. Initialize package constants and prepare an Express server called app.
  2. Initialize a requestCount counter variable.
  3. Set the server to listen for GET requests on the / path, which will trigger the following:
    1. Increment the request counter.
    2. Log the endpoint hit to the console.
    3. Use axios to send a request to the Programming Quotes API.
    4. Retrieve the author and the quote from the axios response data.
    5. Send a response with text containing the quote and author.
  4. Set the server to listen on port 8080.

In our terminal, we run node index.js to start our API server.

In a separate terminal, we use curl to send several requests to our API server.
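For example (the server listens on port 8080, per our setup):

```shell
# Hit the API server a few times in a row
curl localhost:8080
curl localhost:8080
curl localhost:8080
```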

Looking back at our terminal window with the API server running, this is what we see:

Excellent. Our API server is running as expected. We'll restart it to reset the request counter, and we'll leave it running in our terminal window. Now, it's time to set up Kong Gateway.

Set Up Kong Gateway

The exact steps for installing Kong Gateway depend on your platform and environment. After installation, a few additional setup steps remain.

Create an Initial Declarative Configuration File

For this particular project, as we use the Proxy Cache plugin, we can configure Kong Gateway with a DB-less declarative approach. That means we can establish all of our configuration upfront in a declarative YAML file, which Kong Gateway will read when it starts.

In your project folder, create an initial declarative configuration file with the following command:
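```shell
# Generates a skeleton declarative configuration file (kong.yml)
# in the current folder
kong config init
```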

This will generate a kong.yml file. So far, your project folder should look like this:
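Assuming yarn's defaults from the earlier steps, the folder would contain roughly:

```
node_modules/
index.js
kong.yml
package.json
yarn.lock
```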

We'll return to our kong.yml file shortly.

Set Up kong.conf for DB-less Configuration

The kong.conf file is the main configuration file that Kong looks to for startup options. When you first install Kong Gateway, you'll see a file called kong.conf.default in the /etc/kong folder. Copy that file to a new file called kong.conf. Then, make the following edits to kong.conf:
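For DB-less mode, the relevant edits are to turn the database off and point Kong at our declarative file. A sketch (the path is a placeholder; use the absolute path to your project's kong.yml):

```
# kong.conf: run Kong without a database
database = off

# Read all configuration from our declarative YAML file on startup
declarative_config = /path/to/your/project/kong.yml
```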

Now, upon startup, Kong will look to our project's declarative configuration YAML file.

Configure Upstream Service and Uncached Route

Let's return to our declarative configuration file to set up an upstream service—that's our API server—and a route. We edit the kong.yml file in our project folder so that it looks like this:
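A sketch of that configuration follows. The route name quote-route is a hypothetical choice, and the `_format_version` value depends on your Kong Gateway version:

```yaml
_format_version: "2.1"

services:
  - name: quote-service
    url: http://localhost:8080
    routes:
      - name: quote-route
        paths:
          - /quote
```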

In our declarative configuration file, we've set up an upstream service (called quote-service) that points to the URL of our API server (http://localhost:8080). Next, we've set up a route to have Kong listen for requests on the /quote path. Kong will forward those requests to our upstream service.

With our configuration in place, we can start Kong:
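```shell
# Start Kong, pointing it at our kong.conf
# (adjust the path to wherever your kong.conf lives)
kong start -c /etc/kong/kong.conf
```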

Send a Test Request to Uncached Path

Next, we can send a request to our Kong proxy server, to the /quote path:
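Kong's proxy listens on port 8000 by default:

```shell
curl localhost:8000/quote
```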

Great! It looks like Kong Gateway forwarded our request to our API server, and we've received the expected response. When we look at the terminal window with our API server running, this is what we see:

Everything is running as expected.

Configure Cached Route With Plugin

Next, we'll add another route to our declarative configuration file, and we'll enable the Proxy Cache plugin on that route. We edit kong.yml so that it looks like the following:
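A sketch of the updated file, assuming the same format version and an in-memory cache strategy:

```yaml
_format_version: "2.1"

services:
  - name: quote-service
    url: http://localhost:8080
    routes:
      - name: quote-route
        paths:
          - /quote
      - name: quote-route-with-cache
        paths:
          - /quote-of-the-minute

plugins:
  - name: proxy-cache
    route: quote-route-with-cache
    config:
      cache_ttl: 60
      strategy: memory
```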

Notice that we have added another route, called quote-route-with-cache. Kong will listen for requests on the /quote-of-the-minute path and forward those requests—just like it does for the /quote path—to our upstream service.

In addition, we've added a plugin. The name of this plugin is proxy-cache, and we've enabled it specifically on the route called quote-route-with-cache. We configure this plugin with a TTL of 60 seconds.

Since we have updated our declarative configuration, we need to restart Kong:
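```shell
# Restart Kong so it re-reads the declarative configuration file
kong restart -c /etc/kong/kong.conf
```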

Send a Test Request to Cached Path

Now is the moment of truth. With our Proxy Cache plugin in place, this is what we expect to happen:

  • When we send multiple requests to the /quote-of-the-minute path, we should receive the same programming quote response each time, as long as we send all of those requests within the window of a minute.
  • The API server should only output a single console message that it received a hit. This is because Kong should only forward the first request to our API server and then use the cached response for all subsequent requests.
  • If we wait until the one-minute window passes, our next request will receive a different programming quote in response.

This is what it looks like when we send our requests:
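For example, sending a few requests in quick succession with -i to show the response headers. The Proxy Cache plugin adds an X-Cache-Status header, which reads Miss on the request that reaches the upstream and Hit on cached responses:

```shell
curl -i localhost:8000/quote-of-the-minute
curl -i localhost:8000/quote-of-the-minute
curl -i localhost:8000/quote-of-the-minute
```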


When we sent four requests in rapid succession, we received the same response. Looking at the terminal window for our API server, we see that the request counter has only incremented one time, despite our four calls:

After waiting for a minute, we send more requests to the /quote-of-the-minute path.


As expected, our first new request receives a new programming quote as a response. The subsequent two requests receive the same quote again, which is the cached result.

When we check our API server window, we see that the request counter has, again, only incremented one time:

Our Proxy Cache plugin is working exactly as expected!


Additional Use Cases

In our demo, we enabled the Proxy Cache plugin on a specific route. However, we could also enable the plugin on an entire service or on a consumer (a specific user), or even on the combination of a consumer and a route, which narrows the plugin's scope further.

In our demo example, the response to /quote-of-the-minute would be the same for all users sending requests within the one-minute window. If we instead enabled the plugin at the consumer level, with each consumer authenticating with a unique API key or JWT, each user would have their own "quote of the minute" cached, independent of what other users receive.

Conclusion

Response caching for your microservices is a simple and effective tactic for optimization. By using a reverse proxy to decide whether to handle requests by forwarding for a fresh response or by using the cache, you can reduce the load on your network and your services. With Kong Gateway, getting up and running with response caching is quick and simple with the Proxy Cache plugin.
