February 24, 2022

API Gateway Cache With Kong’s Proxy Cache Plugin

Viktor Gamov

In applications built on a system of microservices, developers should always be on the lookout for opportunities to eliminate unnecessary use of resources, such as database queries, network hops or service requests. API gateway cache (or response caching) is an excellent place to start.

For many microservices, identical requests sent within a window of time will yield identical responses. For example, consider a request to an Orders API for the list of orders submitted yesterday. The first request should be processed, and any necessary services or database queries should be called, but the final response should be cached. Any subsequent requests for the rest of the day should simply return the cached result, thereby saving resources.

If Kong Gateway fronts your upstream services, you can access a reverse proxy cache implementation through the Proxy Cache plugin. This post will walk through setting up and using the plugin, demonstrating response caching as Kong Gateway sits in front of a simple API server.

Let's start with a quick overview of some core tech concepts for this walkthrough.

Kong Gateway

Kong Gateway is a powerful and flexible API gateway optimized for microservices and distributed architectures. It sits in front of your upstream services and can handle authentication, load balancing, traffic control, transformations and other cross-cutting concerns through its rich library of plugins.

Reverse Caching

Reverse caching (also known as reverse proxy caching) is a caching implementation in which a dedicated caching application (the reverse proxy) sits in front of the service to be cached. Requests to the service first go through the reverse proxy, which then decides whether to forward the request to the service or respond to the request with a cached response. The decision to return a cached response or forward for a fresh response depends on cache settings, such as the time-to-live (TTL).

Proxy Cache Plugin

The Proxy Cache plugin is bundled with Kong Gateway and can be enabled through configuration, essentially giving Kong Gateway the role of the reverse proxy.

Overview of Our Mini-Project

To demonstrate the Proxy Cache plugin, we'll build a simple Node.js Express API server with a single endpoint. The endpoint serves up a random programming quote returned by the Programming Quotes API. The server also logs a statement to the console every time its endpoint is hit.

We will use Kong Gateway to sit in front of our service, but we'll set up two separate routes—one with caching and one without caching—which both forward to our API server.

When we send requests to the /quote path, which is our uncached route, Kong Gateway will simply forward those requests to our API server.

Requests to the /quote-of-the-minute path, which is our cached route, will also be forwarded to our API server when necessary. We'll enable the Proxy Cache plugin for this route, configuring Kong Gateway to cache the response for one minute. Subsequent requests to this path will return the cached response until a minute has passed, which is when the cache expires. Then, we will hit the server endpoint again to retrieve a fresh response.

Set Up Node.js Express Server

Let's start by building our API server, which fetches and returns a random programming quote. First, we create a project folder, initialize a project with yarn, and then add our dependencies:
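
```shell
# Create a project folder (the name is arbitrary)
mkdir quotes-app && cd quotes-app

# Initialize the project and add our two dependencies
yarn init -y
yarn add express axios
```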

With our project initialized, we create a new file called index.js. A minimal version of the file looks something like this (the Programming Quotes API endpoint URL, and the author and en fields of its JSON response, are assumptions here):
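
```javascript
const express = require('express');
const axios = require('axios');

const app = express();

// Tracks how many times our endpoint has been hit
let requestCount = 0;

app.get('/', async (req, res) => {
  requestCount++;
  console.log(`Request ${requestCount} received`);

  // Fetch a random quote from the Programming Quotes API
  // (endpoint URL and response fields assumed)
  const response = await axios.get(
    'https://programming-quotes-api.herokuapp.com/quotes/random'
  );
  const { author, en: quote } = response.data;

  // Respond with plain text containing the quote and author
  res.type('text/plain').send(`"${quote}" (${author})`);
});

app.listen(8080, () => console.log('API server listening on port 8080'));
```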

Let's briefly explain what happens in this file:

  1. Initialize package constants and prepare an Express server called app.
  2. Initialize a requestCount counter variable.
  3. Set the server to listen for GET requests on the / path, which will trigger the following:
    1. Increment the request counter.
    2. Log the endpoint hit to the console.
    3. Use axios to send a request to the Programming Quotes API.
    4. Retrieve the author and the quote from the axios response data.
    5. Send a response with text containing the quote and author.
  4. Set the server to listen on port 8080.

In our terminal, we run node index.js to start our API server.

In a separate terminal, we use curl to send several requests to our API server.
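
```shell
curl http://localhost:8080/
curl http://localhost:8080/
curl http://localhost:8080/
```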

Looking back at our terminal window with the API server running, we see something like this (given the logging in our sketch):
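
```
API server listening on port 8080
Request 1 received
Request 2 received
Request 3 received
```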

Excellent. Our API server is running as expected. We'll restart it to reset the request counter, and we'll leave it running in our terminal window. Now, it's time to set up Kong Gateway.

Set Up Kong Gateway

The exact steps for installing Kong Gateway depend on your platform and environment. Once Kong Gateway is installed, a few additional setup steps remain.

Create an Initial Declarative Configuration File

For this particular project, as we use the Proxy Cache plugin, we can configure Kong Gateway with a DB-less declarative approach. That means we can establish all of our configuration upfront in a declarative YAML file, which Kong Gateway will read when it starts.

In your project folder, create an initial declarative configuration file with the following command:
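
```shell
kong config init
```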

This will generate a kong.yml file. So far, your project folder should look like this:
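
```
.
├── index.js
├── kong.yml
├── node_modules/
├── package.json
└── yarn.lock
```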

We'll return to our kong.yml file shortly.

Set Up kong.conf for DB-less Configuration

The kong.conf file is the main configuration file that Kong reads for startup options. When you first install Kong Gateway, you'll find a file called kong.conf.default in the /etc/kong folder. Copy that file to a new file called kong.conf. Then, make the following edits to kong.conf:
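
Two settings matter here: we turn off the database and point Kong at our project's declarative configuration file (substitute the absolute path to your own kong.yml):

```
database = off
declarative_config = /path/to/your/project/kong.yml
```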

Now, upon startup, Kong will look to our project's declarative configuration YAML file.

Configure Upstream Service and Uncached Route

Let's return to our declarative configuration file to set up an upstream service—that's our API server—and a route. We edit the kong.yml file in our project folder so that it looks like this:
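
```yaml
_format_version: "2.1"

services:
  - name: quote-service
    url: http://localhost:8080
    routes:
      # "quote-route" is our own label for the uncached route
      - name: quote-route
        paths:
          - /quote
```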

In our declarative configuration file, we've set up an upstream service (called quote-service) that points to the URL of our API server (http://localhost:8080). Next, we've set up a route to have Kong listen for requests on the /quote path. Kong will forward those requests to our upstream service.

With our configuration in place, we can start Kong:
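
```shell
# assuming kong.conf was copied to /etc/kong/kong.conf
kong start -c /etc/kong/kong.conf
```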

Send a Test Request to Uncached Path

Next, we can send a request to our Kong proxy server, to the /quote path:
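
Kong listens for proxy traffic on port 8000 by default:

```shell
curl http://localhost:8000/quote
```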

Great! It looks like Kong Gateway forwarded our request to our API server, and we've received the expected response. When we look at the terminal window with our API server running, we see the next request logged (the counter starts over because we restarted the server earlier):
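
```
API server listening on port 8080
Request 1 received
```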

Everything is running as expected.

Configure Cached Route With Plugin

Next, we'll add another route to our declarative configuration file, and we'll enable the Proxy Cache plugin on that route. We edit kong.yml so that it looks like the following:
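
```yaml
_format_version: "2.1"

services:
  - name: quote-service
    url: http://localhost:8080
    routes:
      - name: quote-route
        paths:
          - /quote
      - name: quote-route-with-cache
        paths:
          - /quote-of-the-minute

plugins:
  - name: proxy-cache
    route: quote-route-with-cache
    config:
      strategy: memory
      cache_ttl: 60
      # content_type must match what the upstream sends; our sketch
      # of index.js responds with text/plain
      content_type:
        - text/plain; charset=utf-8
```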

Notice that we have added another route, called quote-route-with-cache. Kong will listen for requests on the /quote-of-the-minute path and forward those requests—just like it does for the /quote path—to our upstream service.

In addition, we've added a plugin. The name of this plugin is proxy-cache, and we've enabled it specifically on the route called quote-route-with-cache. We configure this plugin with a TTL of 60 seconds.

Since we have updated our declarative configuration, we need to restart Kong:
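
```shell
kong restart
```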

Send a Test Request to Cached Path

Now is the moment of truth. With our Proxy Cache plugin in place, this is what we expect to happen:

  • When we send multiple requests to the /quote-of-the-minute path, we should receive the same programming quote response each time, as long as we send all of those requests within the window of a minute.
  • The API server should only output a single console message that it received a hit. This is because Kong should only forward the first request to our API server and then use the cached response for all subsequent requests.
  • If we wait until the one-minute window passes, our next request will receive a different programming quote in response.

This is what it looks like when we send our requests:
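
```shell
curl http://localhost:8000/quote-of-the-minute
curl http://localhost:8000/quote-of-the-minute
curl http://localhost:8000/quote-of-the-minute
curl http://localhost:8000/quote-of-the-minute
```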

When we sent four requests in rapid succession, we received the same response. Looking at the terminal window for our API server, we see that the request counter has only incremented one time, despite our four calls:

After waiting for a minute, we send more requests to the /quote-of-the-minute path.

As expected, our first new request receives a new programming quote as a response. The subsequent two requests receive the same quote again, which is the cached result.

When we check our API server window, we see that the request counter has, again, only incremented one time:

Our Proxy Cache plugin is working exactly as expected!

Additional Use Cases

In our demo, we enabled the Proxy Cache plugin on a specific route. However, we could also enable the plugin on an entire service or on a consumer, which is a specific user of the API. We could even enable the plugin on the combination of a consumer and a route, which narrows the plugin's scope further.

In our demo example, the response to /quote-of-the-minute would be the same for all users sending requests within the one-minute window. If we enabled the plugin at the consumer level instead, with each consumer authenticating with a unique API key or JWT, each user would have their own "quote of the minute" cached, independent of what is cached for other users.
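
As a sketch, consumer-level scoping in the declarative file could look like the following; the consumer alice, her key, and the use of the key-auth plugin for identifying consumers are illustrative assumptions:

```yaml
consumers:
  - username: alice
    keyauth_credentials:
      - key: alice-secret-key

plugins:
  # an authentication plugin so Kong can identify the consumer
  - name: key-auth
    route: quote-route-with-cache
  # proxy-cache scoped to the combination of consumer and route
  - name: proxy-cache
    consumer: alice
    route: quote-route-with-cache
    config:
      strategy: memory
      cache_ttl: 60
```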

Conclusion

Response caching for your microservices is a simple and effective tactic for optimization. By using a reverse proxy to decide whether to handle requests by forwarding for a fresh response or by using the cache, you can reduce the load on your network and your services. With Kong Gateway, getting up and running with response caching is quick and simple with the Proxy Cache plugin.