How to Manage your gRPC Services with Kong

With the 1.3 release, Kong is now able to natively manage and proxy gRPC services. In this blog post, we’ll explain what gRPC is and how to manage your gRPC services with Kong.

What is gRPC?

gRPC is a remote procedure call (RPC) framework initially developed by Google circa 2015 that has seen growing adoption in recent years. Based on HTTP/2 for transport and using Protocol Buffers (Protobuf) as its Interface Definition Language (IDL), gRPC offers a number of capabilities that traditional REST APIs struggle with, such as bi-directional streaming and efficient binary encoding.
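
On the wire, each gRPC call is an ordinary HTTP/2 POST whose request path encodes the fully qualified method name and whose body carries the Protobuf-encoded message. Roughly sketched (an illustration, not literal output), a call to the SayHello method used later in this post looks like:

POST /hello.HelloService/SayHello HTTP/2
content-type: application/grpc
te: trailers

<length-prefixed, Protobuf-encoded HelloRequest>

This plain HTTP/2 shape is what allows an HTTP-aware proxy to route and observe individual gRPC methods.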

While Kong has supported TCP streams since version 1.0, and as such can proxy any protocol built on top of TCP/TLS, we felt native gRPC support would let a growing user base leverage Kong to manage their REST and gRPC services uniformly, including reusing some of the same Kong plugins they already use with their REST APIs.

Native gRPC Support

What follows is a step-by-step tutorial on how to set up Kong to proxy gRPC services, demonstrating two possible scenarios. In the first scenario, a single Route entry in Kong matches all gRPC methods from a service. In the second, we create per-method Routes, which makes it possible, for example, to apply different plugins to specific gRPC methods.

Before starting, install Kong Gateway if you haven’t already.

As gRPC uses HTTP/2 for transport, it is necessary to enable HTTP/2 proxy listeners in Kong. To do so, add the following property to your Kong configuration:

proxy_listen = 0.0.0.0:9080 http2, 0.0.0.0:9081 http2 ssl

Alternatively, you can configure the proxy listeners with environment variables:

KONG_PROXY_LISTEN="0.0.0.0:9080 http2, 0.0.0.0:9081 http2 ssl" bin/kong restart

In this guide, we will assume Kong is listening for HTTP/2 proxy requests on port 9080 and for secure HTTP/2 on port 9081.

We will use the gRPCurl command-line client and the grpcbin collection of mock gRPC services.
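
If you don’t have a gRPC server at hand, you can run grpcbin locally with Docker. The image name and port numbers below are assumptions based on the grpcbin project (check its README); the container’s plaintext gRPC port is published as 15002 to match the rest of this guide:

$ docker run -d --name grpcbin -p 15002:9000 moul/grpcbin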

Case 1: Single Service and Route

We begin with a simple setup: a single gRPC Service and Route; all gRPC requests sent to Kong’s proxy port will match the same Route.

Issue the following request to create a gRPC Service (assuming your gRPC server is listening on localhost, port 15002):

$ curl -XPOST localhost:8001/services \
  --data name=grpc \
  --data protocol=grpc \
  --data host=localhost \
  --data port=15002
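
To confirm the Service was created, and to see the defaults Kong filled in (timeouts, retries, and so on), you can read it back from the Admin API:

$ curl localhost:8001/services/grpc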

Issue the following request to create a gRPC Route:

$ curl -XPOST localhost:8001/services/grpc/routes \
  --data protocols=grpc \
  --data name=catch-all \
  --data paths=/

Using gRPCurl, issue the following gRPC request:

$ grpcurl -v -d '{"greeting": "Kong 1.3!"}' -plaintext localhost:9080 hello.HelloService.SayHello

The response should resemble the following:

Resolved method descriptor:
rpc SayHello ( .hello.HelloRequest ) returns ( .hello.HelloResponse );
Request metadata to send:
(empty)
Response headers received:
content-type: application/grpc
date: Tue, 16 Jul 2019 21:37:36 GMT
server: openresty/1.15.8.1
via: kong/1.2.1
x-kong-proxy-latency: 0
x-kong-upstream-latency: 0
Response contents:
{
  "reply": "hello Kong 1.3!"
}
Response trailers received:
(empty)
Sent 1 request and received 1 response

Notice that Kong response headers, such as via and x-kong-proxy-latency, were inserted into the response.
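
The examples in this guide use the plaintext HTTP/2 listener on port 9080. If you also want to exercise the TLS listener on port 9081, keep in mind that a Route only matches TLS gRPC traffic when its protocols include grpcs, which the catch-all Route above does not yet list. A sketch, using grpcurl’s -insecure flag to skip verification of Kong’s default self-signed certificate:

$ curl -XPATCH localhost:8001/routes/catch-all \
  --data protocols[]=grpc \
  --data protocols[]=grpcs

$ grpcurl -v -insecure -d '{"greeting": "Kong 1.3!"}' \
  localhost:9081 hello.HelloService.SayHello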

Case 2: Single Service, Multiple Routes

Now we move on to a more complex use case, where requests to separate gRPC methods map to different Routes in Kong, allowing for more flexible use of Kong plugins.

Building on top of the previous example, let’s create a few more Routes for individual gRPC methods. The gRPC "HelloService" used in this example exposes several methods, as we can see in its Protobuf definition (obtained from the grpcbin repository):

syntax = "proto2";
package hello;
service HelloService {
  rpc SayHello(HelloRequest) returns (HelloResponse);
  rpc LotsOfReplies(HelloRequest) returns (stream HelloResponse);
  rpc LotsOfGreetings(stream HelloRequest) returns (HelloResponse);
  rpc BidiHello(stream HelloRequest) returns (stream HelloResponse);
}
message HelloRequest {
  optional string greeting = 1;
}
message HelloResponse {
  required string reply = 1;
}

We will create individual Routes for its "SayHello" and "LotsOfReplies" methods.

Create a Route for "SayHello":

$ curl -XPOST localhost:8001/services/grpc/routes \
  --data protocols=grpc \
  --data paths=/hello.HelloService/SayHello \
  --data name=say-hello

Create a Route for "LotsOfReplies":

$ curl -XPOST localhost:8001/services/grpc/routes \
  --data protocols=grpc \
  --data paths=/hello.HelloService/LotsOfReplies \
  --data name=lots-of-replies

With this setup, gRPC requests to the "SayHello" method will match the first Route, while requests to "LotsOfReplies" will match the second.
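
These per-method Routes are what enable per-method plugin configuration. For example, since "LotsOfReplies" streams multiple responses per request, you might want to rate-limit it independently of "SayHello". A sketch using the rate-limiting plugin (the limit values are arbitrary; check that your Kong version enables this plugin for gRPC Routes):

$ curl -XPOST localhost:8001/routes/lots-of-replies/plugins \
  --data name=rate-limiting \
  --data config.minute=5 \
  --data config.policy=local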

Issue a gRPC request to the "SayHello" method:

$ grpcurl -v -d '{"greeting": "Kong 1.3!"}' \
  -H 'kong-debug: 1' -plaintext \
  localhost:9080 hello.HelloService.SayHello

(Notice we are sending the kong-debug header, which causes Kong to insert debugging information as response headers.)

The response should look like:

Resolved method descriptor:
rpc SayHello ( .hello.HelloRequest ) returns ( .hello.HelloResponse );
Request metadata to send:
kong-debug: 1
Response headers received:
content-type: application/grpc
date: Tue, 16 Jul 2019 21:57:00 GMT
kong-route-id: 390ef3d1-d092-4401-99ca-0b4e42453d97
kong-service-id: d82736b7-a4fd-4530-b575-c68d94c3493a
kong-service-name: grpc
server: openresty/1.15.8.1
via: kong/1.2.1
x-kong-proxy-latency: 0
x-kong-upstream-latency: 0
Response contents:
{
  "reply": "hello Kong 1.3!"
}
Response trailers received:
(empty)
Sent 1 request and received 1 response

Notice the kong-route-id response header refers to the "say-hello" Route we just created.

Similarly, let’s issue a request to the "LotsOfReplies" gRPC method:

$ grpcurl -v -d '{"greeting": "Kong 1.3!"}' \
  -H 'kong-debug: 1' -plaintext \
  localhost:9080 hello.HelloService.LotsOfReplies

The response should look like the following:

Resolved method descriptor:
rpc LotsOfReplies ( .hello.HelloRequest ) returns ( stream .hello.HelloResponse );
Request metadata to send:
kong-debug: 1
Response headers received:
content-type: application/grpc
date: Tue, 30 Jul 2019 22:21:40 GMT
kong-route-id: 133659bb-7e88-4ac5-b177-bc04b3974c87
kong-service-id: 31a87674-f984-4f75-8abc-85da478e204f
kong-service-name: grpc
server: openresty/1.15.8.1
via: kong/1.2.1
x-kong-proxy-latency: 14
x-kong-upstream-latency: 0
Response contents:
{
  "reply": "hello Kong 1.3!"
}
Response contents:
{
  "reply": "hello Kong 1.3!"
}
Response contents:
{
  "reply": "hello Kong 1.3!"
}
Response contents:
{
  "reply": "hello Kong 1.3!"
}
Response contents:
{
  "reply": "hello Kong 1.3!"
}
Response contents:
{
  "reply": "hello Kong 1.3!"
}
Response contents:
{
  "reply": "hello Kong 1.3!"
}
Response contents:
{
  "reply": "hello Kong 1.3!"
}
Response contents:
{
  "reply": "hello Kong 1.3!"
}
Response contents:
{
  "reply": "hello Kong 1.3!"
}
Response trailers received:
(empty)
Sent 1 request and received 10 responses

Notice that the kong-route-id response header now carries a different value, referring to the "lots-of-replies" Route.

Note: gRPC reflection requests will still be routed to the first Route we created (the "catch-all" Route), since such requests match neither the "say-hello" nor the "lots-of-replies" Route.
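
You can see this by listing the server’s methods through Kong; gRPCurl’s list command uses the grpc.reflection.v1alpha.ServerReflection service, whose path only the catch-all Route matches:

$ grpcurl -plaintext localhost:9080 list hello.HelloService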

Logging and Observability Plugins

As we mentioned earlier, Kong 1.3 gRPC support is compatible with logging and observability plugins. For example, let’s try out the File Log and Zipkin plugins with gRPC.

File Log

Issue the following request to enable the File Log plugin on the "say-hello" Route:

$ curl -X POST localhost:8001/routes/say-hello/plugins \
  --data name=file-log \
  --data config.path=grpc-say-hello.log

Follow the output of the log as gRPC requests are made to "SayHello":

$ tail -f grpc-say-hello.log
{"latencies":{"request":8,"kong":5,"proxy":3},"service":{"host":"localhost","created_at":1564527408,"connect_timeout":60000,"id":"74a95d95-fbe4-4ddb-a448-b8faf07ece4c","protocol":"grpc","name":"grpc","read_timeout":60000,"port":15002,"updated_at":1564527408,"write_timeout":60000,"retries":5},"request":{"querystring":{},"size":"46","uri":"/hello.HelloService/SayHello","url":"http://localhost:9080/hello.HelloService/SayHello","headers":{"host":"localhost:9080","content-type":"application/grpc","kong-debug":"1","user-agent":"grpc-go/1.20.0-dev","te":"trailers"},"method":"POST"},"client_ip":"127.0.0.1","tries":[{"balancer_latency":0,"port":15002,"balancer_start":1564527732522,"ip":"127.0.0.1"}],"response":{"headers":{"kong-route-id":"e49f2df9-3e8e-4bdb-8ce6-2c505eac4ab6","content-type":"application/grpc","connection":"close","kong-service-name":"grpc","kong-service-id":"74a95d95-fbe4-4ddb-a448-b8faf07ece4c","kong-route-name":"say-hello","via":"kong/1.2.1","x-kong-proxy-latency":"5","x-kong-upstream-latency":"3"},"status":200,"size":"298"},"route":{"id":"e49f2df9-3e8e-4bdb-8ce6-2c505eac4ab6","updated_at":1564527431,"protocols":["grpc"],"created_at":1564527431,"service":{"id":"74a95d95-fbe4-4ddb-a448-b8faf07ece4c"},"name":"say-hello","preserve_host":false,"regex_priority":0,"strip_path":false,"paths":["/hello.HelloService/SayHello"],"https_redirect_status_code":426},"started_at":1564527732516}
{"latencies":{"request":3,"kong":1,"proxy":1},"service":{"host":"localhost","created_at":1564527408,"connect_timeout":60000,"id":"74a95d95-fbe4-4ddb-a448-b8faf07ece4c","protocol":"grpc","name":"grpc","read_timeout":60000,"port":15002,"updated_at":1564527408,"write_timeout":60000,"retries":5},"request":{"querystring":{},"size":"46","uri":"/hello.HelloService/SayHello","url":"http://localhost:9080/hello.HelloService/SayHello","headers":{"host":"localhost:9080","content-type":"application/grpc","kong-debug":"1","user-agent":"grpc-go/1.20.0-dev","te":"trailers"},"method":"POST"},"client_ip":"127.0.0.1","tries":[{"balancer_latency":0,"port":15002,"balancer_start":1564527733555,"ip":"127.0.0.1"}],"response":{"headers":{"kong-route-id":"e49f2df9-3e8e-4bdb-8ce6-2c505eac4ab6","content-type":"application/grpc","connection":"close","kong-service-name":"grpc","kong-service-id":"74a95d95-fbe4-4ddb-a448-b8faf07ece4c","kong-route-name":"say-hello","via":"kong/1.2.1","x-kong-proxy-latency":"1","x-kong-upstream-latency":"1"},"status":200,"size":"298"},"route":{"id":"e49f2df9-3e8e-4bdb-8ce6-2c505eac4ab6","updated_at":1564527431,"protocols":["grpc"],"created_at":1564527431,"service":{"id":"74a95d95-fbe4-4ddb-a448-b8faf07ece4c"},"name":"say-hello","preserve_host":false,"regex_priority":0,"strip_path":false,"paths":["/hello.HelloService/SayHello"],"https_redirect_status_code":426},"started_at":1564527733554}

Notice the gRPC requests were logged with information such as the request URI, HTTP method, and latencies.
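
Since the plugin writes one JSON object per line, it is easy to pull out just the fields you care about with a tool like jq (assuming it is installed):

$ tail -f grpc-say-hello.log | jq '{uri: .request.uri, status: .response.status, latencies: .latencies}'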

Zipkin

Start a Zipkin server:

$ docker run -d --name zipkin -p 9411:9411 openzipkin/zipkin

Enable the Zipkin plugin on the grpc Service:

$ curl -X POST localhost:8001/services/grpc/plugins \
    --data name=zipkin \
    --data config.http_endpoint=http://127.0.0.1:9411/api/v2/spans \
    --data config.sample_ratio=1

As requests are proxied, new spans are sent to the Zipkin server and can be visualized through the Zipkin web UI, available by default at http://localhost:9411/zipkin. Clicking "Find Traces" lists all traces matching the search criteria; a trace can be expanded by clicking on it, and individual spans can be expanded further for details. Notice that, in this case, one of the spans corresponds to a gRPC reflection request.
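
If you prefer the command line, Zipkin’s standard v2 HTTP API offers another way to confirm that spans are arriving; for example, by listing the service names that have reported spans:

$ curl -s http://localhost:9411/api/v2/services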

What’s Next for gRPC Support?

Future Kong releases will include support for natively handling Protobuf data, extending gRPC compatibility to more plugins, such as the request and response transformers.

Have questions or want to stay in touch with the Kong community? Join us wherever you hang out:

Star us on GitHub

🐦 Follow us on Twitter

🌎 Join the Kong Community

🍻 Join our Meetups

❓ Ask and answer questions on Kong Nation

💯 Apply to become a Kong Champion

 
