What is gRPC?

Most APIs today are accessed over the ubiquitous HTTP protocol, and the architectural style used to design them is known as Representational State Transfer (REST). APIs built this way are known as RESTful APIs.

However, if you’ve been working in API development over the last several years, then you have likely heard of gRPC. gRPC is a robust, newer framework for developing APIs and implementing their communication. It’s a CNCF incubating project that aims to provide a modern, open-source framework, modeled after Remote Procedure Calls (RPC), that can run anywhere.

Developed by Google and released as an open-source project in 2015, gRPC is gaining popularity in the microservices world as it offers a simpler and more efficient alternative to REST. It’s based on the work done on Stubby (Google’s internal RPC framework). Many of the world’s largest companies, including Netflix, Cisco, Twitter, and Uber, use gRPC in their systems. 

In this article, you’ll learn what gRPC is, how it works, and some of its advantages and disadvantages. Finally, you’ll learn about the Kong API Gateway, which supports gRPC APIs.

What makes up gRPC?  

To understand gRPC, let’s first talk about RPC. RPC is a method of inter-process communication in which a program calls a procedure in another process as if it were a local function. gRPC is RPC-based and uses HTTP/2 as its transport protocol. Being RPC-based, clients can call any service method of a gRPC API, whether it’s running on the local machine or on a remote server. This makes it a versatile tool that goes beyond the fixed set of HTTP verbs, such as GET and PUT, that RESTful APIs are built around.

For the API interface, gRPC uses Protocol Buffers (protobuf), rather than JSON or XML, as its interface definition language. Developers define message types in a .proto file to specify the data exchanged between the client and the server, and the protobuf compiler (protoc) generates code that serializes those messages into a compact binary format.
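As a small sketch, a .proto file defining the messages for a simple greeting exchange might look like this (the names HelloRequest and HelloReply are illustrative, not from any particular API):

```protobuf
syntax = "proto3";

package greeter;

// Request message sent by the client.
message HelloRequest {
  string name = 1;  // field numbers identify fields in the binary encoding
}

// Response message returned by the server.
message HelloReply {
  string message = 1;
}
```

Running protoc over a file like this generates serialization code, and client and server stubs, in each supported language.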

How gRPC works 

To implement gRPC APIs, you define a contract between a client and a server: a set of services with methods that clients can call. This contract stipulates the procedures clients can call on the service, the parameters that can be passed to those procedures, and the types of data that will be returned. The client and the server then use this contract to generate code in their respective languages for calling the procedures and exchanging data. This generated code covers communication details like message serialization, network calls, and error handling.

As mentioned above, gRPC uses HTTP/2 for transport and Protocol Buffers for message serialization. The client creates local objects—or “stubs”—for the API methods and calls those methods locally. The gRPC runtime sends the client requests to the remote server and receives the responses from it.

gRPC API Service Methods

  1. A unary service method takes one input and returns one output. 
  2. A server streaming service method receives one input from the client and returns a stream of outputs, sending messages back as data becomes available. 
  3. A client streaming service method lets the client send a stream of messages to the server; once the client finishes sending, the server processes the stream and returns a single response. 
  4. Bidirectional streaming service methods simultaneously send and receive data streams in both directions.
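In a .proto file, the four kinds of methods differ only in where the stream keyword appears. A hypothetical service illustrating all four (the service and message names are made up for this example):

```protobuf
syntax = "proto3";

message Request  { string query = 1; }
message Response { string result = 1; }

service ExampleService {
  // 1. Unary: one request, one response.
  rpc GetItem (Request) returns (Response);

  // 2. Server streaming: one request, a stream of responses.
  rpc ListItems (Request) returns (stream Response);

  // 3. Client streaming: a stream of requests, one response.
  rpc UploadItems (stream Request) returns (Response);

  // 4. Bidirectional streaming: streams in both directions.
  rpc Chat (stream Request) returns (stream Response);
}
```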

Each type of method has its own benefits and use cases. For example, unary methods are ideal for simple request/response calls, while server streaming methods suit large result sets or data pushed to clients as it becomes available. Client streaming methods work well for uploads and telemetry, while bidirectional streaming methods are a natural fit for real-time communication. 

Finally, gRPC also makes developing and debugging distributed systems easier—with support for synchronous and asynchronous RPC calls and the ability to generate metadata about the services it exposes.

Synchronous calls wait for the remote server to return a response before continuing. Asynchronous calls return immediately and the response is handled as a separate task. 
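The two calling styles can be sketched with plain Python futures. This is an illustration of the pattern, not actual gRPC stub code, though gRPC’s generated Python stubs expose a similar future-based style for asynchronous unary calls:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def remote_call(x):
    """Stand-in for a remote procedure that takes time to respond."""
    time.sleep(0.1)
    return x * 2

# Synchronous: the caller blocks until the result is available.
result = remote_call(21)

# Asynchronous: the call returns a future immediately; the caller
# keeps working and collects the result later.
with ThreadPoolExecutor() as pool:
    future = pool.submit(remote_call, 21)
    # ... other work could happen here while the call is in flight ...
    async_result = future.result()  # block only when the value is needed

print(result, async_result)
```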

You can use gRPC metadata to describe services and methods. This metadata helps clients find the right services and their methods, and developers can also use it to validate calls. 

gRPC channels multiplex many calls over a single HTTP/2 connection, allowing multiple requests and responses to be in flight at once. This can improve performance by letting the server handle concurrent requests without opening a new connection for each one. 

Why use gRPC?

There are several reasons you may want to use the gRPC framework for developing your APIs.

Broad language support

First of all, gRPC has broad language support. It’s widely supported in most modern languages and frameworks, including Java, Ruby, Go, Node.js, Python, C#, and PHP. As mentioned above, gRPC clients can invoke any defined service method rather than being limited to HTTP verbs like GET and PUT, making it more versatile than traditional RESTful APIs. 

Smaller message size

gRPC messages are smaller than traditional RESTful API messages because the binary message formats—Protocol Buffers—are smaller and faster to parse than text-based formats like JSON. This results in faster transmission between the client and the server. 
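To make the size difference concrete, here is a rough comparison of a JSON encoding against a hand-rolled binary encoding of the same record, using only the standard library. This is not the actual protobuf wire format, just an illustration of text versus binary overhead:

```python
import json
import struct

# A small record: a 32-bit id and a 64-bit timestamp.
record = {"user_id": 12345, "timestamp": 1700000000}

# Text-based encoding: field names and digits are spelled out as characters.
json_bytes = json.dumps(record).encode("utf-8")

# Binary encoding: the same two numbers packed as fixed-width integers
# (4 bytes + 8 bytes = 12 bytes total).
binary_bytes = struct.pack("<Iq", record["user_id"], record["timestamp"])

print(len(json_bytes), len(binary_bytes))  # the binary form is far smaller
```

Real protobuf messages also carry small field tags and use variable-length integers, but the underlying advantage is the same: numbers and structure are encoded as bytes rather than spelled out as text.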

Faster communication

HTTP/2 is more efficient than older protocols like HTTP/1.1—allowing gRPC to reduce network bandwidth usage and decrease latency. Also, since the messages are smaller, they can be transferred more quickly between servers and clients. This also helps to reduce the load on the network and provides a smoother user experience.

Streaming Connection

Another advantage of gRPC is its support for streaming connection mode. Streaming mode sends or receives data in chunks, which can improve performance when the data is too large to send or receive at once. This allows clients to continuously receive data from the server without waiting for the entire response to arrive. As a result, users don’t have to wait until all the data transfer is complete. 

During streaming, the connection between the client and the server is maintained, and HTTP/2’s reliable, ordered delivery keeps the stream intact in transit. gRPC streaming is ideal for real-time applications like chat or gaming. 
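The chunked-delivery idea can be sketched with a plain Python generator: the consumer starts processing the first chunk without waiting for the whole payload. A gRPC server-streaming response behaves like this iterator, except that the chunks arrive over the network:

```python
def stream_chunks(data, chunk_size):
    """Yield data in fixed-size chunks, like a server-streaming response."""
    for start in range(0, len(data), chunk_size):
        yield data[start:start + chunk_size]

received = []
for chunk in stream_chunks(b"a large payload that arrives piece by piece", 8):
    # Each chunk can be processed as soon as it arrives,
    # before the rest of the stream is complete.
    received.append(chunk)

reassembled = b"".join(received)
print(len(received), reassembled[:7])
```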

Pluggable Support

Finally, gRPC supports plugging in load balancing, tracing, health checking, and authentication. This makes it easy to set up and manage high-performance systems. gRPC is modular, so it’s simple to set up and configure its different feature sets.

What are the drawbacks of gRPC?

Overall, gRPC is a powerful tool that offers many benefits for your applications. However, there are some drawbacks as well. One disadvantage is that its binary messages aren’t human-readable, which can make application errors harder to debug than plain-text JSON. Also, because it’s newer than REST, third-party tooling and vendor support are still catching up. For many use cases, though, the advantages far outweigh these drawbacks. 

Conclusion

Today, you learned about the core components of the gRPC framework and saw how it works. Overall, gRPC offers a fast, reliable, and efficient way to handle remote procedure calls. If you’re looking for a faster and more efficient way to build web services, then consider gRPC. 

Kong, the world’s most popular API Gateway, has native support for gRPC, making it an excellent choice for modern organizations that want to try newer API technologies. As a robust API Gateway, Kong provides many features, including authentication, rate limiting, logging, and monitoring. It can secure, manage, and extend gRPC-based APIs, and its rich set of plugins allows custom operations like gRPC service access via REST or browser-based clients. 

To see how Kong can help manage your gRPC API fleet, request a personalized demo.