The Evolution of APIs: From the Cloud Age and Beyond (Part 1)

December 3, 2021

We live in a digital economy where Application Programming Interfaces (APIs) are foundational elements for businesses to operate and grow. As rightly outlined in a Gartner article, APIs interconnect individual systems that contain data about people, businesses and things, enable transactions, and create new products/services and business models. 

The popularity of APIs has grown significantly in the last decade or so, but the history of APIs stretches back much further. Our two-part eBook series provides a brief history of APIs from the early computing era to today. 

In Part One of our eBook series, we traced the evolution of APIs from early computing up through the early stages of the internet. Download Part Two of this eBook today, where we consider how the shape and development of APIs have changed as we enter the cloud age of the internet. 

Modern APIs of the Cloud Age

Cloud computing has revolutionized API development and deployment. Cloud vendors offer various cloud services through API endpoints. These API endpoints can be accessed through the browser, command-line interface (CLI) tools and SDKs.

Common Types of APIs

In the cloud age, the most common architectures for APIs are RESTful APIs, gRPC and GraphQL. We will discuss each of these API types in brief in the sections below. 

RESTful APIs

Representational State Transfer (REST) arrived over 20 years ago and was broadly adopted by developers who found SOAP cumbersome to use. APIs that adhere to REST are known as RESTful APIs. To be RESTful, an API must satisfy the following constraints:

  • Uniform Interface: Uniform Interface defines the interface between the client and the server. Each resource exposed by the API needs to be identifiable by a resource URI that the client can call. The API response should return a uniform resource representation (such as in a JSON or XML format) to the client, and that representation must carry enough information for the client to modify or delete the resource on the server.
  • Client-server: The client and the server are independent of each other and unaware of one another’s implementation details.
  • Stateless: The API server will not host any session or state details about a client request.
  • Cacheable: The client can cache a response from the API, and the API server adds the Expires header to its response, letting the client know whether the cached data is valid or stale.
  • Layered system: The client is unaware if it is directly connected to the API server or if it is going through multiple layers of applications (such as load balancing, authentication, transformation).
Figure: Layered system

  • Code-on-demand (optional): Allows a REST API endpoint to return application code (such as JavaScript) to the client.

Although RESTful APIs support multiple message formats (such as HTML, YAML and XML), they predominantly use JSON documents, which represent data as nested objects of key-value pairs and arrays.
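To make the uniform interface constraint concrete, here is a hypothetical JSON response body for a `GET /books/42` call. The endpoint and fields are invented for illustration; the point is that the representation carries the URIs the client needs to act on the resource.

```python
import json

# A hypothetical RESTful response for GET /books/42. Besides the
# resource's data, it includes the URIs the client needs in order
# to modify or delete the resource (the uniform interface constraint).
response = {
    "id": 42,
    "title": "Designing Web APIs",
    "links": {
        "self": "/books/42",      # target for GET, PUT, DELETE
        "author": "/authors/7",   # a related resource
    },
}

body = json.dumps(response)
print(body)
```

The client never needs to construct URLs from out-of-band knowledge; it follows the links the server provides.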

OpenAPI

OpenAPI is a formal specification for how to define the structure and syntax of a RESTful API. This interface-describing document is both human and machine-readable, which yields the following benefits:

  • Portable format
  • Increases collaboration between development teams
  • Enables automated application development by code-generators
  • Helps with automated test case generation
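As a sketch of what such a machine-readable description looks like, the snippet below builds a minimal, hypothetical OpenAPI 3.0 document for a single endpoint as a Python dict and serializes it to JSON. The top-level field names (`openapi`, `info`, `paths`) follow the OpenAPI specification; the endpoint itself is invented for illustration.

```python
import json

# A minimal, hypothetical OpenAPI 3.0 document describing one
# endpoint, GET /books/{id}. Code generators and test tools can
# consume exactly this kind of structure.
spec = {
    "openapi": "3.0.3",
    "info": {"title": "Books API", "version": "1.0.0"},
    "paths": {
        "/books/{id}": {
            "get": {
                "summary": "Fetch a single book",
                "parameters": [{
                    "name": "id",
                    "in": "path",
                    "required": True,
                    "schema": {"type": "integer"},
                }],
                "responses": {
                    "200": {"description": "The requested book"},
                },
            }
        }
    },
}

print(json.dumps(spec, indent=2))
```

Because the format is portable, the same document can drive documentation sites, client SDK generators and contract tests.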

gRPC

Google introduced another framework for APIs called gRPC, which uses HTTP/2. A gRPC client can directly call a service method on a gRPC server. The gRPC server implements the service interface—consisting of its methods and parameters and the returned data types—and answers client calls.

In the “The Evolution of APIs: From RPC to SOAP and XML” eBook, we discussed how Remote Procedure Call (RPC) was one of the earliest means of communication between applications running on remote machines. gRPC is a framework for creating RPC-based APIs. It builds on the RPC model and takes it a step further by running over HTTP/2 and using Protocol Buffers as its default message format.

Compared to RESTful APIs, gRPC has benefits that include smaller message sizes, faster communication and streaming connections (client-side, server-side and bidirectional).
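The defining shape of gRPC is that the client calls a method on a stub as if the service were local. The sketch below mimics that pattern in plain Python, with no HTTP/2, Protocol Buffers or code generation; the service and method names are invented, and in real gRPC the stub is generated from a `.proto` definition and marshals the call over the network.

```python
# Toy illustration of the RPC call pattern gRPC follows.

class GreeterService:
    """Server side: implements the service interface's methods."""
    def say_hello(self, name: str) -> str:
        return f"Hello, {name}"

class GreeterStub:
    """Client side: forwards method calls to the service. In real
    gRPC, this stub serializes the call and sends it over HTTP/2."""
    def __init__(self, service: GreeterService):
        self._service = service  # stands in for the network channel
    def say_hello(self, name: str) -> str:
        return self._service.say_hello(name)

stub = GreeterStub(GreeterService())
print(stub.say_hello("world"))  # -> Hello, world
```

Contrast this with REST: rather than requesting a resource at a URI, the client invokes a named operation with typed parameters and a typed return value.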

GraphQL

GraphQL is a query language and runtime that allows users to query APIs to return the exact data they need. With a RESTful API, clients make multiple calls for data with different parameters appended to the URL. In contrast, GraphQL allows developers to create queries that can fetch all the data needed from multiple sources in a single call.
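The toy example below illustrates that single-call field selection. The data, the requested fields and the resolver are all invented for illustration (a real server would use a GraphQL library and a query string), but the shape is the same: instead of two REST calls such as `GET /users/1` and `GET /users/1/orders`, the client names exactly the fields it wants and gets them back in one response.

```python
# Backing data that, under REST, would live behind two endpoints.
data = {
    "user": {"id": 1, "name": "Ada", "email": "ada@example.com"},
    "orders": [{"id": 10, "total": 25.0}, {"id": 11, "total": 9.5}],
}

# The client's selection: only user.name and each order's id.
requested = {"user": ["name"], "orders": ["id"]}

def resolve(source, selection):
    """Return only the requested fields from each source object."""
    result = {}
    for key, fields in selection.items():
        value = source[key]
        if isinstance(value, list):
            result[key] = [{f: item[f] for f in fields} for item in value]
        else:
            result[key] = {f: value[f] for f in fields}
    return result

print(resolve(data, requested))
# -> {'user': {'name': 'Ada'}, 'orders': [{'id': 10}, {'id': 11}]}
```

No over-fetching (the email and totals stay home) and no under-fetching (no second round trip for the orders).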

Loosely Coupled APIs

Because APIs function as standalone pieces of software, not dependent on the rest of the application’s functionality, API design evolved toward loose coupling. This approach ensures an API service can be redesigned, rewritten and redeployed without the risk of breaking other services. Strategies for making an API service loosely coupled include:

  • Employ the use of message queues, which are software components that sit between two applications and help one application communicate with the other asynchronously. For example, API A can send its request to a message queue and then continue with its work. Meanwhile, API B polls the message queue periodically for messages. When it finds the message from API A, API B performs the requested function. Similarly, API B sends the function result to the message queue, and API A can retrieve that result at a later time.
Figure: Two Loosely Coupled APIs Using Message Queues

  • Delegate the integration between APIs to an API middleware, which ensures the APIs can talk to one another by facilitating aspects such as connectivity logic, translation between message formats and protocols, and authentication/authorization.
  • Build fine-grained APIs. In a coarse-grained application, application functionality spreads across only a few APIs. Instead, these APIs can be broken down further, with each subsequently smaller API performing only a single function. Smaller APIs become easier to develop, test, manage, deploy and upgrade.
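The message-queue strategy above can be sketched with Python's in-process `queue` and `threading` modules; a production system would use a broker such as RabbitMQ or SQS instead, and the payloads here are invented. Service A enqueues a request and keeps working, while service B consumes it when ready and posts the result back.

```python
import queue
import threading

requests = queue.Queue()   # API A -> API B
results = queue.Queue()    # API B -> API A

def service_b():
    """Consumer: takes a request off the queue and does the work."""
    task = requests.get()        # blocks until a message arrives
    results.put(task["x"] * 2)   # publish the result asynchronously

worker = threading.Thread(target=service_b)
worker.start()

# Service A fires the request, is free to do other work, and
# retrieves the result later.
requests.put({"x": 21})
worker.join()
answer = results.get()
print(answer)  # -> 42
```

Neither side calls the other directly, so either service can be redeployed or even be temporarily down without breaking its counterpart.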

Microservices

Microservices allow a complex application to be broken down into small, independent “services.” Microservices can be written in any language and deployed anywhere, and their functionalities are exposed as APIs. Callers of those APIs might be end-user clients or even other microservices.

Figure: Example of a Microservice Architecture

What makes microservices unique is that they are loosely coupled and independent. In other words, you can change the program code and internal workings of an API within a microservice without touching the rest of the application. And because microservices are loosely coupled, a single microservice experiencing a spike in load or a failure won’t bring down the entire application.

Serverless Functions

A serverless function is a standalone piece of code that a cloud provider runs on its managed environment, such that the customer (the developer) does not have to worry about infrastructure or scaling. As far as the developer is concerned, there’s no server involved—it’s serverless.
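In practice, the developer deploys little more than a single handler function. The sketch below is a hypothetical handler in the style of AWS Lambda (the event fields and return shape are illustrative, not any provider's exact contract); the platform invokes it per request and manages all the servers and scaling.

```python
import json

def handler(event, context=None):
    """Hypothetical serverless handler: everything the developer
    deploys. The platform supplies the event and runs the code."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}"}),
    }

# Invoking locally, the way the platform would on each request:
print(handler({"name": "cloud"}))
```

Note there is no server setup, port binding or process management in the code at all; that is the "serverless" part.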

Service Mesh

An application can comprise microservices running on physical machines, on virtual servers on-premises or in the cloud, in Docker containers inside Kubernetes pods, or as serverless functions. At the network level, these microservices might connect over direct links, VPNs or trusted virtual private clouds (VPCs). To manage the resulting complexities of network performance, discoverability and connectivity, the service mesh emerged.

A service mesh is a dedicated infrastructure layer built into an application to enable its microservices to communicate using proxies. A service mesh takes the service-to-service communication logic from the microservice’s code and moves it to its network proxy. 

The proxy runs in the same infrastructure layer as the service and handles message routing to other services. This proxy is often called a sidecar because it runs side by side with the service. The interconnected sidecar proxies from many microservices form the mesh. 
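The division of responsibility can be sketched as follows. In a real mesh (for example, Istio using Envoy proxies) the sidecar is a separate process intercepting network traffic; this in-process toy, with invented service and method names, only shows the split: business logic in the service, cross-cutting logic (here, a call counter standing in for observability) in the proxy.

```python
class InventoryService:
    """Business logic only -- no routing, retries or telemetry."""
    def stock(self, item: str) -> int:
        return {"widget": 3}.get(item, 0)

class SidecarProxy:
    """Intercepts every call to its service and records metrics,
    the way a sidecar handles service-to-service communication."""
    def __init__(self, service):
        self._service = service
        self.calls = 0
    def call(self, method: str, *args):
        self.calls += 1  # observability lives in the proxy
        return getattr(self._service, method)(*args)

proxy = SidecarProxy(InventoryService())
print(proxy.call("stock", "widget"))  # -> 3
print(proxy.calls)                    # -> 1
```

Because the communication logic sits in the proxy, it can be upgraded or reconfigured mesh-wide without touching any service's code.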

Figure: A Service Mesh Running on Hybrid Cloud

The service mesh offers many advantages to microservices, including observability, secure connections and automated failover.

Conclusion

In this blog post, we’ve looked in particular at the evolution of APIs during this modern cloud age. We considered different types of APIs, the recent explosion of microservices development and the current practice of globally distributed APIs requiring robust east-west connectivity. 

The reality is that today’s APIs are very different from those of the early internet age (the 2000s), and these changes shape how we build, deploy and manage the APIs of today and tomorrow. In our upcoming blog, we’ll discuss the future of APIs and how you can prepare for the next big change in the world of APIs. To learn more, download the “The Evolution of APIs: From the Cloud Age and Beyond” eBook today!