To work and live in today’s digital world, we depend on interconnected applications. These applications may be massive and highly complex, but they are constructed from reusable building blocks: Application Programming Interfaces, or APIs.
API adoption is on the rise across all industries. However, APIs aren’t new. They came about from the natural evolution of writing computer software. Understanding the origin and evolution of APIs is foundational to your ability to thrive as a software architect, application developer or IT decision-maker.
In this two-part blog post series, we’ll trace the evolution of APIs from early computing up through the early stages of the internet. In this post, we will look at APIs during the pre-internet computing era as well as during the PC era.
APIs: Building Blocks of Modern Applications
APIs are pieces of software that allow computer programs running on the same computer or on different computers connected over a network to communicate with one another.
An API connects a calling application (the client) and a called application (the service). That “service” might be a web server, a database server or even a monolithic application. The client is unconcerned with the details of the service’s implementation. Instead, the client simply needs to know how to communicate with the API.
The client sends a request to the API endpoint. The API may authenticate the request and perform some additional processing, and then it passes the request to the service. The service performs some operation—often this is fetching or manipulating data—and then returns a response to the API, which sends a response back to the client.
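The flow above can be sketched in a few lines of Python. This is a minimal in-process illustration, not a real network call: the token, operation name and inventory data are all hypothetical.

```python
def service(operation: str) -> dict:
    """The called application: performs the actual work."""
    inventory = {"widgets": 42}
    if operation == "get_widgets":
        return {"status": 200, "data": inventory["widgets"]}
    return {"status": 404, "data": None}

def api(request: dict) -> dict:
    """The API layer: authenticates the request, then forwards it."""
    if request.get("token") != "secret-token":   # authentication check
        return {"status": 401, "data": None}
    return service(request["operation"])          # pass request to the service

# The client knows only the API's contract, not the service internals.
response = api({"token": "secret-token", "operation": "get_widgets"})
print(response)  # {'status': 200, 'data': 42}
```

Note that the client never touches `inventory` directly; it sees only the response the API chooses to return.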
Common API Features
APIs have a standard feature known as a defined interface. This interface includes:
- A URL for reaching the API, which includes a network protocol (for example, HTTP or HTTPS), the hostname and the resource path
- The actions that an API can perform
- A message format for client requests and API responses
- Security requirements to communicate with the API
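The pieces of a defined interface can be captured as plain data. The URL, actions and fields below are hypothetical, but they mirror the bullet list above: protocol, hostname and resource path, the available actions, the message format and the security requirements.

```python
from urllib.parse import urlparse

# A hypothetical defined interface, written down as data.
interface = {
    "url": "https://api.example.com/v1/orders",   # protocol + hostname + path
    "actions": ["GET", "POST"],                   # operations the API performs
    "format": "application/json",                 # request/response message format
    "security": "Bearer token in Authorization header",
}

# The URL alone already encodes three parts of the interface.
parts = urlparse(interface["url"])
print(parts.scheme, parts.hostname, parts.path)
# https api.example.com /v1/orders
```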
An API encapsulates the implementation of the service underneath, meaning developers consuming an API don’t need to know its inner workings. A developer doesn’t need to know whether that API calls other APIs behind the scenes or how data is handled internally.
Because of this encapsulation, the implementation of an API is language-independent, such that the choice of implementation language should not affect the consumer’s ability to invoke the API.
APIs are typically protected by layered security measures: network-level protections such as firewalls or an API gateway, encryption for data in transit (TLS, the successor to SSL), and application-level measures such as input validation and authentication.
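Input validation is the simplest of these measures to show in code. This is a hedged sketch, not a real framework's API: the field names and rules are invented for illustration.

```python
def validate_request(payload: dict) -> list[str]:
    """Reject malformed input before it ever reaches the service.

    Returns a list of error messages; an empty list means the
    payload passed validation.
    """
    errors = []
    if "item_id" not in payload:
        errors.append("item_id is required")
    quantity = payload.get("quantity")
    if not isinstance(quantity, int) or quantity < 1:
        errors.append("quantity must be a positive integer")
    return errors

print(validate_request({"item_id": "A1", "quantity": 3}))  # []
print(validate_request({"quantity": -2}))                  # two errors
```

In a real deployment this kind of check usually runs at the gateway or in middleware, so every endpoint gets it for free.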
Why Use APIs?
The use of APIs brings several benefits. One clear benefit is reduced time and effort in software development: developers can build new features on top of services and data that already exist. And because APIs are independent of any particular language, framework or platform, a service built in one ecosystem can be consumed from any other.
API reuse allows developers to offload certain repetitive functions to third-party service providers. For example, user authentication can be achieved by using APIs from Google or Facebook. E-commerce applications can offload payment flows by using APIs from PayPal or Stripe. Leveraging third-party technologies allows businesses to focus on their core product areas, decreasing their time to market.
Some companies develop APIs to monetize them, making them available to paying customers. Other companies make their APIs available to increase brand awareness.
The benefits of developing and using APIs seem clear, but how did the modern computing world stumble upon APIs to begin with? The roots of the API predate the modern personal computer era, stretching back to the early days of computing.
The Era of Pre-internet Computing
The early days of computing saw the progression from the large, room-filling mainframe computers of the 1960s to the PC era of the early 1980s. The evolution of the API followed the evolution of computer programming.
The Pre-PC Era
In the 1960s and 1970s, computer programs were large, monolithic bodies of code, typically keyed onto punch cards or stored on magnetic media. As operating systems evolved, so did programming languages. From the earliest programming languages, computing began to see the use of subroutines and functions.
Subroutines broke up code into manageable chunks, and those chunks could be called from the main program. This led to collaboration, as programmers could work together on a program, calling subroutines written by other programmers. Soon after, we had functions, which could accept one or more arguments (input data) and produce a predictable output.
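The idea is the same one every modern language inherits: a function takes arguments and produces a predictable output, so another programmer can call it without ever reading its body. A trivial sketch in Python:

```python
def fahrenheit_to_celsius(temp_f: float) -> float:
    """A reusable 'subroutine': one input, one predictable output."""
    return (temp_f - 32) * 5 / 9

# Any part of the program, or another programmer's code, can call it.
print(fahrenheit_to_celsius(212.0))  # 100.0
print(fahrenheit_to_celsius(32.0))   # 0.0
```

The caller's relationship to this function is the same as a client's relationship to an API: a known contract, hidden implementation.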
Operating systems began to expose scripting capabilities, which system administrators could use to automate common workflows. Though not full-fledged programming languages, scripting languages offered the core elements of high-level programming: looping, conditional branching and handling of user input.
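Those admin scripts were usually written in a shell language; the sketch below uses Python as a stand-in to show the same three elements at work on a hypothetical list of log files.

```python
# Hypothetical input: filenames an admin script might be handed,
# including a blank entry that must be filtered out.
log_files = ["app.log", "db.log", ""]

archived = []
for name in log_files:        # looping
    if not name:              # conditional branching: skip bad input
        continue
    archived.append(name + ".bak")

print(archived)  # ['app.log.bak', 'db.log.bak']
```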
The PC Era
With the PC era of the early 1980s and the proliferation of high-level programming languages, commercial software development took off, and the Windows operating system came onto the scene.
Windows was one of the forerunners of the modern, modularized, reusable programming paradigm. It introduced the use of the dynamic link library (DLL) that encapsulated different program modules. Windows-hosted applications could load or unload these DLLs into memory as they ran.
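The same load-on-demand idea survives today on every platform. As a sketch, Python's `ctypes` can load a shared library at runtime and call a function from it; this assumes a Unix-like system, where the analogue of a Windows DLL is a shared object such as the C math library.

```python
import ctypes
import ctypes.util

# Locate and load the standard C math library at runtime: the same
# load-into-memory-on-demand idea as a Windows DLL. On Windows you
# would load a .dll file instead.
libm = ctypes.CDLL(ctypes.util.find_library("m"))

# Declare the C signature so the call marshals correctly.
libm.sqrt.argtypes = [ctypes.c_double]
libm.sqrt.restype = ctypes.c_double

print(libm.sqrt(9.0))  # 3.0
```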
Object-Oriented Programming and Client-Server Computing
The broader adoption of object-oriented programming (OOP) in the late 1980s and 1990s helped developers build genuinely reusable components. Programming languages emerged with libraries for interacting with system hardware, and development teams would create class libraries that they could reuse in subsequent projects.
Alongside the rise of OOP, we saw growing popularity in network-capable operating systems. Internally, companies were beginning to create network-centric applications using the client-server model.
In client-server computing, a server would run in a corporate network, while user workstations (the clients) connected to it over TCP/IP or NetBIOS. This approach led to two new development technologies for client-server programming: Component Object Model (COM) and Distributed COM (DCOM).
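The request/reply pattern of client-server computing over TCP/IP can be sketched with Python's standard `socket` module. For illustration both sides run in one process, with the server on a background thread; the echo protocol and loopback address are invented for this example.

```python
import socket
import threading

# Server side: bind to the loopback address; port 0 asks the OS
# for any free port.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]

def serve_once():
    conn, _ = server.accept()           # wait for a client workstation
    request = conn.recv(1024)
    conn.sendall(b"echo: " + request)   # the server does the work
    conn.close()

worker = threading.Thread(target=serve_once)
worker.start()

# Client side: connect over TCP/IP, send a request, read the reply.
client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"hello")
reply = client.recv(1024)
client.close()
worker.join()
server.close()

print(reply)  # b'echo: hello'
```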
Distributed Computing Technology
COM was a specification that enabled inter-process communication (IPC) between software components on the same machine. DCOM, on the other hand, enabled COM components to run across different devices in a network. With DCOM, it was possible to write distributed applications communicating across machine boundaries.
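COM and DCOM are Windows-specific technologies, but the core idea, one component invoking another across a process boundary, can be sketched with Python's standard `multiprocessing` module. The "component" and its doubling behavior are invented for illustration.

```python
import multiprocessing

def component(conn):
    """A 'component' running in its own process, answering one call."""
    request = conn.recv()      # receive a call across the process boundary
    conn.send(request * 2)     # compute and return the result
    conn.close()

def call_component(value):
    """Client side: send a request to the component and await the reply."""
    parent, child = multiprocessing.Pipe()
    proc = multiprocessing.Process(target=component, args=(child,))
    proc.start()
    parent.send(value)         # invoke the component in the other process
    result = parent.recv()
    proc.join()
    return result

if __name__ == "__main__":
    print(call_component(21))  # 42
```

DCOM extended exactly this call pattern across the network, so the caller did not need to care which machine the component ran on.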
In this post, we started by looking at common API features and why they’re so integral to software development. We also traced the evolution of APIs during the early computing age. Stay tuned for the next post in our series, where we will discuss the evolution of APIs in the early internet age.
Learn more about The Evolution of APIs: From RPC to SOAP and XML.