The Kong Microservice API Gateway

Kong runs in front of any RESTful API and is extended through plugins, which provide extra functionality and services beyond the core platform.

Scalable

Kong easily scales horizontally by adding more nodes. It supports large and variable workloads with very low latency.

Modular

Extend Kong functionality with plugins that are installed and configured through a RESTful Admin API.
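As an illustration of that workflow, the sketch below builds the Admin API call that would enable a plugin on an API. It is a hedged example, not Kong's official client: the Admin API address, port 8001, the `/apis/{name}/plugins` endpoint, and the `rate-limiting` plugin with a `config.minute` setting reflect Kong's documented defaults at the time, but verify them against your Kong version. The request is constructed without being sent, so no running Kong instance is assumed.

```python
import json
import urllib.request

# Assumption: Kong's Admin API listens on its default port, 8001.
ADMIN_API = "http://localhost:8001"

def build_enable_plugin_request(api_name, plugin_name, config):
    """Build (but do not send) the Admin API request that enables a plugin."""
    payload = {"name": plugin_name}
    # Plugin settings are passed as config.* fields in the request body.
    for key, value in config.items():
        payload["config.%s" % key] = value
    return urllib.request.Request(
        "%s/apis/%s/plugins" % (ADMIN_API, api_name),
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Example: limit a hypothetical "example-api" to 20 requests per minute.
req = build_enable_plugin_request("example-api", "rate-limiting", {"minute": 20})
print(req.get_method(), req.full_url)
# POST http://localhost:8001/apis/example-api/plugins
```

Because plugins are configured over plain HTTP, the same call can be scripted from any language or CI/CD tool.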

Runs on any infrastructure

Deploy Kong in the cloud, on-premises, or in hybrid environments, including single-datacenter or globally distributed setups.

Kong is built on top of reliable technologies, NGINX for proxying and Apache Cassandra or PostgreSQL for storage, and provides an easy-to-use RESTful API for operating and configuring the system.
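To make the "operate and configure over a RESTful API" point concrete, here is a hedged sketch of registering an API with Kong. The `/apis` endpoint and the `name`, `upstream_url`, and `hosts` fields match Kong's documented Admin API of this era, but the API name, upstream address, and hostname used here are invented for illustration. As above, the request is only constructed, not sent.

```python
import json
import urllib.request

# Assumption: Kong's Admin API listens on its default port, 8001.
ADMIN_API = "http://localhost:8001"

def build_register_api_request(name, upstream_url, host):
    """Build (but do not send) the POST /apis request that registers an API."""
    payload = {
        "name": name,               # identifier used in later Admin API calls
        "upstream_url": upstream_url,  # the final API Kong proxies to
        "hosts": host,              # Host header that routes to this API
    }
    return urllib.request.Request(
        "%s/apis" % ADMIN_API,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Hypothetical upstream service and public hostname:
req = build_register_api_request(
    "example-api", "http://example.internal:8080", "api.example.com"
)
print(req.get_method(), req.full_url)
# POST http://localhost:8001/apis
```

Once registered, client requests sent to Kong's proxy port with the matching Host header are routed to the upstream API.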

  • Administer Kong via RESTful API
  • Automate/orchestrate for CI/CD & DevOps
  • Extensible with plugins
  • Create plugins with Lua
  • Implement powerful customizations
  • Integrate with third-party services
  • Choice of Cassandra or PostgreSQL
  • Scales from laptop to global cluster
  • In-memory caching for performance
  • Intercept Request/Response lifecycle
  • Extends underlying NGINX
  • Scriptable via Lua
  • Proven, high-performance foundation
  • HTTP and reverse proxy server
  • Handles low-level operations

Request Workflow

Consider a typical request/response workflow across a client, an API and the Kong microservice API gateway:

Once Kong is running, every client request made to the API hits Kong first and is then proxied to the final API. Between the request and the response, Kong executes any installed plugins, extending the API's feature set. Kong effectively becomes the entry point for every API request.
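That lifecycle can be sketched as a toy model. This is an illustrative simplification, not Kong's actual implementation (Kong's plugins are Lua modules running inside NGINX): a chain of plugins runs on the way in, the request is proxied to the final API, and the plugins run again on the way out. The `AddHeaderPlugin` class and the stub `upstream` function are invented for the example.

```python
# Illustrative model of the gateway lifecycle: plugins wrap the proxied call.

def proxy(request, plugins, upstream):
    """Run the request through each plugin, proxy it, then unwind the chain."""
    for plugin in plugins:
        request = plugin.on_request(request)    # e.g. auth, rate limiting
    response = upstream(request)                # proxied call to the final API
    for plugin in reversed(plugins):
        response = plugin.on_response(response)  # e.g. logging, transformation
    return response

class AddHeaderPlugin:
    """Toy plugin that tags traffic passing through the gateway."""
    def on_request(self, request):
        request["headers"]["X-Gateway"] = "kong"
        return request
    def on_response(self, response):
        response["headers"]["X-Proxied"] = "true"
        return response

def upstream(request):
    # Stand-in for the final API behind the gateway.
    return {"status": 200, "headers": {}, "body": "hello"}

result = proxy({"headers": {}}, [AddHeaderPlugin()], upstream)
print(result["status"], result["headers"]["X-Proxied"])
# 200 true
```

In real deployments the plugin chain is what adds authentication, rate limiting, logging, and transformations without changing the upstream API itself.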