Frequently Asked Questions

What is Kong?

Kong makes connecting APIs and microservices across hybrid or multi-cloud environments easier and faster than ever. We power trillions of API transactions for leading organizations globally through our end-to-end API platform. 

Kong Gateway is the world’s most popular open source API gateway, built for hybrid and multi-cloud environments and optimized for microservices and distributed architectures. It is built on top of a lightweight proxy to deliver unparalleled latency, performance and scalability for all your microservice applications, regardless of where they run. It allows you to exercise granular control over your traffic with Kong’s plugin architecture.

The Kong Enterprise Service Control Platform brokers an organization’s information across all services. Built on top of Kong’s battle-tested open source core, Kong Enterprise enables customers to simplify management of APIs and microservices across hybrid-cloud and multi-cloud deployments. With Kong Enterprise, customers can proactively identify anomalies and threats, automate tasks, and improve visibility across their entire organization.

Why use Kong?

Compared to other API gateways and platforms, Kong offers important advantages that are not found elsewhere in the market. Choose Kong to ensure your API gateway platform is:

  • Radically Extensible
  • Blazingly Fast
  • Open Source
  • Platform Agnostic
  • Able to manage the full API lifecycle
  • Cloud Native
  • RESTful

The full set of Kong functionality is described in the publicly available documentation.

How does Kong work?

Kong server

The Kong Server, built on top of NGINX, is the server that actually processes API requests and executes the configured plugins to provide additional functionality to the underlying APIs before proxying the request upstream.
Kong listens for traffic on several ports. By default, the following ports must allow external traffic:

  • 8000 - for proxying. This is where Kong listens for HTTP traffic. See proxy_listen.
  • 8443 - for proxying HTTPS traffic. See proxy_listen_ssl.

Additionally, the following ports are used internally and should be firewalled in production:

  • 8001 - provides Kong’s Admin API that you can use to operate Kong. See admin_api_listen.
  • 8444 - provides Kong’s Admin API over HTTPS. See admin_api_ssl_listen.

You can use the Admin API to configure Kong, create new users, enable or disable plugins, and a handful of other operations. Since you will be using this RESTful API to operate Kong, it is also extremely easy to integrate Kong with existing systems.
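
For illustration, here is a minimal sketch of talking to the Admin API from Python (assuming a local node on the default port 8001 and the requests library; the address is a placeholder to adjust for your deployment):

    import requests

    ADMIN_API = "http://localhost:8001"  # default Admin API address; change for your setup

    # Inspect the node: Kong version and datastore reachability.
    node_info = requests.get(f"{ADMIN_API}/").json()
    status = requests.get(f"{ADMIN_API}/status").json()
    print(node_info["version"], status["database"]["reachable"])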

Kong datastore

Kong uses an external datastore to store its configuration, such as registered APIs, Consumers and Plugins. Plugins can also store any information they need persisted, for example rate-limiting counters or Consumer credentials.

Kong maintains a cache of this data so that no database roundtrip is needed while proxying requests, which would otherwise critically impact performance. This cache is invalidated through inter-node communication when calls to the Admin API are made. As such, manipulating Kong’s datastore directly is discouraged, since your nodes’ caches won’t be properly invalidated.

This architecture allows Kong to scale horizontally by simply adding new nodes that will connect to the same datastore and maintain their own cache.

Which datastores are supported?

Starting with the 2.7 release, using Cassandra as a configuration datastore for Kong Gateway is considered deprecated. Learn more.

Apache Cassandra

Apache Cassandra (http://cassandra.apache.org/) is a popular, solid and reliable datastore used at major companies like Netflix and Facebook. It excels at securely storing data in both single-datacenter and multi-datacenter setups, providing good performance and a fault-tolerant architecture.

Kong can use Cassandra as its primary datastore if you are aiming for a distributed, highly available Kong setup. It is reasonably easy to configure a multi-region infrastructure with a Cassandra datastore.

Cassandra performs best on machines with a generous amount of CPU and memory, such as AWS m4.xlarge instances. Review Cassandra sizing and configuration guidance before planning a production cluster.

Note: If you don’t want to manage/scale your own Cassandra cluster, consider using a Cassandra managed service from Instaclustr or other service providers.

PostgreSQL

PostgreSQL is an established SQL database for use with Kong.

It is a good candidate for single instance or centralized setups due to its relative simplicity and strong performance. Many cloud providers can host and scale PostgreSQL instances, most notably Amazon RDS.

Whether using Cassandra or PostgreSQL, Kong maintains its own cache. As a result, Kong gateway and plugin performance is sub-millisecond for most use-cases.

How does it scale?

When it comes to scaling Kong, keep in mind that you will mostly need to scale the Kong Server, and also ensure that its datastore is not a single point of failure in your infrastructure.

Kong Server

Scaling the Kong Server up or down is fairly easy. Each server is stateless, meaning you can add or remove as many nodes behind the load balancer as you want, as long as they all point to the same datastore.
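
As a rough illustration, every node’s kong.conf simply points at the same datastore (all values below are placeholders):

    # kong.conf, identical on each node behind the load balancer
    database = postgres
    pg_host = db.internal.example.com   # the shared PostgreSQL instance
    pg_port = 5432
    pg_database = kong
    pg_user = kong
    pg_password = changeme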

Be aware that terminating a node might interrupt any in-flight HTTP requests on that server, so make sure all in-flight requests have been processed before terminating the node.

Kong datastore

Scaling the datastore should not be your main concern, mostly because, as mentioned above, Kong maintains its own cache, so you can expect your datastore traffic to be relatively quiet.

However, keep in mind that it is always a good practice to ensure your infrastructure does not contain single points of failure (SPOF). As such, closely monitor your datastore, and ensure replication of your data.

If you use Cassandra, one of its main advantages is its easy-to-use replication capabilities due to its distributed nature. Make sure to read the documentation referenced in the Cassandra section of this FAQ.

What are plugins?

Plugins are one of the most important features of Kong. Many Kong API gateway features are provided by plugins. Authentication, rate-limiting, transformation, logging and more are all implemented independently as plugins. Plugins can be installed and configured via the Admin API running alongside Kong.

Almost all plugins can be customized not only to target a specific proxied service, but also to target specific Consumers.

From a technical perspective, a plugin is Lua code that’s being executed during the life-cycle of a proxied request and response. Through plugins, Kong can be extended to fit any custom need or integration challenge. For example, if you need to integrate the API’s user authentication with a third-party enterprise security system, that would be implemented in a dedicated plugin that is run on every request targeting that given API.

Please check out the broad set of ready-to-deploy plugins on the Kong Plugin Hub. Learn how to enable plugins with the plugin configuration API.
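
As an illustration, enabling the rate-limiting plugin on a single service through the Admin API could look like this in Python (the service name "orders", the limit and the Admin API address are placeholders, and the requests library is assumed):

    import requests

    ADMIN_API = "http://localhost:8001"

    # Enable rate limiting on the "orders" service only: 5 requests per minute per client.
    resp = requests.post(
        f"{ADMIN_API}/services/orders/plugins",
        json={"name": "rate-limiting", "config": {"minute": 5}},
    )
    resp.raise_for_status()
    print(resp.json()["id"])  # id of the newly created plugin instance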

Additional community-developed plugins can be found on Github. Finally, we invite developers to use the Plugin Development Guide to create new plugins that extend the capabilities of the Kong platform.

How is the Kong API Gateway different from legacy API management solutions?

The Kong microservice API gateway employs a fundamentally different architecture, reduces communication latency and offers a broader range of services than legacy API management solutions.

Legacy API Management solutions arose in an earlier era of application development and solved a very important problem of that time: how to provide an externally consumable API for third-party access to established monolithic applications. Think of early mobile application add-ons and partner-accessible APIs for read access to mature, mission-critical applications. Because applications at that time were primarily monolithic in design, internal communication between features was handled through internal function calls rather than external APIs.

Since the invention of legacy API management solutions, application development and deployment patterns have changed dramatically. Innovations such as RESTful APIs, microservices, containers, cloud computing and distributed systems are now commonplace in modern application development. Embracing these innovations requires that developers have a low-latency way to power internal as well as external communications. This need resulted in the evolution of the modern microservice API gateway and Kong.

API gateways have emerged as an essential component in modern application development to enable high volume and low latency internal and external communications. What’s more, the gateway provides additional services such as authentication, rate limiting, caching and logging for both internal and external consumers. Enabling these services in the gateway means that developers don’t need to repeatedly code them in each microservice, ultimately making teams more productive.

API Management systems are still effective at serving external API communications. API Gateways like Kong, however, enable developers to use a single technology for internal and external API processing because they provide sub-millisecond latency along with a superset of API management functionality.

What is the difference between an API Gateway and a Service Mesh?

API Gateways facilitate API communications between a client and an application, and across microservices within an application.  Operating at layer 7 (HTTP), an API gateway provides both internal and external communication services, along with value-added services such as authentication, rate limiting, transformations, logging and more. 

Service Mesh is an emerging technology focused on routing internal communications. Operating primarily at layer 4 (TCP), a service mesh provides internal communication along with health checks, circuit breakers and other services. 

Because API Gateways and Service Meshes operate at different layers in the network stack, each technology has different strengths. 

At Kong, we are focused on both API Gateway and Service Mesh solutions. We believe that developers should have a unified and trusted interface to address the full range of internal and external communications, along with value-added services. Today, however, API Gateways and Service Meshes appear as distinct solutions requiring different architectural and implementation choices. Very soon, that will change.

What is a REST API?

Kong fully supports REST APIs, the most commonly deployed API standard for web and SaaS applications accessed from browser, mobile and IoT clients.

REST stands for Representational State Transfer. It relies on a stateless, client-server, cacheable communications protocol, commonly HTTP or HTTPS.

RESTful APIs allow “consumers” to progress through an application by selecting links (resources), such as /product/vase/, and specific HTTP operations (methods) such as GET, DELETE, POST or PATCH, resulting in the next resource (representing the next state of the application) being transferred to the consumer for its use. “Learn REST” is one of many helpful tutorials for learning more about REST APIs.
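
As a small illustration of those verbs against a hypothetical /product/vase resource (the host and payloads below are made up):

    import requests

    BASE = "https://api.example.com"  # hypothetical REST API

    requests.get(f"{BASE}/product/vase")                        # read the current representation
    requests.post(f"{BASE}/product", json={"name": "vase"})     # create a new resource
    requests.patch(f"{BASE}/product/vase", json={"price": 20})  # partially update it
    requests.delete(f"{BASE}/product/vase")                     # remove it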

What’s more, administrators use the RESTful Kong Admin API to deploy and manage Kong, add APIs, configure consumers and more.

How many microservices/APIs can I add on Kong?

You can add as many microservices or APIs as you like, and use Kong to process all of them. Kong currently supports RESTful services that run over HTTP or HTTPS. Learn how to add a new service on Kong.
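
For example, registering a new service and a route for it through the Admin API could look roughly like this (the upstream URL, names and Admin API address are placeholders):

    import requests

    ADMIN_API = "http://localhost:8001"

    # Register the upstream service with Kong...
    requests.post(f"{ADMIN_API}/services",
                  json={"name": "orders", "url": "http://orders.internal:8080"}).raise_for_status()

    # ...then expose it on the proxy under the /orders path.
    requests.post(f"{ADMIN_API}/services/orders/routes",
                  json={"name": "orders-route", "paths": ["/orders"]}).raise_for_status()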

You can scale Kong horizontally if you are processing lots of requests, just by adding more Kong servers to your cluster.

How can I add authentication to a microservice/API?

To add an authentication layer on top of a service, choose among the authentication plugins currently available on the Kong Plugin Hub, such as the Basic Authentication, Key Authentication, OAuth 2.0 and OpenID Connect plugins.
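
As a minimal sketch (the service name, consumer name and key below are placeholders), enabling key authentication on a service and issuing a key to a Consumer might look like:

    import requests

    ADMIN_API = "http://localhost:8001"

    # Require an API key on every request proxied to the "orders" service.
    requests.post(f"{ADMIN_API}/services/orders/plugins",
                  json={"name": "key-auth"}).raise_for_status()

    # Create a Consumer and provision a key for it.
    requests.post(f"{ADMIN_API}/consumers",
                  json={"username": "mobile-app"}).raise_for_status()
    requests.post(f"{ADMIN_API}/consumers/mobile-app/key-auth",
                  json={"key": "my-secret-key"}).raise_for_status()

Clients then authenticate by sending the key with each request, for example in the apikey header.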

Can I use Kong in public clouds? How about on-premises?

Yes, Kong supports single-vendor, multi-vendor and distributed setups in the cloud and on-premises.

Kong is one of the very few API gateways that is fully platform agnostic. This means you can move applications effortlessly between a private data center and a public cloud. You can even move it from one public cloud to another, or configure a global hybrid environment across any number of datacenter and cloud environments.

Kong frees you from cloud vendor lock-in and puts you in control of your computing environment.

How can I migrate to Kong from another API Gateway?

If you are already using an existing API Gateway and want to migrate to Kong, there are two steps to take into consideration:

1) Migrate the data. Kong offers a RESTful API that you can use to migrate data from an existing API Gateway into Kong. Some API Gateways allow you to export your data as JSON or CSV files, among other formats. You will need to write a script that reads the exported data and then makes the appropriate requests to Kong to provision APIs, Consumers and Plugins (a minimal sketch follows these steps).

2) Migrate the network settings. Once the data has been migrated and Kong has been configured, verify in a staging environment that everything works as expected. Once you are ready to move production traffic over to Kong, adjust your network settings to point to your Kong cluster (most likely by updating your DNS configuration).
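
As a rough sketch of step 1, assuming a hypothetical JSON export (the field names, file name and Admin API address will differ for your gateway):

    import json
    import requests

    ADMIN_API = "http://localhost:8001"

    # Read the export produced by the previous gateway (hypothetical format).
    with open("export.json") as f:
        export = json.load(f)

    # Re-provision each exported API as a Kong service plus a route.
    for api in export.get("apis", []):
        requests.post(f"{ADMIN_API}/services",
                      json={"name": api["name"], "url": api["upstream_url"]}).raise_for_status()
        requests.post(f"{ADMIN_API}/services/{api['name']}/routes",
                      json={"paths": api["paths"]}).raise_for_status()

    # Re-create Consumers so that credentials can be re-issued afterwards.
    for consumer in export.get("consumers", []):
        requests.post(f"{ADMIN_API}/consumers",
                      json={"username": consumer["username"]}).raise_for_status()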

If you are a Kong Enterprise customer, we can help with the migration.

Is Kong compatible with my stack and specifically with [insert product name]?

Probably, yes. Kong and Kong plugins comply with industry standards including HTTP and JSON. We test Kong extensively with NGINX web and proxy servers, PostgreSQL and Cassandra datastores, Linux and container operating environments, and APIs based on microservice design patterns. Because Kong operates at the application level and adheres to industry standards, it is broadly compatible with all leading web technologies and orchestration, log management, continuous deployment, and microservice development tools.

Where can I learn more or get help?

If you need answers beyond this Kong FAQ, please turn to the official documentation or ask any questions to the community and the core maintainers at Kong Nation. Kong Nation is a great place to research API gateway topics, post questions, and discuss all things Kong. It’s easy to navigate for answers and post new questions (registration is required for posting).

If you are considering Kong Enterprise, the commercial version with advanced functionality and backed by Kong Inc.’s Customer Success team, please request a meeting with Kong API experts in our sales department.

Kong Enterprise customers have access to Kong Inc.’s Customer Success team for timely answers from Kong experts.