What is an API Gateway?
An API Gateway is a middleware layer that sits between clients and your backend APIs, orchestrating common cross-cutting functionality (such as authentication, routing and rate limiting) and supporting scalable, distributed architectures.
Kong API Gateway Best Practices
For rapid deployment of the Kong API Gateway, follow the five-minute quickstart guide. This will take you through the basics of starting Kong, checking that it has started successfully, stopping it and/or reloading it without downtime.
The Quickstart guide requires Kong to be installed and your database connection settings to be correctly configured.
If everything goes well, within just a few minutes, you can have the Kong API Gateway up and running, giving you access to the RESTful Admin interface to manage Consumers, Routes and Services.
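As a minimal sketch, assuming the Admin API is listening on its default address (localhost:8001), you can confirm the node is up by querying the Admin API root, which returns node metadata including the running Kong version:

```python
import json
from urllib.request import urlopen

ADMIN_URL = "http://localhost:8001"  # default Admin API address; adjust if yours differs


def is_kong_ready(node_info: dict) -> bool:
    """Return True if the Admin API root response looks like a running Kong node."""
    # GET / on the Admin API reports node metadata, including the Kong version.
    return "version" in node_info


def check_kong(url: str = ADMIN_URL) -> bool:
    """Query the Admin API root and report whether Kong is up."""
    with urlopen(f"{url}/") as resp:
        return is_kong_ready(json.load(resp))
```

Running `check_kong()` against a live node should return True once Kong has started successfully.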
Migrating Data from other API Gateways
Migrating to the Kong API Gateway is relatively easy: first migrate your data, then update your network settings.
Depending on your existing API Gateway, you may need to write a simple script to convert exported JSON or CSV data into requests that trigger Kong to provision the appropriate Services, Routes, Consumers and Plugins.
Kong’s RESTful Admin API makes data migration straightforward, and you can verify in a staging environment that everything has transferred across correctly before updating your network settings to point to your Kong server or cluster.
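Such a conversion script might look like the following minimal sketch. The CSV columns (name, upstream_url, path) are hypothetical stand-ins for whatever your current gateway exports; the output payloads follow the shape of Kong’s /services and /services/{name}/routes Admin API endpoints:

```python
import csv
import io


def rows_to_kong_payloads(csv_text: str):
    """Convert an exported CSV (hypothetical columns: name, upstream_url, path)
    into Service and Route payloads for Kong's Admin API."""
    services, routes = [], []
    for row in csv.DictReader(io.StringIO(csv_text)):
        # POST /services — register the backend service.
        services.append({"name": row["name"], "url": row["upstream_url"]})
        # POST /services/{name}/routes — expose it on a path.
        routes.append({"name": f'{row["name"]}-route', "paths": [row["path"]]})
    return services, routes
```

Each payload can then be POSTed to the Admin API (for example with curl or urllib) to provision the entities in Kong.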
Clustering is key to Kong’s scalability: it lets you add more machines to handle incoming requests, effectively scaling your API Gateway horizontally.
In order to enable multiple nodes to work together as a single API Gateway, you must make sure that they belong to the same cluster.
Clustering allows the different nodes to share the same datastore and can be combined with load balancing to distribute traffic equally across all of your nodes.
Kong supports multiple types of health checks to identify unhealthy targets on individual Kong nodes.
- Active Checks: Periodically request a specific HTTP or HTTPS endpoint and mark it as healthy or unhealthy based on its response.
- Passive Checks: Analyze proxied traffic on an ongoing basis to determine the health of targets. This method is also known as a circuit breaker.
By actively and passively monitoring the health of targets, you can take remedial action when needed to restore functionality and ensure all nodes in a Kong cluster have access to the endpoints they require.
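Both kinds of check can be configured together on a single Upstream. The sketch below shows an Admin API payload as Python data; the field names follow Kong’s upstream healthchecks schema, while the upstream name, probe path and threshold values are illustrative assumptions:

```python
# Sketch of an Upstream payload (POST /upstreams) combining active and
# passive health checks. Name, path and thresholds are illustrative.
upstream = {
    "name": "orders-upstream",
    "healthchecks": {
        "active": {
            "http_path": "/health",  # endpoint probed periodically
            "healthy": {"interval": 5, "successes": 2},
            "unhealthy": {"interval": 5, "http_failures": 3},
        },
        "passive": {  # circuit breaker based on proxied traffic
            "healthy": {"successes": 3},
            "unhealthy": {"http_failures": 3},
        },
    },
}
```

Targets marked unhealthy by either mechanism are skipped by the load balancer until they recover.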
Kong allows load balancing using several different methods:
- A Records: Using an A record containing multiple IP addresses, all entries will be treated equally in a round robin.
- DNS-based: DNS-based load balancing allows backend service registration to occur outside of Kong with periodic updates from the DNS server.
- SRV Records: SRV records can contain IP addresses, port information and weighting, allowing multiple instances of a service to run via different ports on the same IP address.
The last method is particularly useful because the load balancer can distribute traffic to individual instances in proportion to their weights, rather than treating them all equally.
Secure Admin API
Kong’s Admin API gives full control of your Kong installation and should be secured against access by unauthorized individuals.
There are many ways to do this, including network layer access restrictions, specified IP ranges for external access, and fine-grained access control by using Kong itself as a proxy for access to its own Admin API.
Enterprise users also benefit from role-based access control, which uses user roles and permissions to grant access to the Admin API and scales well to complex, case-specific requirements.
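The last approach, using Kong as a proxy in front of its own Admin API, can be sketched with payloads like these. The route path, service name and loopback address are illustrative assumptions, and key-auth is just one of several authentication plugins you could attach:

```python
# Sketch: expose the Admin API through Kong itself, behind authentication.
# POST /services — a service pointing at the local Admin API.
admin_service = {"name": "admin-api", "url": "http://127.0.0.1:8001"}

# POST /services/admin-api/routes — a public path for it.
admin_route = {"name": "admin-api-route", "paths": ["/admin-api"]}

# POST /services/admin-api/plugins — require an API key on that service.
key_auth_plugin = {"name": "key-auth"}
```

With this in place you would also create a Consumer and provision it a key, then restrict direct network access to port 8001 so all Admin traffic flows through the authenticated route.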
What issues might I encounter?
Kong is designed to be agile, flexible and massively scalable, and with sensible preparation you should be able to avoid any significant issues even in complex enterprise deployments.
Here are some issues to keep in mind, especially when scaling Kong across multiple nodes or in a way that significantly alters the size of the Kong datastore.
Scaling Kong Server
You can scale Kong Server horizontally by adding new nodes as required. Remember the clustering best practice above when doing this.
New nodes must point to the same Kong datastore in order to interoperate, and you should ensure that new nodes are also subject to load balancing to ensure good performance.
Scaling Kong datastore
Kong datastore traffic is typically light because Kong maintains its own cache; however, it is important not to allow the datastore to become a single point of failure for your organization.
This can be prevented simply through close monitoring of the datastore and by keeping an up-to-date backup in case of emergencies.
The Kong API Gateway is fully platform agnostic, which means you are free to use public cloud environments and/or private datacenter servers as you wish.
Moving applications between cloud platforms or physical servers is straightforward, and Kong can operate in a hybrid environment that combines the two under a single configuration, so there should be no issues when moving applications to any type of platform.
Where to find help
If you encounter any issues that you don’t know how to resolve, Kong Nation is the place to go.
This dynamic forum is where Kong users from all over the world come together to share tips and tricks, best practice guidance, and to help each other resolve any issues as they arise.
It’s also where Kong Inc. and other community members can make announcements, so you are always in the loop when new features are implemented, or other important news is announced.
Want to learn more?
Request a demo to talk to our experts, who can answer your questions and explore your needs with you.