June 4, 2015

KONG Architecture Choices: The API Management Layer for Microservices

KONG, the king of open-source API management platforms, is in my (totally not biased) opinion an extremely cool tool.

From startups to enterprises, companies have tons of APIs (just look at the growth of APIs within Mashape), and they need to be managed in a simple, effective way.

Instead of building the same functionality into each microservice, KONG provides a single layer for managing it based on your needs, so you can focus on excelling at your microservices instead of the boring stuff.

Microservices just like sushi

When I was explaining microservices to my cousin, and why they’re cool, I told him to imagine microservices as awesome pieces of sushi: each piece has its own taste, texture, color and function in my dinner plan.  If I’m hungry, I just order more sushi.  In the same way, microservices work elastically together to power applications.



Scaling is easy: in the same way a chef can place sushi on different plates, depending on their type, microservices can be deployed on whatever type of server they require, independently of one another.

The beauty of well built microservices is that they’re true components, like sushi or lego bricks: they are designed to do one thing very well while playing nicely with others.

A different approach – shifting the pyramid of responsibility 

Unlike at companies built around monolithic services, developers at microservices-driven companies usually work in much smaller teams.
In fact, Amazon introduced a famous rule that goes something like this:

“A team of developers working on microservices should be fed and satisfied by only two large pizzas”.

Two pretty large pizzas is all you need to feed your microservices team

If you add a lot of developers to a team creating a microservice you’ll need more than two pizzas.  That’s bad!  Keep your team small and save the pizza budget for another day.  A microservices-oriented team sizing technique introduces an interesting approach to collaboration, and helps remove internal team friction.

Being in a team that works on a microservice is great because you have control over the entire software development cycle.  For example, when working on a microservice and writing new code you don’t need to redeploy the whole app when you go live; instead only one part is re-deployed, your microservice.

This approach can dramatically decrease the risk of failure and downtime of the app, especially if you encounter unforeseen issues during QA or similar that require you to restore and redeploy quickly.  If you must redeploy, the work of other teams can continue without downtime.  Asynchronously.

It gets better!  When building microservices it doesn’t matter what each component is built with: it could be Python, Ruby, Java, Go or…you name it!

Each team selects a technology stack and a language of choice, and is responsible for the correct functionality of its own microservice.  The team also decides when to push changes and how to design the service to scale, probably using a container system like Docker.

The portability and flexibility of this approach minimize the risk that teams writing in different languages end up with a messy, tangled system.  Furthermore, if down the line you realize one of your design approaches did not go as planned, you simply change the microservice and swap it out, plug-and-play.

Choosing the right data-store for KONG

Microservices by design should not store data inside the container.  Instead, data should be written to an external volume mapped to the container, enabling your data storage to scale as needed.

Every microservice has its own database decoupled from other services.  If needed, consistency among databases is maintained using either database replication mechanisms or application-level events.

This sounds amazing right?  Well it is.

At Mashape we selected Cassandra, a great distributed data-store.  Cassandra is an open-source database that keeps data consistent across nodes no matter how widely distributed it becomes.  Making KONG available to the public as open source was a big step for us.  From the beginning, we had to think about technological solutions that could scale with our services.

With Cassandra we got a high-performance and scalable datastore that provides high-level redundancy and granular management of data replication across instances, something which is of utmost importance in the API management world.

Ideally, when running KONG on your servers, you would have Cassandra running inside a container and storing its data in a mounted volume.
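As a concrete sketch of that setup (the image tag, container name and host path below are assumptions, not recommendations), running Cassandra in a container with its data directory mapped to a host volume might look like:

```shell
# Run Cassandra in a container, mapping its data directory
# (/var/lib/cassandra in the official image) to a host volume so the
# data survives container restarts. Image tag, container name and the
# host path are hypothetical.
docker run -d --name kong-cassandra \
  -p 9042:9042 \
  -v /var/data/cassandra:/var/lib/cassandra \
  cassandra:2.1
```

KONG would then be pointed at this container’s address in its configuration.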

A RESTful interface to glue microservices

We’re all about APIs, so we couldn’t resist having a RESTful interface that lets you set up and manipulate how KONG manages your layer of underlying APIs.  This also ties in strongly with the modularity of KONG.  In fact, KONG is built to be extremely flexible, loading plugins on the fly, through that same RESTful interface, for individual APIs.  Some APIs might require more plugins than others.

The whole point of KONG is to remove the complexity of handling things like caching, authentication, and rate-limiting, to mention just a few.  Therefore it only makes sense that you can customize which plugins are enabled on which API or, going forward, which endpoints.
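As a hedged sketch of what this looks like in practice (exact endpoint shapes and field names vary between KONG versions, and `orders.internal` is a made-up upstream), you could register an API and then enable rate-limiting on that API alone:

```shell
# Sketch against KONG's admin API, which listens on port 8001 by default.
# The API name and upstream address are hypothetical.

# Register an API behind KONG
curl -i -X POST http://localhost:8001/apis \
  --data "name=orders" \
  --data "upstream_url=http://orders.internal:3000"

# Enable the rate-limiting plugin for this API only
curl -i -X POST http://localhost:8001/apis/orders/plugins \
  --data "name=rate-limiting" \
  --data "config.minute=100"
```

Other APIs behind the same KONG instance are untouched: each one carries only the plugins you enable for it.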

Extensible by design

Whether you’re building for the web or for other platforms, KONG provides common functionality for all your microservices, and if you need something more you can create it yourself.

Because of its extensible nature, and ability to turn plugins on and off on the fly for different APIs, KONG is particularly interesting as a solution for companies with a lot of “active” microservices.

KONG is written in Lua on top of OpenResty (nginx) and has been designed to act as a gateway for HTTP requests while providing logging, authentication, rate-limiting and much, much more through plugins.  As of today, all plugins are written in Lua.

Implementing KONG in your infrastructure

KONG design

Spread across the world on your cloud, KONG connects to a Cassandra cluster.  KONG sits behind your load balancer, which is the preferred method to scale KONG at large.  You can set up the balancer in round-robin or weighted mode, depending on your resources; it’s completely up to you.  In this example we used NGINX with a round-robin setup.
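A minimal sketch of such a round-robin NGINX configuration, assuming two hypothetical KONG instances listening on KONG’s default proxy port 8000:

```nginx
# Round-robin load balancing across two KONG instances.
# Hostnames are assumptions; round-robin is NGINX's default
# balancing method for an upstream block.
upstream kong_cluster {
    server kong-1.internal:8000;
    server kong-2.internal:8000;
}

server {
    listen 80;
    location / {
        proxy_pass http://kong_cluster;
        proxy_set_header Host $host;
    }
}
```

Switching to weighted mode is a matter of adding `weight=N` to the `server` lines.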

For best performance, place your microservices in the same geographical region as KONG.  Each instance of KONG is set up to serve a pool of microservices.  All setup operations are done through the RESTful API, so it’s easy to configure KONG on the fly with a simple set of scripts.
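Such a setup script can be as small as a loop over your pool of services (the service names and upstream addresses below are made up for illustration, and admin-API field names vary by KONG version):

```shell
#!/bin/sh
# Register a pool of microservices with a KONG instance through its
# RESTful admin API. Service names and upstream hosts are hypothetical.
KONG_ADMIN="http://localhost:8001"

for svc in users orders payments; do
  curl -s -X POST "$KONG_ADMIN/apis" \
    --data "name=$svc" \
    --data "upstream_url=http://$svc.internal:3000"
done
```

Because the configuration lives in Cassandra, running the script against one KONG instance is enough: the others pick the changes up from the shared datastore.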

In conclusion

Microservices connect together without an ESB (Enterprise Service Bus): the goodness is built inside each microservice, while the data lives outside so that you can scale them independently.  Put KONG in front of your APIs / microservices, and behind a load balancer if you have multiple instances of your API layer: don’t forget that all KONG instances need to point to the same Cassandra cluster!

If you’re interested in knowing more about why we picked Cassandra please let us know and we’ll write a follow up.

Would you like to write a Lua plugin?  Get in touch by opening a GitHub issue over here!

Get Kong!

Image sources: flickr.com
