Introducing Kong Support for Service Mesh Deployments
Earlier this month, I shared some thoughts about service mesh as a deployment pattern versus a new technology and why traditional API management solutions can’t keep up with service mesh patterns. I’m excited to announce today that our Kong platform will support service mesh deployments. Users will be able to use Kong as a standalone service mesh or to integrate it with Istio and other service mesh players.
We designed our platform to be lightweight, flexible and deployment-agnostic, allowing users to easily manage the increased East-West network traffic and latency of modern, microservice-oriented architectures. Where traditional API management platforms may introduce roughly 200 milliseconds of processing latency between services in a container ecosystem, we create less than 10 milliseconds of delay. Our plugin architecture adds further flexibility: users can strip out unneeded functionality to reduce latency and enable seamless integrations with ecosystem partners.
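As a rough illustration of this plugin-driven flexibility, the sketch below shows what a minimal declarative Kong configuration for a service-to-service route might look like, with only a rate-limiting plugin enabled and everything else left out. This is a hypothetical example, not an official reference: the service name, route path, and plugin settings are assumptions for illustration.

```yaml
# Hypothetical declarative Kong configuration (illustrative sketch only).
# Only the plugins a deployment actually needs are enabled, keeping the
# proxy path lean for East-West traffic between microservices.
_format_version: "1.1"

services:
  - name: orders-service          # assumed upstream microservice name
    url: http://orders.internal:8080
    routes:
      - name: orders-route
        paths:
          - /orders
    plugins:
      - name: rate-limiting       # the single plugin this service opts into
        config:
          minute: 100             # assumed limit: 100 requests per minute
          policy: local
```

Because functionality is opt-in per service, a deployment that needs only routing pays no latency cost for plugins it never enables.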
While traditional API solutions can’t keep up, we enable developers, DevOps professionals and solutions architects to succeed in any architecture, old or new.
To learn more about using Kong for service mesh deployments, join me and other technologists at the Kong Summit on September 18-19 in San Francisco! We’ll cover Kong’s service mesh capabilities in more depth and discuss the future of service mesh.