Today, we are excited to announce the release of Kong 1.3! Our engineering team and awesome community have contributed numerous features and improvements to this release. Building on the success of the 1.2 release, Kong 1.3 is the first version of Kong that natively supports gRPC proxying and upstream mutual TLS authentication, along with a number of new features and performance improvements.
Read on to learn more about Kong 1.3's new features, improvements, and fixes, and how you can take advantage of these exciting changes. Please also take a few minutes to read our Changelog as well as the Upgrade Path for more details.
Native gRPC Proxying
We have observed increasing numbers of users shifting toward microservices architectures and heard users express their interest in native gRPC proxying support. Kong 1.3 answers this by supporting gRPC proxying natively, bringing more control and visibility to a gRPC-enabled infrastructure.
- Streamline your operational flow.
- Add A/B testing, automatic retry and circuit breaking to your gRPC services for better reliability and uptime.
- More observability
- Logging, analytics or Prometheus integration for gRPC services? Kong's got you covered.
- New protocol: The Route and Service entity's protocol attribute can now be set to grpc or grpcs, corresponding to gRPC over cleartext HTTP/2 (h2c) and gRPC over TLS HTTP/2 (h2), respectively.
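As a sketch, a gRPC Service and Route might look like this in declarative configuration (the service name, host, port, and path below are illustrative, not taken from the release notes):

```yaml
# kong.yml (declarative config) -- illustrative names and addresses
_format_version: "1.1"

services:
- name: grpc-backend            # hypothetical service name
  protocol: grpc                # gRPC over cleartext HTTP/2 (h2c)
  host: grpc.internal.example   # hypothetical upstream host
  port: 9000
  routes:
  - name: grpc-route
    protocols:
    - grpc
    - grpcs                     # also accept gRPC over TLS (h2)
    paths:
    - /hello.HelloService       # match requests by the gRPC service path prefix
```

With this in place, gRPC clients can point at Kong's proxy port and be forwarded transparently to the upstream gRPC server.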
Upstream Mutual TLS Authentication
Kong has long supported TLS connections to upstream services. In 1.3, we added support for Kong to present a specific client certificate while handshaking with the upstream for increased security.
- Better security
- By presenting a trusted certificate, the upstream service knows for sure that the incoming request was forwarded by Kong, not by a malicious client.
- Easier compliance
- Being able to handshake with upstream services using a certificate makes Kong a better fit for industries that require strong authentication guarantees, such as financial and healthcare services.
- More developer friendly
- You can use Kong to front a Service that requires mutual TLS authentication with methods that are more developer friendly (for example, OAuth).
- New configuration attribute: The Service entity has a new field, client_certificate. If set, the corresponding Certificate will be used when Kong attempts to handshake with the service.
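A minimal declarative sketch of this feature might look like the following (the certificate id, PEM contents, and service values are all placeholders):

```yaml
# kong.yml -- placeholder certificate and service values
certificates:
- id: a3ad71a8-6685-4b03-a101-98a287e984bc    # hypothetical id
  cert: |
    -----BEGIN CERTIFICATE-----
    ...client certificate PEM here...
    -----END CERTIFICATE-----
  key: |
    -----BEGIN PRIVATE KEY-----
    ...private key PEM here...
    -----END PRIVATE KEY-----

services:
- name: mtls-backend                          # hypothetical service
  protocol: https
  host: secure.internal.example
  port: 8443
  client_certificate:                         # present this Certificate when
    id: a3ad71a8-6685-4b03-a101-98a287e984bc  # handshaking with the upstream
```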
The Sessions Plugin
In Kong 1.3, we have open sourced the Sessions Plugin (previously only available in Kong Enterprise) for all users. Combined with other authentication plugins, it allows Kong to remember browser users who have previously authenticated. You can read the detailed documentation here.
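As a sketch, enabling it globally alongside an authentication plugin could look like this (the cookie secret is a placeholder; see the plugin documentation for the full set of options):

```yaml
# kong.yml -- illustrative plugin configuration
plugins:
- name: key-auth        # any authentication plugin; the session plugin
                        # remembers the browser user it authenticates
- name: session
  config:
    secret: change-me   # placeholder; used to sign the session cookie
```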
NGINX CVE Fixes
Kong 1.3 ships with fixes to the NGINX HTTP/2 module (CVE-2019-9511, CVE-2019-9513, CVE-2019-9516). We also released Kong 1.0.4, 1.1.3, and 1.2.2 to patch the vulnerabilities in older versions of Kong, in case an immediate upgrade to 1.3 is not possible.
OpenResty Version Bump
The version of OpenResty has been bumped to the latest OpenResty release, 1.15.8.1, which is based on NGINX 1.15.8. This release of OpenResty brings better behavior when closing upstream keepalive connections, ARM64 architecture support, and LuaJIT GC64 mode. The most noticeable change is that Kong now runs ~10% faster in baseline proxy benchmarks with key authentication, thanks to the LuaJIT compiler generating more native code and OpenResty storing request context data more efficiently.
Additional New Features in Kong 1.3
- Kong's router now has the ability to match Routes by any request header (not only Host).
- This allows granular control over how incoming traffic is routed between Services.
- See documentation here.
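A sketch of header-based routing in declarative form (the header name and value are illustrative):

```yaml
# kong.yml -- illustrative header-based Route
routes:
- name: canary-route    # hypothetical route name
  paths:
  - /api
  headers:
    x-canary:           # match only requests carrying this header...
    - "true"            # ...with this value
```

Requests without the matching header fall through to whichever other Routes match, which makes patterns like canary releases straightforward to express.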
- Kong can now send traffic to the upstream targets that have the fewest connections, improving upstream load distribution in certain use cases.
- See documentation here.
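In declarative form, this is selected via the Upstream's algorithm field (the upstream name and target addresses below are illustrative):

```yaml
# kong.yml -- illustrative least-connections Upstream
upstreams:
- name: backend-upstream           # hypothetical upstream name
  algorithm: least-connections     # alternatives: round-robin, consistent-hashing
  targets:
  - target: 10.0.0.10:8080
  - target: 10.0.0.11:8080

services:
- name: backend
  host: backend-upstream           # Service host refers to the Upstream by name
  port: 8080
```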
- The newly added kong config db_export CLI command creates a dump of the database content into a YAML file that is suitable for declarative configuration or for importing back into the database later.
- This allows easier creation of declarative config files.
- This makes backup and version controlling of Kong configurations much easier.
- See documentation here.
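Usage is a single command, paired with the existing import command for the round trip (the file name is arbitrary):

```shell
# Dump the current database content to a declarative YAML file
kong config db_export kong-backup.yml

# Later, load the same file back into a database
kong config db_import kong-backup.yml
```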
Proactively closing upstream keepalive connections
- In older versions of Kong, upstream connections were never closed by Kong. This could lead to race conditions, as Kong might try to reuse a keepalive connection while the upstream attempts to close it.
- If you have seen "upstream prematurely closed connection" errors in your Kong error.log, this release should significantly reduce or even eliminate them in your deployment.
- New configuration directives have been added to control this behavior; read the full Changelog to learn more.
More listening flags support
- Most notably the reuseport flag, which can be used to improve load distribution and latency jitter when the number of Kong workers is large.
- bind flag support has also been added. You can check the NGINX listen directive documentation to understand the effect of using them.
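As a sketch, these flags are appended per address in kong.conf (the addresses and ports below are illustrative):

```
# kong.conf -- illustrative listen flags
proxy_listen = 0.0.0.0:8000 reuseport, 0.0.0.0:8443 ssl reuseport
admin_listen = 127.0.0.1:8001 bind
```

The flags map directly onto the parameters of the underlying NGINX listen directive.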
Other Improvements and Bug Fixes
Kong 1.3 also contains improvements regarding new entities for storing CA Certificates (certificates without a private key), the Admin API interface, and more PDK functions. We also fixed a lot of bugs along the way. Because of the number of new features in this release, we cannot go over all of them in this blog post and instead encourage you to read the full Changelog here.
We also added a new section to the kong.conf template to better explain the capabilities of injected NGINX directives. For users who have customized templates just to add a few NGINX directives, we recommend switching to injected NGINX directives instead for better upgradability.
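For example, instead of maintaining a custom template, a directive can be injected from kong.conf by prefixing it with the target block (the directive and values below are illustrative):

```
# kong.conf -- injecting NGINX directives without a custom template
nginx_http_large_client_header_buffers = 16 128k    # injected into the http {} block
nginx_proxy_large_client_header_buffers = 16 128k   # injected into the proxy server {} block
```

Because the stock template stays untouched, future Kong upgrades no longer require re-porting local template changes.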
As always, the documentation for Kong 1.3 is available here. Additionally, as mentioned above, we will be discussing the key features in 1.3 in subsequent posts and on community calls, so stay tuned!
Thank you to our community of users, contributors, and core maintainers for your continuing support of Kong's open source platform.
Please give Kong 1.3 a try, and be sure to let us know what you think!
As usual, feel free to ask any question on Kong Nation, our Community forum. Learning from your feedback will allow us to better understand the mission-critical use-cases and keep improving Kong.