With Kong 1.0, users can now control TCP (Transmission Control Protocol) traffic. Learn how we added TCP support, and how you can try it out.
TCP traffic powers email, file transfer, SSH, and many other common types of traffic that can't be handled by a layer 7 proxy. Our expansion to layer 4 lets you connect even more of your services using Kong.
Why now?
When we were designing the ability to deploy Kong as a service mesh, we wanted to build a system that could connect all our users' services, running on any infrastructure, written in any language, and architected in any pattern. Part of fulfilling this promise meant moving down the Open Systems Interconnection (OSI) stack to cover services that communicate using protocols other than HTTP. We wanted Kong to be able to handle all types of TCP traffic when deployed either as an API gateway or in a mesh pattern. With our sponsorship of OpenResty's stream support, we were able to add user-facing support for TCP traffic to Kong.
How does it work?
Our new stream_listen configuration option allows users to select the IPs and ports where Kong's stream mode should listen for TCP traffic. Kong automatically terminates Transport Layer Security (TLS) for incoming TLS traffic, and depending on service configuration you can have Kong encrypt outbound connections with TLS or not. Using Kong's Server Name Indication (SNI) and certificate entities, users can now also configure their own TLS certificates. One of the major use cases for Kong's TCP support is TLS termination.
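As a sketch of that certificate and SNI setup, the commands below generate a self-signed certificate and attach it through the Admin API. The SNI name tlsbin-example and the file names are illustrative assumptions, and the curl step assumes a Kong node is already running with its Admin API on localhost:8001:

```shell
# Generate a self-signed certificate for an example SNI
# ("tlsbin-example" is a placeholder name)
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout example.key -out example.crt \
  -days 30 -subj "/CN=tlsbin-example"

# Upload the certificate and associate it with the SNI
# (assumes a running Kong Admin API on localhost:8001)
curl localhost:8001/certificates \
  -F cert=@example.crt \
  -F key=@example.key \
  -F snis=tlsbin-example || true  # no-op if no Kong is running locally
```

With this in place, Kong presents the certificate to clients whose TLS handshake carries the matching SNI.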
Kong's extensibility with plugins is a big reason that users choose Kong over other API gateways or service meshes. TCP traffic handling is still in its early days, and Kong 1.0 didn't ship with any TCP-supporting plugins, but users are already able to write their own custom plugins that apply to TCP traffic. Writing a TCP plugin is a little different from writing a traditional Kong HTTP plugin: instead of the "rewrite", "access", "header_filter" and "body_filter" phases, TCP plugins have a "preread" phase.
What are the Gotchas?
TCP support requires some customizations that Kong has made to OpenResty. If you're compiling your own OpenResty from source, apply Kong's openresty-patches to enable this new functionality. Kong's packages and images already come with these patches applied.
Another thing to watch out for: do *not* use the Nginx `ssl` listener directive for stream ports. Kong decides whether to terminate TLS via its own mechanisms instead.
How do I try it?
Start Kong with the stream_listen configuration option, selecting the port you want to listen on. You can set this either in the configuration file or via an environment variable.
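As a sketch, assuming port 5555 (any free port works), the two equivalent forms look like this:

```
# In kong.conf:
stream_listen = 0.0.0.0:5555

# Or via an environment variable when starting Kong:
KONG_STREAM_LISTEN=0.0.0.0:5555 kong start
```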
Configure a service with either a 'tcp' or 'tls' protocol field. If you select 'tcp', traffic is sent to your upstream as plain traffic. If you select 'tls', Kong encrypts the outgoing traffic with TLS. This mirrors the existing choice between 'http' and 'https' for the protocol field that Kong users will already be familiar with.
Configure a route for your service based on 'sources', 'destinations', and/or 'snis'.
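For example, a route matching on destination port could be created like this — a sketch against a hypothetical service named tcp-service, assuming a running Admin API on localhost:8001:

```
curl localhost:8001/services/tcp-service/routes \
  -d protocols=tcp \
  -d "destinations[1].port=5555"
```

Any TCP connection arriving on a stream_listen port whose destination port matches 5555 would then be proxied to tcp-service's upstream.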
The following is a runnable example using Docker to terminate TLS traffic before sending it to tcpbin.
# Start up your normal Postgres database and run Kong migrations
# See https://docs.konghq.com/install/docker/ for more information
docker network create kong-net

docker run -d --name kong-database \
  --network=kong-net \
  -e "POSTGRES_USER=kong" \
  -e "POSTGRES_DB=kong" \
  postgres:11

docker run --rm \
  --network=kong-net \
  -e "KONG_PG_HOST=kong-database" \
  kong:1.0.3-alpine kong migrations bootstrap
# Start kong with stream_listen
docker run \
  --network=kong-net \
  -e "KONG_PG_HOST=kong-database" \
  -e "KONG_PROXY_ACCESS_LOG=/dev/stdout" \
  -e "KONG_ADMIN_ACCESS_LOG=/dev/stdout" \
  -e "KONG_PROXY_ERROR_LOG=/dev/stderr" \
  -e "KONG_ADMIN_ERROR_LOG=/dev/stderr" \
  -e "KONG_ADMIN_LISTEN=0.0.0.0:8001" -p 8001:8001 \
  -e "KONG_STREAM_LISTEN=0.0.0.0:5555" -p 5555:5555 \
  kong:1.0.3-alpine
# Add a service and route
curl localhost:8001/services -d name=tcpbin-echo -d url=tcp://52.20.16.20:30000

curl localhost:8001/services/tcpbin-echo/routes -d protocols=tls -d snis=tlsbin-example
# Try out the service (it'll do a TLS handshake then echo your input)
openssl s_client -connect localhost:5555 -servername tlsbin-example
What's next for TCP support?
Here are just a few of the improvements to TCP support that you can look forward to in future versions of Kong.
At the moment Kong will unconditionally try to terminate TLS if the traffic looks like a valid TLS ClientHello. We want to make this configurable on a per-route basis, which will include the ability to terminate or not based on SNI.
The next Kong release will include support for custom Nginx directives for the stream module. You can check out that work in the corresponding pull request.
In the future, the Kong Plugin Development Kit (PDK) will include more support for TCP data. Currently, to write many types of TCP plugins you need to delve into sparsely documented internal structures. We will slowly be exposing more fields and making it easier to write your own Kong TCP plugins. We'll also be updating any appropriate Kong-supported plugins to work with TCP.
If you're a plugin maintainer and want to add TCP support, or if you have any questions about TCP support in Kong, please get in touch with us through our community forum, Kong Nation.