Siemens Bridges Kafka Event Streams to APIs with Kong
Leading technology company uses Kong to centralize security and enable near-real-time access for legacy and modern systems.
At least 1 week saved per Lambda development
faster data distribution

Siemens is a leading technology company – focused on industry, infrastructure, transport, and healthcare – that transforms the everyday for billions of people.
Enabling organization-wide access to Confluent Kafka event streams
In 2019, Siemens implemented a stream and event-first paradigm with Confluent Kafka. This allows them to distribute technical and commercial product metadata. But with millions of highly configurable products, they faced a significant challenge in getting the data to all of their systems in a reasonable amount of time.
At API Summit 2025, Sven Legl, Senior IT Key Expert, shared how he and the Digital Industries team created a bridge between these event streams and APIs to reduce redundant data storage, eliminate costs, and enable near real-time data distribution.

Sven Legl presents at API Summit 2025 in New York
Data difficulties — redundant storage and firehose distribution
While the Confluent Kafka-based event streaming solution reduced the time it took to distribute information to systems, it created new challenges. First, many of the systems used in the organization weren't able to consume Kafka messages directly. These legacy systems still relied on synchronous data access.
"If we have a system that only needs specific product information, that's quite a pain because they always get all the data, like on a firehose," Legl said.
Another challenge was redundant data storage.
"All the tools that needed access to product information were in charge of setting up their own storage, storing all that product master data, and then picking from that the data they actually needed," Legl said. "That raised costs because of the high overhead to cover for data storage."
The team needed a way to bridge streaming to APIs to allow for one persistent data storage location, with a synchronous way for applications to have fine-grained access to just the data they need.
“We have a highly distributed landscape within Siemens… With this system, we are creating a governance layer where we can make sure that security is handled accordingly.”
Bridging API-first and event-first
Legl had a clear goal: to introduce a governance layer that would allow for better control and distribution of the Kafka messages. To do this, he and the team at Siemens built a three-layer system that uses AWS Lambda to consume Kafka events, stores them in MongoDB, and exposes them via Kong API Gateway.
As the architectural diagram below shows, the Lambda service consumes the data, and all events are pushed and written into the MongoDB document database. This allows fine-grained access to data and lets legacy systems reach data located in Kafka via APIs. It has also created a more consistent approach to security.
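The consume-and-store step of this pipeline might look roughly like the sketch below. All names here (the `productId` field, the document shape) are illustrative assumptions, not Siemens' actual schema; the event layout is the one AWS Lambda uses for self-managed/MSK Kafka triggers, which delivers records grouped by topic-partition with base64-encoded values.

```python
import base64
import json

def kafka_event_to_documents(event):
    """Turn an AWS Lambda Kafka trigger event into MongoDB-style documents.

    AWS groups Kafka records under event["records"] keyed by
    "topic-partition", with base64-encoded message values.
    """
    documents = []
    for records in event.get("records", {}).values():
        for record in records:
            payload = json.loads(base64.b64decode(record["value"]))
            # Illustrative document shape -- the real product schema differs.
            documents.append({
                "_id": payload["productId"],   # assumed upsert key
                "metadata": payload,
                "kafka_offset": record["offset"],
            })
    return documents

def handler(event, context):
    docs = kafka_event_to_documents(event)
    # In the real system each document would be upserted into MongoDB, e.g.:
    # collection.replace_one({"_id": d["_id"]}, d, upsert=True)
    return {"written": len(docs)}
```

Keeping the transform separate from the (commented-out) database write is what makes the fine-grained access possible later: each product lands as one addressable document rather than a position in a firehose stream.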
“I'm quite happy to have a Lambda plugin available with Kong, because it really enables us to shift security responsibilities away from developers and integrate it directly in the API gateway,” Legl said.
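Kong's bundled `aws-lambda` plugin is what enables this direct invocation. A minimal sketch of attaching it to a route via Kong's Admin API follows; the route name, region, and function name are illustrative assumptions, and in practice credentials are typically supplied via IAM roles rather than inline keys.

```python
import json
from urllib import request

# Illustrative values -- region, function name, and route name are assumptions.
plugin_config = {
    "name": "aws-lambda",
    "config": {
        "aws_region": "eu-central-1",
        "function_name": "product-metadata-reader",
        # Kong invokes the function itself, so no AWS API Gateway
        # needs to sit in front of the Lambda.
    },
}

def attach_plugin(admin_url, route_name, payload):
    """POST a plugin onto a Kong route via the Admin API."""
    req = request.Request(
        f"{admin_url}/routes/{route_name}/plugins",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    return request.urlopen(req)  # returns the Admin API response

# Example usage (assumes a local Kong Admin API and an existing route):
# attach_plugin("http://localhost:8001", "product-route", plugin_config)
```

Because security plugins (authentication, rate limiting, and so on) attach to the same route, the Lambda code itself stays free of that responsibility.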
The system also ties into Siemens’ existing APIOps pipeline. All of this connects with their full observability stack for unified logging, monitoring, tracing, and alerting across the entire platform.

"We discovered that this new approach saved at least a week for developing each Lambda function because we have shifted security completely into Kong."
Enabling smart data access with reduced risk and cost
Legl and the Siemens team have seen many benefits since launching their new governance layer, from security improvements to cost savings.
Security and isolation: Because requests now pass solely through Kong, the attack surface is reduced and the Lambdas are not exposed to the internet.
Cost and efficiency: By reducing complexity and unifying on one governance layer, the team can invoke Lambda functions directly from Kong, without AWS API Gateway. This saves both one-time costs and longer-term operational costs.
Fine-grained authorization: The Siemens team has implemented multiple points of authorization tailored to the different groups that access the data shared via the event stream – RBAC for consumers and JWT claim checks. This leads to clean, auditable access control.
Vendor agnosticism: Siemens as a whole has a multi-cloud architecture and needed a solution that would support this. Using Kong, the API stays consistent through any upstream changes.
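As an illustration of the kind of claim-based check described above (not Kong's internal implementation), the sketch below decodes a JWT payload segment and tests a role claim. The `roles` claim name is an assumption, and a real gateway verifies the token's signature before trusting any claim; this sketch skips that step.

```python
import base64
import json

def decode_claims(token):
    """Decode the payload segment of a JWT.

    No signature check is done here -- in a real deployment the
    gateway verifies the signature before any claim is trusted.
    """
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore base64 padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

def authorize(token, required_role):
    """Allow the request only if the token carries the required role.

    The 'roles' claim name is an illustrative assumption.
    """
    claims = decode_claims(token)
    return required_role in claims.get("roles", [])
```

Checks like this, applied at the gateway rather than inside each Lambda, are what keep the access control auditable in one place.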
By successfully bridging their Confluent Kafka event streams with APIs using Kong Gateway, the Siemens team created a crucial governance layer that centralized security, enabled fine-grained, near-real-time data access for all systems — modern and legacy — and significantly reduced redundant data storage and operational costs. This strategic move saved weeks of development time and established a consistent, auditable, and vendor-agnostic foundation for their smart data distribution, demonstrating Kong's role in accelerating and securing complex, event-driven architectures.