
Siemens' Next-Gen API Management: APIOps in Federated Architectures

Company

www.siemens.com/
Industry
  • Manufacturing
Customer Since: 2022

Join us for an insightful session from API Summit 2024 on Siemens' approach to next-gen API management. Daniel Matos, Automation DevOps Engineer, and Sven Legl, IT Architect and Innovations Manager at Siemens, discuss their transition from monolithic systems to microservices, current API management processes, and their federated API management approach, including GitOps and Kubernetes for scalability. Discover the benefits and future outlook of APIOps in federated architectures.

Table of Contents

  • Setting the stage for next-gen API management
  • Breaking down monolithic systems
  • Our API management journey
  • Architecture
  • How to scale in federated architectures
  • Platform automation deep-dive
  • APIs - Finding the right balance
  • APIOps - business benefits

Introduction

Setting the stage for next-gen API management

Sven: Today, we're excited to give you some insight into Siemens' approach to next-gen API management. We'll guide you through the journey we took, share technical insights into APIOps and federated architectures, and show how we established an API management process with full automation behind it. We'll really dive into a federated API management approach where we leverage GitOps and Kubernetes to scale up quite high.

We will then show the benefits and the future outlook of our usage of API management and Kong, so stay tuned. Let's start with a short introduction. I have my colleague, Daniel, with me today.

I'm Sven Legl, an IT architect and innovation manager within Siemens AG, working in Digital Industries (DI). My main topics are innovation, using technology with purpose, and API management. With a greenfield approach, we introduced Kong as the API management solution within Digital Industries and rolled it out to our customers and our businesses.

Daniel: I'm Daniel Matos. I've been working for Siemens Portugal for two years, also in Digital Industries. I work as a DevOps engineer, mostly on automation topics, working with GitLab, Kubernetes, and, of course, Kong, here with my colleague Sven.

Sven: Let's dive in. We're both from Siemens AG, employed in the Digital Industries division. Our purpose is more than just a statement; it is a commitment of Siemens and DI that encapsulates our mission to impact the world, captured in the slogan, “We create technology to transform the everyday for everyone.”

That means we believe in the power of technology to improve lives. Wherever you go, Siemens is usually a big part of it, whether in public transportation, the industry sector, or health care. You can see Siemens products everywhere.

For DI, where Daniel and I are employed, I just want to sketch the picture. Digital Industries has about 77,000 employees overall. What you can see on the screen here is the manufacturing of the Porsche Taycan, where Siemens products guide the car through the full manufacturing process and automatically put the screws in. It is impressive to see what can be done automatically.

I'm now going to give an introduction to where we were at Siemens DI and why we initially decided we needed an API management platform, and then guide you through our current status, where we are today.


Challenge

Breaking down monolithic systems

Sven: Some of you might already know this illustration from Kong. That's exactly where we started our journey: with monolithic systems. We really had big monolithic systems, and with them we lost time to market. We needed to find a new structure, and that structure was slicing those monoliths into different services and then into microservices. We are convinced that connectivity will be the backbone of our distributed systems in the future, and we all need to improve time to market. That's why we decided to really slice it down.

And that's why we started our journey to decouple monolithic systems into smaller pieces. Currently, we're developing more and more microservices, so we're in the middle of the picture.

With that, we're trying to enable not only north-south traffic but now also east-west traffic. And as more and more microservices enter our environment, it's getting more urgent to keep an overview of the APIs, to interconnect them, and to centralize specific processes. That's where we are now with our microservices architectures and our enterprise architecture within Siemens AG, specifically within DI.

That means when we develop software applications, we need to do it rapidly and efficiently, of course, but we also need to keep an overview of them afterwards. That's where API management, the developer portal, and everything around them come into play. And with more and more microservices, we also need to ensure we do not integrate all of those services manually.

That is a big topic. We will guide you today through our processes and our current strategy in terms of APIOps.


Solution

Our API management journey

Sven: I want to show you our API management journey, from starting at zero with a greenfield approach on Kong Konnect to today's environment. Our initial goal was a central point of contact to which all our microservices are attached. Then we found out that transparency is key.

More and more microservices were being developed, and we needed to make sure they could be found, using the dev portal. Securing our services with zero trust is really urgent, and putting a central monitoring and logging platform in between means the individual microservices don't have to take care of everything themselves; they can offload specific parts. That again gives them faster time to market and faster, more resilient development.

In the end, we introduced the APIOps process to really automate all of that. But first, in my last two slides for the moment, I'll give you the technical overview.


Architecture

Sven: I want to first show you our current architecture.

Understanding these basics is really key before we then dive deeper into our specific processes.

Siemens Digital Industries is based on Kong Konnect today. Everything we're using, we're using Kong Konnect for: Vitals; the manager, where our customers and our businesses can do manual work; the dev portal, which exposes our APIs on a central platform; and the admin API for the whole automation.

With Kong Konnect in place, it was clear from the architecture that we needed to set up our own clusters for data-sensitivity reasons. That's why we decided on Amazon EKS to create dedicated Kubernetes clusters for each environment: a production cluster, a test cluster, and a dev cluster. The production and test clusters run in high-availability mode and sync with each other.
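For illustration, attaching such a data plane to Konnect with the Kong Helm chart typically uses values along these lines (a minimal sketch: the control plane endpoints, certificate paths, and replica counts are placeholders, not Siemens' actual configuration):

```yaml
# values-production.yaml - hypothetical values for a Konnect-attached data plane
image:
  repository: kong/kong-gateway
replicaCount: 2                 # high availability for production and test
ingressController:
  enabled: false                # data plane only; configuration comes from Konnect
secretVolumes:
  - kong-cluster-cert           # mTLS certificate for the Konnect cluster link
env:
  role: data_plane
  database: "off"
  konnect_mode: "on"
  cluster_mtls: pki
  cluster_control_plane: example.eu.cp0.konghq.com:443      # placeholder endpoint
  cluster_server_name: example.eu.cp0.konghq.com
  cluster_telemetry_endpoint: example.eu.tp0.konghq.com:443 # feeds vitals/analytics
  cluster_telemetry_server_name: example.eu.tp0.konghq.com
  cluster_cert: /etc/secrets/kong-cluster-cert/tls.crt
  cluster_cert_key: /etc/secrets/kong-cluster-cert/tls.key
```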

What you can now see on the screen is that the vitals displayed in Kong Konnect flow from the data planes up to Kong Konnect, while the configurations, provided manually or by our GitLab flows, are pushed down to our data planes via Kong Konnect. That is the full process, and that's how we set it up.


In the end, our core at the moment is our own development around GitLab and the GitOps process: how we get from OpenAPI specifications to an implementation on our data planes. So how do we get the APIs from our businesses onto that central platform? That is the main question, and it's where I want to introduce you in more depth to our scaling approach for API management with our APIOps flow.

How to scale in federated architectures

Sven: On the very left side, you can see customer enablement. We have our own customer enablement, where we onboard the businesses and customers who want to expose their APIs. In the second step, we deliver an OpenAPI template: a specification template they can easily reuse. From that, they derive their own OpenAPI specification and run their own tests, the customer linting. Afterwards, they do some manual tests around their integrations, and then they can merge their API specs into our source control system.
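As an illustration, such a template could be a skeleton like the following (a hypothetical minimal example; the real Siemens template and its mandatory governance fields were not shown in the session):

```yaml
# openapi-template.yaml - hypothetical starting point a team copies and fills in
openapi: 3.0.3
info:
  title: service-name-here            # replaced by the owning team
  version: 0.1.0
  contact:
    email: owning-team@example.com    # ownership metadata checked later by linting
paths:
  /status:
    get:
      summary: Health endpoint used by the platform's smoke tests
      responses:
        "200":
          description: Service is up
```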

With that in our CI/CD pipeline, we leave the exploration and provisioning phase and move over to our APIOps core. With the API specification in place, we do a full transformation: the Kong configuration, the observability configuration, and the logging configuration are all created directly from the API spec.

As soon as this is done, we run the different deployments and a validation based on API linting. We run smoke tests and also inso tests to make sure the API is really in the state we're expecting. If that's the case, we deploy it to the production data plane and to our dev portal, and we also create a backup.

That's where I want to hand over to Daniel, who will give us a deep dive into our APIOps process.


Platform automation deep-dive

Daniel: Thanks for the introduction. Now we'll dive deeper into the APIOps processes that allow us to focus more on our customers' needs and less on the technical details of our tasks.

So, taking a deeper look at the APIOps pipeline we showed earlier, we have these five steps at the top, which run for each of our environments. In the first step, we gather the services that are going to be enabled for that environment. We generate the Kong configuration from each of the OpenAPI specs with decK, merge everything into one file, and run decK sync to upload it to Kong Konnect. In this step, we also upload the OpenAPI specs to our developer portal.
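A minimal sketch of what that job could look like in GitLab CI, using decK's file commands (image tags, paths, and variable names are assumptions, not Siemens' actual pipeline):

```yaml
generate_and_sync:
  stage: sync
  image:
    name: kong/deck:latest           # decK CLI image; entrypoint cleared for CI use
    entrypoint: [""]
  script:
    # Generate a Kong declarative config from each enabled OpenAPI spec
    - |
      for spec in specs/enabled/*.yaml; do
        deck file openapi2kong --spec "$spec" --output-file "kong/$(basename "$spec")"
      done
    # Merge the per-service files into one declarative configuration
    - deck file merge kong/*.yaml --output-file kong.yaml
    # Push the merged state to the Konnect control plane for this environment
    - deck gateway sync kong.yaml --konnect-token "$KONNECT_TOKEN" --konnect-control-plane-name "$ENVIRONMENT"
  artifacts:
    paths: [kong.yaml]
```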

In the next step, we deploy our pods in the Kubernetes cluster using Helm and the Kong Helm chart. To check everything is still reachable, we perform a smoke test on the specific environment URL. Finally, we send a request to each of our customer services: first by calling them directly, to check for any issues that are not related to us, and then through our API management solution, to check that our configuration is still correct and working as expected. At the end, we just update our observability, which we will get into very shortly.
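Sketched as a GitLab CI job, the deploy-and-verify step might look roughly like this (the chart values files, hostnames, and services list are placeholders):

```yaml
deploy_and_verify:
  stage: verify
  image:
    name: alpine/helm:latest         # assumed image; entrypoint cleared for CI use
    entrypoint: [""]
  script:
    - apk add --no-cache curl
    # Roll out (or update) the data-plane pods with the Kong Helm chart
    - helm upgrade --install kong-dp kong/kong --namespace kong --create-namespace -f "values-$ENVIRONMENT.yaml" --wait
    # Smoke test: the environment entry point must answer
    - curl --fail --silent "https://api.$ENVIRONMENT.example.com/healthz"
    # Per-service check: the upstream directly first, then through the gateway
    - |
      while read -r svc; do
        curl --fail --silent "https://$svc.internal.example.com/status"
        curl --fail --silent "https://api.$ENVIRONMENT.example.com/$svc/status"
      done < enabled-services.txt
```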

Our approach to APIOps is depicted in the diagram at the bottom. We have an initial linting phase where we check our API specs for errors and inconsistencies. Here we're just using Spectral, with some of the default rules enabled plus some rules we defined according to Siemens' own governance policies.
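A hypothetical ruleset of that shape, with Spectral's OpenAPI defaults plus one custom rule standing in for a Siemens governance policy:

```yaml
# .spectral.yaml - default OpenAPI rules plus an illustrative custom rule
extends: ["spectral:oas"]
rules:
  must-declare-owning-team:
    description: Every API must declare a contact (stand-in for a governance rule)
    given: $.info
    severity: error
    then:
      field: contact
      function: truthy
```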

Next, we perform the steps we mentioned earlier in our dev environment. If everything goes fine, we move on to our test environment. In test, it's the same, but we can roll back if needed, and the rest of the pipeline is then aborted.

Finally, if there is a change on our main branch, which is our source of truth, we can deploy to production. However, we first must ensure that everything is functioning correctly, so we perform an inso test just like the one we performed after deploying. Only if this succeeds do we perform the configuration change and the deployment. Here, too, a rollback can take place.
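In GitLab CI terms, that production gate could be expressed roughly like this (stage names, the Insomnia test suite name, and the token variable are made up for the sketch):

```yaml
verify_before_production:
  stage: pre-production
  rules:
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH   # main is the source of truth
  script:
    # Re-run the Insomnia (inso) tests before touching production
    - inso run test "Production readiness"          # hypothetical test suite name

deploy_production:
  stage: production
  needs: [verify_before_production]                 # runs only if the gate succeeded
  rules:
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
  script:
    - deck gateway sync kong.yaml --konnect-token "$KONNECT_TOKEN" --konnect-control-plane-name production
```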

As the last step, a release is created with an appropriate version number, and the changes are deployed.
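Creating that release can be done with GitLab's built-in release keyword, for example (how the version number is computed was not shown, so $VERSION is assumed to be set earlier in the pipeline):

```yaml
create_release:
  stage: release
  image: registry.gitlab.com/gitlab-org/release-cli:latest
  script:
    - echo "Creating release v$VERSION"
  release:
    tag_name: "v$VERSION"                 # version assumed to be computed earlier
    description: "Automated APIOps release"
```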

Looking now at the observability update step: it gathers the services that are enabled for the current environment and commits them to a file in a separate repo, our observability repo.
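That step can be a plain git commit from the pipeline, something like this sketch (the repo URL, file layout, and token variable are assumptions):

```yaml
update_observability:
  stage: observability
  image:
    name: alpine/git:latest               # assumed image; entrypoint cleared for CI use
    entrypoint: [""]
  script:
    # OBS_REPO_TOKEN: a project access token for the observability repo (assumed name)
    - git clone "https://gitlab-ci-token:${OBS_REPO_TOKEN}@gitlab.example.com/di/observability.git"
    - cp enabled-services.yaml "observability/environments/${ENVIRONMENT}.yaml"
    - cd observability
    - git config user.email "apiops-bot@example.com"
    - git config user.name "APIOps Bot"
    - git add . && git commit -m "Update ${ENVIRONMENT} service list" && git push
```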

This commit then triggers a pipeline that runs our Terraform module, which is built on the Grafana provider. The module has a predefined set of alarms that we define, and it creates those alarms and a dashboard for each service.

When it runs, it checks whether there are service alarms and dashboards that need to be added to or removed from our Grafana organization, and it does so in a totally automated manner. Let's say we've added three services and our set has five alarms: it will automatically create fifteen alarms and three dashboards without any intervention from our team.
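The triggered pipeline in the observability repo can then be as small as this sketch (the Terraform module itself, which iterates over the services-times-alarms combinations with the Grafana provider, is not reproduced here):

```yaml
apply_monitoring:
  image:
    name: hashicorp/terraform:latest      # assumed image; entrypoint cleared for CI use
    entrypoint: [""]
  script:
    - terraform init
    # With 3 committed services and a set of 5 alarm definitions, the module's
    # iteration over their combinations yields 15 alert rules and 3 dashboards
    - terraform plan -var-file="environments/${ENVIRONMENT}.tfvars" -out=plan.out
    - terraform apply -auto-approve plan.out
```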

Since we serve multiple customers with really different needs, we needed to organize our repos to support this federated approach. We have our main repo, which is the core of our service and contains all the base configurations and all the templates and rules we apply to our gateways and API specs. The CI templates, the linting rules, and even all the scripts we run in CI are stored there, as well as our Kong Helm chart.

This is also where our customers find all the information they need to integrate our APIOps processes into their own flows. In addition, we have one repo for each of our data plane sets. Each contains the specific customers' OpenAPI specs, their configurations, the Kong plugins, and the configuration specifics for the data planes and the infrastructure, as well as the deployment pipeline we just showcased.

To keep these multiple data planes up to date, we needed to define yet another CI flow that runs once a week, which we call our maintenance. We are not constantly deploying to production, as that would increase the risk of misconfigurations and downtime, and we don't want that. This is how we keep our production environments up to date with minimal disruption and without making our customers wait too long for changes to be deployed.

The pipeline starts by defining a version tag in the main repo, which is shared by all the data plane repos and allows us to later relate all the changes in one place. Then it creates a child pipeline for each data plane repo, and these run simultaneously. The pipeline that runs on each repo is the same one we saw previously, with the steps shown at the top for each of the environments.

Each of them runs independently, which is perfect for us, as delays in one pipeline won't delay the others. Let's say data plane one is taking too long to deploy to production: it will never affect data plane two, so the changes there can reach the production environment much quicker.

The parent pipeline, the one that runs on the main repo, is the one that waits for all the child pipelines. If a delay occurs, it takes longer, but it waits for everything to finish, whether successfully, with errors, or with rollbacks. Finally, it creates a release on the main repo, aggregating all the changes that occurred since the last one, and it references the releases of the child pipelines if they were successful. If they weren't, they aren't mentioned.

This is all possible thanks to GitLab's trigger feature with the depend strategy, which waits for the triggered pipelines to finish before resuming the execution of the parent pipeline. Our customers can select the templates they want to use from our main repo, adapt them to their needs, and integrate them into their existing workflows as needed. With this, they can get early feedback on their OpenAPI specifications before reaching out to us.
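In the parent pipeline on the main repo, each child trigger looks roughly like this (project paths are placeholders; the weekly cadence would come from a GitLab pipeline schedule):

```yaml
maintain_dataplane_one:
  rules:
    - if: $CI_PIPELINE_SOURCE == "schedule"    # the weekly maintenance run
  trigger:
    project: di/api-platform/dataplane-one     # one job like this per data plane repo
    branch: main
    strategy: depend                           # parent waits for this child pipeline
```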

Each customer is given an access token for the data plane repo they are using, and only for that one. This allows them to change the specification and the configuration whenever they need to, and also to create automatic merge requests to us. When they are happy with the changes, the merge request can be automatically flagged as ready for review from their repo, allowing us, the platform team, to perform a manual inspection and merge it if it fulfills our quality assurance criteria.

On this slide, we have an example of how to integrate the merge request flow I just mentioned into a customer repo. The customer is free to put any rules in place, and in whatever CI stage they desire, to make it suit their existing pipelines.

As we can see here, from lines one to four they just need to include our API template from our project, pointing to the main branch. Then they must include the “.openapi_upload” job and define the variables we documented in the template: one stating which repo to commit to and the other the path to the OpenAPI spec in the repo. We don't see the third one, as it should be stored as a secret, hence the comment on line eight.

Finally, they just include the “.review_mr” job, and that's it. In this example, we made it a manual step, as we wanted to keep it realistic; this avoids flagging the merge request every time the pipeline runs, which is something we don't want.

As you can see, this can be done in under fifteen lines if we don't count the spacing and the comments. This example covers just the merge request flow integration, but it's very similar to all the other integrations we have for the CI.
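The slide itself isn't reproduced here, but from the description, the customer-side file would look something like the following (project paths, job names, and variable names are reconstructed for illustration; only “.openapi_upload” and “.review_mr” come from the talk):

```yaml
# .gitlab-ci.yml in a customer repo - hypothetical reconstruction of the slide
include:
  - project: di/api-platform/main-repo        # "lines one to four": include the template
    ref: main
    file: templates/apiops.yml

upload_spec:
  extends: .openapi_upload                    # job provided by the template
  variables:
    TARGET_DATAPLANE_REPO: di/api-platform/dataplane-one   # which repo to commit to
    OPENAPI_SPEC_PATH: specs/my-service.yaml               # path to the spec in this repo
    # The third variable, the access token, is stored as a masked CI/CD secret,
    # not in this file - the comment "on line eight" of the slide

flag_ready_for_review:
  extends: .review_mr
  when: manual                                # manual, so MRs aren't flagged on every run
```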

And now I'll hand it back to Sven to explain the trade-offs of our approach.

Sven: Thank you very much, Daniel. It's always very interesting to get those details.

I'll now introduce you to how we find the right balance, and to the organizational structure we have behind it.


APIs - Finding the right balance

Sven: First of all, for us at Siemens, it was a frequent question, and a real challenge, from the beginning to find the right balance between speed and consistency.

Speed is one thing: with increasing speed, we can introduce more customers to our API management platform. But we also need to ensure consistency, to apply governance rules and maintain a standard everybody can align to. That's where we arrived at the organizational view we are currently using, which is our North Star at the moment. The API platform sits on the bottom layer.

We integrated a governance layer in between. That means the API platform team is not the one taking all the API specs from customers, reviewing them, handing them back with “please match these specific Siemens rules,” and then approving and proving them once again. Instead, the platform team supports the governance layer: it defines governance rules, guidelines, restrictions, and templates, as Daniel showed. With those foundations in the governance layer, the dev teams can download the templates and start their development. They can also use our linting and our central processes to check whether their API spec is compliant and matches the prerequisites from a Siemens maturity point of view. The dev teams then get direct, automated feedback from the governance layer via our supported processes.

That is quite cool, and it also gives us the freedom to really scale up without scaling the platform team at the same time.


Results

APIOps - business benefits

Sven: On my last slide, I'll get to the benefits we're seeing in an APIOps approach. It's an approach where we also see high business benefits, and that's not always a given. With the IT systems we implement, we always aim for technology with purpose; we aim to deliver business benefits. When I ask our product owner, I always hear, “Wow, what you established with the APIOps process really increases speed. You have full end-to-end automation.”

That is the first benefit: increased speed. We can fully automate everything end to end with our delivery pipelines. That means onboarding is quicker, operation is simplified and standardized, and it's highly scalable, because for us it doesn't matter whether one API integration comes in, or ten, or more. They can all be handled automatically.

The second is improved consistency. From a governance point of view, we need to establish specific standards. We have Siemens' rules in place, and we need to make sure that every microservice matches those rules. With our CI/CD pipelines, we can really make sure those specified standards are validated against our rules.

In the end, lower costs. Faster time to market and lower costs go hand in hand: by really reducing our time to market, we also lower our costs, since we can work with smaller teams and standardized automation. That is why we are further pushing APIOps from a technical but also a business point of view. And with that, I'm ending our presentation for today. That was our deep dive into our APIOps approach.

Thank you all for joining. For us, it was a pleasure being here today. We're really happy to exchange experiences, so do not hesitate to get in contact with us.


