7 Signs Your Kafka Environment Needs an API Platform
Managing Kafka as an island got you this far. But scaling it securely and efficiently across your organization? That's another matter entirely.
Apache Kafka is the leading event streaming platform, used by developers and data engineers worldwide to build reliable, scalable, real-time data pipelines and event-driven applications. With Kafka, event producers can reach many consumers, each of which can react to events instantly to power use cases like fraud detection, personalization, and system monitoring.
However, as powerful as Kafka is for moving massive amounts of data around, it does come with some major challenges. Luckily, an API platform can help solve some of these challenges.
The question is: Is your specific Kafka environment a good candidate for an API platform? Here are seven signs your Kafka environment would benefit from the discipline of API management:
1. Sharing Kafka environments between teams is hard
Logically, some teams could share the same Kafka infrastructure, but organizational boundaries and differences in the data that each team should see often get in the way. In the past, you quickly jumped to creating a whole new set of topics or, worse, entirely new clusters to meet these needs. Of course, this also means a unique set of ACLs and other security controls to manage.
How can Kong Event Gateway help?
Kafka wasn’t built for easy multi-tenancy, and ACLs are limited and complex to configure at scale. Kong Event Gateway provides abstractions such as Virtual Clusters and Virtual Topics to create logical isolation between teams without duplicating data or infrastructure, promoting sharing across multiple teams and reducing cost and complexity.
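To make the idea concrete, here is a minimal sketch of what topic virtualization does conceptually: each tenant addresses a logical topic name, and a mapping layer resolves it to a shared physical topic. The mapping scheme and names below are illustrative assumptions, not Kong's actual implementation.

```python
# Conceptual sketch: a gateway-style mapping from (tenant, virtual topic)
# to a shared physical topic, so teams share data without sharing namespaces.
# The mapping table and names are hypothetical, for illustration only.

def to_physical_topic(tenant: str, virtual_topic: str, mapping: dict) -> str:
    """Resolve a tenant's virtual topic name to the shared physical topic."""
    try:
        return mapping[(tenant, virtual_topic)]
    except KeyError:
        # A tenant with no mapping simply cannot see the topic.
        raise PermissionError(f"{tenant} has no access to {virtual_topic}")

MAPPING = {
    ("payments", "orders"): "shared.orders",
    ("analytics", "orders"): "shared.orders",  # same data, no duplication
}
```

Both teams read `shared.orders`, yet neither needs to know the physical name or a new cluster, which is the cost-saving point above.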
2. Non-Kafka developers find it hard to use real-time data
Are you building real-time applications without extensive Kafka protocol knowledge? Or perhaps your applications are already HTTP-based, and setting them up as Kafka clients would just add confusion and/or more engineering time? You’re likely building custom connectors, writing extra glue code, or spinning up dedicated proxies just to bridge the gap. These workarounds slow you down.
How can Kong Event Gateway help?
With Kong Event Gateway, you can expose Kafka streams as event APIs that communicate over non-Kafka protocols and styles (e.g., HTTP/REST, Server-Sent Events, Webhooks, WebSockets) without rewriting your apps. This enables your developers to expose, produce, and consume real-time data from Kafka in a manner that works for your specific applications and architecture. Kong meets you where you are.
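As a sketch of what consuming a Kafka stream over Server-Sent Events looks like for an HTTP-only application, here is a minimal SSE frame parser. The gateway URL in the comment is a hypothetical placeholder; the SSE framing itself (lines of `data: ...`, events separated by blank lines) is standard.

```python
# Minimal Server-Sent Events parser: an HTTP client can consume a Kafka
# stream this way without speaking the Kafka protocol at all.

def parse_sse(stream_lines):
    """Yield the data payload of each SSE event from an iterable of lines."""
    buf = []
    for line in stream_lines:
        if line.startswith("data:"):
            buf.append(line[5:].strip())
        elif line == "" and buf:  # a blank line terminates an event
            yield "\n".join(buf)
            buf = []

# In a real app the lines would come from a streaming HTTP response, e.g.:
#   resp = requests.get("https://gateway.example.com/streams/orders", stream=True)
#   for payload in parse_sse(resp.iter_lines(decode_unicode=True)): ...
events = list(parse_sse(['data: {"order_id": 1}', "", 'data: {"order_id": 2}', ""]))
```

The point is the absence of any Kafka client library: no brokers, consumer groups, or partition awareness in the application code.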
3. Security teams block external access to event streams
Kafka was built for trusted internal clients and not for the modern world of SaaS apps, partners, and regulated data sharing. With sensitive data flying around, you naturally want to keep access tight. Given Kafka’s limitations around auth and external access, organizations are often stuck building custom authorization layers or missing out on the opportunity to productize real-time data for external consumption.
How can Kong Event Gateway help?
With Kong Event Gateway, you can apply the same enterprise-grade security you use for APIs — OAuth2, JWT, API keys, encryption — to Kafka streams automatically by policy. Get one unified platform for APIs and event streams that integrates existing identity platforms and security best practices.
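The kind of check such a policy performs can be sketched in a few lines: inspect a token's claims and enforce a per-topic scope before allowing a client to consume. The claim and scope names here are illustrative assumptions; in practice the gateway would first validate the JWT's signature against your identity provider.

```python
# Sketch of a scope check a gateway policy might apply to an event stream.
# Claim names and the "read:<topic>" scope convention are hypothetical.

def can_consume(claims: dict, topic: str) -> bool:
    """Allow consumption only if the token carries a read scope for the topic."""
    return f"read:{topic}" in claims.get("scope", "").split()

claims = {"sub": "partner-app", "scope": "read:orders write:invoices"}
```

Because the check runs in the gateway, the Kafka cluster itself never has to model external identities.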
4. Managing change can be a nightmare
Kafka configuration is notoriously hard to change once you get going. Producers and consumers have tight coupling to specific topic names, brokers, and partitions. Do you want to change a topic name or message header? Perhaps you want to move certain topics to different clusters? Will this change break the various applications that connect to your cluster — or at least the ones that you know about? Does every client application need to know about and agree to this change?
How can Kong Event Gateway help?
Kong Event Gateway provides an abstraction layer, avoiding direct access to Kafka clusters and event brokers, meaning that many changes can be made exclusively in the gateway without changing clients or Kafka broker configuration. Virtual Topics provide differentiated access to the same topic — change headers, encryption, or redaction policies for one application without affecting others. Decoupling your applications from your Kafka infrastructure enhances agility and simplifies change.
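A redaction policy of this kind can be sketched as a per-consumer view over the same event. The consumer names, field names, and policy table below are illustrative, not Kong configuration.

```python
# Sketch of a Virtual Topic-style view: two applications read the same
# physical topic, but one sees a redacted version of each event.
# The policy table and field names are hypothetical.

REDACTED_FIELDS = {"external-partner": {"card_number", "email"}}

def apply_view(consumer: str, event: dict) -> dict:
    """Return the event as this consumer is allowed to see it."""
    hidden = REDACTED_FIELDS.get(consumer, set())
    return {k: ("***" if k in hidden else v) for k, v in event.items()}

event = {"order_id": 7, "email": "a@example.com"}
```

The internal app sees the event unchanged; the partner sees `email` masked, and neither producer nor broker configuration had to change.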
5. Documentation is difficult to find, making reuse challenging
Without great documentation about what’s in your Kafka environments, teams will default to recreation rather than reuse, resulting in silos, slower innovation, and increased costs. Where do you go to discover what topics and event streams already exist? Are you allowed to reuse a topic? Is there an AsyncAPI specification? Who owns the topic? How do you get access? Without easy answers to these questions, reuse is rare, and every team works in isolation.
How can Kong Event Gateway help?
An API platform like Kong is more than just a proxy. It includes self-service developer portals*, which allow developers to find documentation and request access to event APIs. This means that the developers who are building real-time applications can easily find, learn about, and start consuming real-time event streams, all in a self-service manner.
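The AsyncAPI specification mentioned above is the natural artifact to publish in such a portal. As a rough illustration (the channel and field names are made up), a minimal AsyncAPI document for an orders stream might look like:

```yaml
# Minimal, illustrative AsyncAPI 2.6 document for a hypothetical orders stream.
asyncapi: "2.6.0"
info:
  title: Orders Events
  version: "1.0.0"
  description: Real-time order events. Owner and access details go here.
channels:
  orders:
    subscribe:
      message:
        contentType: application/json
        payload:
          type: object
          properties:
            order_id: {type: integer}
            status: {type: string}
```

A document like this answers the discovery questions above: what the stream contains, who owns it, and what shape its events take.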
6. You lack visibility and observability
Your event streams are becoming ever more critical, which makes it even more important for platform and infrastructure teams to know what’s going on. Security teams demand more audit and governance as more participants, particularly external ones, join your event-driven architecture (EDA).
How can Kong Event Gateway help?
Kong Event Gateway is more than just a proxy. Kong Konnect’s observability solutions give platform teams the data they need to quickly diagnose issues and audit access — all in one unified platform, alongside your other APIs*.
7. You need to consider public cloud in your event-driven architecture
Perhaps you're moving your Kafka environment to the cloud, such as Amazon Managed Streaming for Apache Kafka (MSK). Perhaps you need cloud participants, such as SaaS applications like Salesforce, customers, or partners, to join your event-driven solution. You need to authenticate these third parties securely, and they may speak HTTP or native Kafka protocols. You need assurance that third parties can see and access only the data they need. And since your Kafka environment sits in a private network, you need to provision connectivity to clients outside it. Configuring Kafka to meet all of these needs is less than straightforward.
How can Kong Event Gateway help?
Use Kong Event Gateway to proxy your Kafka clusters. By exposing only Kong Event Gateway and keeping your Kafka infrastructure in your secure private network, you reduce the vulnerable surface area. Configure policies such as OAuth or SCRAM authentication controls in the gateway — without modifying your Kafka configuration directly. Provide simple HTTP API access to your Kafka topics. With Virtual Clusters and Virtual Topic policies, you can restrict which topics are visible to each consumer, and even apply encryption with specific keys.
With Kong, you can be much more confident and agile in your cloud-based event-driven applications, opening up more innovative solutions.
Is it time to rethink how you manage and expose your Kafka data?
If any of these sound familiar, it’s time to rethink how you manage and expose your Kafka data, and an API platform might be the first place to start.
A modern API platform solution transforms Kafka from a complex event-streaming engine into a secure, self-service platform for real-time data and event streams.
If you're interested in finding out more about Kong’s API platform for EDA, take a look at our blogs on Kong Event Gateway or request a demo.
* Note: Kong Native Event Proxy, part of Kong Event Gateway, is an Early Access Product. Some features referenced in this blog will be rolled out incrementally this year.
