Gateway API: From Early Years to GA
In the Kubernetes world, the Ingress API has long been the staple for exposing your Services from outside your cluster network. Ingress has served us well over the years and can be found in several dozen different implementations, but as time has passed and Kubernetes has grown, it has become clear that users need more than Ingress is able to deliver.
Due to the limited scope of the upstream Ingress specification, individual implementers had little choice but to rely on annotations, or arbitrary metadata placed on Kubernetes resources, alongside their own custom resources for much of their functionality. This "annotations wild west" meant that any two given `Ingress` resources were rarely anywhere close to portable between different implementations.
I'm Shane Utt, and I'm a maintainer of Gateway API along with my co-maintainers Rob Scott (Google) and Nick Young (Isovalent). In this post, we'll share the story of how contributors across various organizations came together to create the next generation of Kubernetes networking APIs in upstream (Gateway API), and how that project grew to become the most collaborative API in Kubernetes history.
Starting point
Towards the end of 2019 at the San Diego KubeCon, a group of people from the Kubernetes Networking Special Interest Group (SIG Network), including Kong's own Harry Bagdi, came together from different organizations and backgrounds to discuss the needs of Kubernetes users. This group made history by kickstarting a successor project to Ingress, founded on the goals of being:
- Generic
- Expressive
- Extensible
- Role Oriented
One of the biggest challenges in building out the spec was maintaining high velocity and broad collaboration for a completely new API in upstream Kubernetes. Due to its widespread adoption, Kubernetes itself had slowed down considerably in the process it takes to land new features.
With that in mind, how were we going to generate the speed required for a greenfield project within the confines of a much larger project that was moving toward long-term stability? The answer ended up being Kubernetes' own extension systems: we decided to develop Gateway API as an official Kubernetes project, but to ship its resources as Custom Resource Definitions (CRDs) rather than adding them to the core code base. This let Gateway API act like a flexible add-on while remaining an official Kubernetes project at the same time.
To this day our original goals have remained largely unchanged; in fact, they have mostly been extended further. Harry helped kickstart the concept and became one of the founding maintainers of the Gateway API project, setting in motion a series of events that would change networking in the Kubernetes world forever.
Early years
In its formative years of 2020 and 2021, the project received massive attention from the community and quickly gained adopters who added support for the alpha APIs to their ingress solutions. It was during this time that Kong's Kubernetes team added initial alpha-level Gateway API support to our Kubernetes Ingress Controller (KIC), made available under a feature gate for users to try out and provide feedback.
Unlike Ingress, which was a single API, Gateway API evolved to include several individual and interoperable APIs, each aligned with a different role that could be fulfilled:

- `GatewayClass` — indicates which implementation is responsible for `Gateway` resources and their `*Route`s, and is intended to be managed by infrastructure providers.
- `Gateway` — manages the lifecycle and configuration of proxies, load balancers, etc. This is designed to be managed by cluster operators.
- `HTTPRoute`, `TCPRoute` and `UDPRoute` — for Layer 7 and Layer 4 traffic, with the intention of being managed by application developers.
Unsurprisingly, Layer 7 HTTP functionality was the clear winner in adoption, judging by the number of implementations. `HTTPRoute` grew to be our most extensible and active API for ingress solutions, as well as the single greatest alternative to the venerable Ingress resource. During this time we greatly exceeded the capabilities of Ingress by adding new features not available in the classic APIs, including but not limited to:
- Additional matching criteria
  - Header matching
  - Method matching
  - Query parameter matching
- Security capabilities
  - Cross-Namespace binding
  - Cross-Namespace forwarding
- Brand new functionality
  - Header Modifiers
  - Request Mirroring
  - Request Redirects
  - URL Rewrites
  - Weight-based traffic splitting
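Many of these capabilities are expressed directly on `HTTPRoute`. As an illustrative sketch (all names, header values, weights, and ports here are hypothetical), a single route can combine header and method matching, a header modifier filter, and weight-based traffic splitting:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: canary-route                 # hypothetical name
spec:
  parentRefs:
  - name: example-gateway            # hypothetical Gateway to attach to
  rules:
  - matches:
    - method: GET                    # method matching
      headers:
      - name: x-canary               # header matching
        value: "true"
    filters:
    - type: RequestHeaderModifier    # header modifier
      requestHeaderModifier:
        add:
        - name: x-routed-by
          value: gateway-api
    backendRefs:                     # weight-based traffic splitting
    - name: app-v1
      port: 8080
      weight: 90                     # ~90% of matching traffic
    - name: app-v2
      port: 8080
      weight: 10                     # ~10% of matching traffic
```

None of this required implementation-specific annotations — the split, match, and rewrite semantics are part of the portable specification itself.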
This list kept growing and positive feedback continued to flow in. Our commitment to the project matched its growth, and we provided both project stewardship and technical direction to keep it moving forward. Kong's commitment further led us to develop the Kong Gateway Operator (KGO), founded on a "Gateway API first" principle. This included support for `GatewayClass`, `Gateway`, `HTTPRoute`, `TCPRoute` and `UDPRoute` from the Gateway API project as the fundamental APIs used to deploy and manage the lifecycle of Kong control planes (KIC) and data planes (Kong Gateway) on Kubernetes clusters, all built on Kubernetes operator principles.
The community continued to grow through these years and the Kubernetes networking SIG continued to iterate. The sustained growth of implementations in 2021 was proof enough for us to move our most commonly implemented APIs (`GatewayClass`, `Gateway` and `HTTPRoute`) to beta, and from there adoption continued to accelerate.
Critical mass
In 2022 the Gateway API project became a lightning rod for the community, with an enormous amount of attention devoted to it. The buzz was growing, and a litany of new implementations added support for it. There were now more than 20 implementations supporting Gateway API, and that number kept growing. Gateway API had become a "hot topic" at KubeCon, with several talks given on the subject.
Continuing our investment in Gateway API, Kong developed a load balancer named "Blixt". This project uses eBPF as the data plane and Gateway API as its only control plane API, serving ingress traffic at blazingly fast speeds (the word blixt, in fact, means "lightning flash" in Swedish), with support for the `TCPRoute` and `UDPRoute` Layer 4 ingress options. The project had a variety of goals, many of which revolved around exploring burgeoning technologies, but the main goal quickly became supporting Layer 4 in Gateway API. From there we donated the project to the Kubernetes SIGs community to provide a CI and testing tool, as well as a reference implementation for Gateway API control planes.
While the Gateway API project was in flux during this time, we also saw huge growth in its scope and goals. Perhaps the most notable change was the inception of the "Gateway API for Mesh Management and Administration" (GAMMA) sub-project. Until this point, Gateway API resources had only been considered for ingress (north-south) network traffic topologies, but GAMMA blazed a new trail, and several implementations started to experiment with using `HTTPRoute` for traffic within the service mesh. Members of our mesh team had already added support for ingress using Gateway API to Kuma and Kong Mesh. The team further contributed to the project by helping to experiment and iterate on GAMMA to provide `HTTPRoute` support for east-west traffic.
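The key difference in the GAMMA pattern is the parent of the route: instead of attaching to a `Gateway`, an `HTTPRoute` attaches to a `Service` to govern east-west traffic addressed to it. A minimal sketch (resource names and ports are hypothetical, and the degree of support varies by mesh implementation):

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: mesh-route                   # hypothetical name
spec:
  parentRefs:
  - group: ""                        # core API group: the parent is a Service, not a Gateway
    kind: Service
    name: example-service            # traffic sent to this Service is governed by the route
  rules:
  - backendRefs:
    - name: example-service-v2       # e.g. steer in-mesh traffic to a newer workload
      port: 8080
```

Because the same `HTTPRoute` resource works in both topologies, application developers learn one routing API for ingress and mesh alike.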
With the project being further along in beta and showing no signs of slowing down, it was clear that this was in fact going to be the future of Kubernetes networking, especially now that we saw our APIs were generic enough to be applied effectively to other networking contexts like mesh. The API was ready to deliver on the goals for a first major "generally available (GA)" release, and "the road to GA" had begun.
The road to GA
In the Spring of 2023 at KubeCon Amsterdam, we decided that our goal would be to ship `v1.0.0` prior to KubeCon US in Chicago that November. We knew this was a tall order, but the timing was right to commit to stabilization: a number of organizations were already running production systems on Gateway API beta resources, and the investment of everyone in the community needed to culminate in a stable way forward. During this period the maintainers met regularly (weekly, in the final months) to organize, groom, and strategize on getting the release out the door. Community interest in new features did not decline during this time, so we had to remain receptive to those ideas and serve the community in all capacities while keeping things focused for the release.
Today the future is here: Gateway API has reached its first generally available (GA) version, v1.0.0, in time for KubeCon Chicago, almost exactly four years after the project began. Gateway API found unmatched community support from the moment of its inception and now steps into the limelight to provide the foundation for the future of Kubernetes networking. This is, however, only the beginning! The Gateway API community will continue to develop and mature the APIs, delivering more features and value to our users. That is our commitment to our own users, just as we remain committed to upstream development.
Thank you to everyone who has helped Gateway API become what it is today, and we look forward to a continued commitment to maintain and grow the project from here so that it may serve all Kubernetes users well!