How We Built It: Managing Konnect Entities from K8s Clusters with KGO
Patryk Małek
Sr. Software Engineer, Kong
We recently released Kong Gateway Operator 1.4 with support for managing Konnect entities from within the Kubernetes clusters. This means users can now manage their Konnect configurations declaratively, through Kubernetes resources powered by Kong’s Custom Resource Definitions. For example, here’s how you can use a KongConsumer resource in Kubernetes to configure a Consumer in Konnect.
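A sketch of such a manifest (names here are illustrative; see the KGO docs for the exact schema) might look like this:

```yaml
apiVersion: configuration.konghq.com/v1
kind: KongConsumer
metadata:
  name: demo-consumer
  namespace: default
username: demo-consumer
spec:
  controlPlaneRef:
    type: konnectNamespacedRef
    konnectNamespacedRef:
      # A KonnectGatewayControlPlane in the same namespace.
      name: demo-control-plane
```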
You can read more about using the Operator to declaratively manage Konnect in the Kong Gateway Operator docs. In this blog post, we’ll go through the design of this feature and how we’ve built it.
API design
In order to interact with Kubernetes using types that model Konnect entities, we have to define a CRD for each type that the operator supports. This gives users strongly typed resources to work with.
There are several CRDs already supported by KIC. These are defined in the configuration.konghq.com API group and could potentially be reused for Konnect entity support, but we'd need to clearly draw a line between the types supported by KIC, the types supported by the operator managing Konnect's objects, and the types supported by both.
An alternative was to define a new, isolated API group dedicated to Konnect. This would allow for a clear partition between Konnect and self-hosted Kubernetes configuration, but it would require users to learn, install, and manage yet another set of CRDs. We wanted to avoid that, so we decided to reuse the existing CRDs wherever possible.
We've decided to draw this line at the Konnect product level, following two rules:
Anything that lives as part of a control plane’s configuration is still going to use configuration.konghq.com API Group,
Anything else that constitutes an entity on its own in Konnect will live in a brand new API Group: konnect.konghq.com
This means that apart from the newly introduced CRDs, users will be able to use types that they are already familiar with, like KongConsumer, KongPlugin, etc.
These new CRDs are available at https://github.com/Kong/kubernetes-configuration. Types from configuration.konghq.com are backward compatible with KIC’s CRDs that many of you are already familiar with.
Control plane references
In order to satisfy the above-described approach, we’ve decided to extend each type that lives as part of the control plane’s configuration, with a control plane reference. To continue supporting resources without this field set, we made it optional with the default type set to kic — indicating that the object is to be reconciled by KIC. So a KongConsumer can still be used as:
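For example (a minimal sketch; the fields follow KIC's familiar KongConsumer schema):

```yaml
apiVersion: configuration.konghq.com/v1
kind: KongConsumer
metadata:
  name: demo-consumer
  annotations:
    kubernetes.io/ingress.class: kong
username: demo-consumer
```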
In order to make your resources refer to a locally managed Konnect ControlPlane, you can use the control plane reference with type set to konnectNamespacedRef, like so:
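A sketch of that reference (the control plane name here is illustrative):

```yaml
spec:
  controlPlaneRef:
    type: konnectNamespacedRef
    konnectNamespacedRef:
      # Name of a KonnectGatewayControlPlane in the same namespace.
      name: demo-control-plane
```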
The next piece of the puzzle is the actual communication with Konnect and the underlying Konnect API types. For this, we use our sdk-konnect-go package.
Thanks to Speakeasy's Go SDK generation, this SDK is almost entirely generated from our OpenAPI spec, which is the source of truth for our APIs. This allows us to focus on the robustness of our Kubernetes operator and leave the API specification to the teams that own it.
We leverage Go types generated from the OpenAPI specification in the underlying CRD types where possible so that we get autogenerated CRDs when there’s a change in API.
While some of the types do not require validation beyond what the generated types already express, some do. For those, we resort to redefining the types and adding kubebuilder markers, which give us native Kubernetes validation of the CRD fields.
```go
// KongCredentialJWTAPISpec defines the specification of a JWT credential.
type KongCredentialJWTAPISpec struct {
	// Algorithm is the algorithm used to sign the JWT token.
	// +kubebuilder:default=HS256
	// +kubebuilder:validation:Enum=HS256;HS384;HS512;RS256;RS384;RS512;ES256;ES384;ES512;PS256;PS384;PS512;EdDSA
	Algorithm string `json:"algorithm,omitempty"`

	// ...
}
```
We hope to solve this problem soon and to have one source of truth for our types.
Reconciliation loop’s design
Following the Kubernetes operator pattern, as we've done for every other controller in our operator, we've designed a control loop that takes the provided resource spec (the desired state) and works to bring the Konnect entity state (the observed state) in line with it.
The idea behind it is pretty simple (simplified code):
```go
if result, err := hasReference(ctx, client, obj); err != nil {
	return ctrl.Result{}, err
} else if !result.IsZero() {
	// Requeue, an update has been performed.
	return result, nil
}

if delTS := obj.GetDeletionTimestamp(); !delTS.IsZero() {
	// Handle deletion, remove object from Konnect, clean up finalizer.
	// ...
	return ctrl.Result{}, nil
}

if s := ent.GetKonnectStatus(); s == nil || s.GetKonnectID() == "" {
	// Try to create the object in Konnect.
	// Handle conflicts, adopt remote object with matching UUID metadata.
	// ...
	return ctrl.Result{}, nil
}

// Object has already been created.
timeFromLastUpdate := time.Since(condProgrammed.LastTransitionTime.Time)
if timeFromLastUpdate <= syncPeriod {
	return ctrl.Result{
		RequeueAfter: syncPeriod - timeFromLastUpdate,
	}, nil
}

// Enforce the config in Konnect.
Update(ctx, sdk, obj)
```
Fortunately, making the controller continuously reconcile an object is trivial using controller-runtime’s Result.RequeueAfter.
One important thing to note is that the operator always defers to Konnect as the source of truth when conflicts arise. This is especially important when using cached clients in your controllers, which is the default with controller-runtime's client.Client.
Code generation vs Go’s generics
In order to support the plethora of object types that the Konnect APIs allow users to configure, we needed an approach that is type agnostic (to a degree), so that we wouldn't have to implement the same controller separately for each type.
We decided to use Go's generics: they let us keep the code strongly typed, give us full control over type constraints, and minimize the amount of code in our repository.
Our implementation uses two core type constraints:
```go
type SupportedKonnectEntityType interface {
	konnectv1alpha1.KonnectGatewayControlPlane |
		configurationv1.KongConsumer |
		... // NOTE: Omitting for brevity
}
```
and
```go
type EntityType[T SupportedKonnectEntityType] interface {
	*T
	client.Object

	// Additional methods which are used in reconciling Konnect entities.
	GetConditions() []metav1.Condition
	...
}
```
The distinction between the two came from the fact that some functions (that we had no control over) required pointers to work with, while others required values.
This in turn allowed us to write our reconciler like so:
```go
type KonnectEntityReconciler[
	T constraints.SupportedKonnectEntityType,
	TEnt constraints.EntityType[T],
] struct {
	Client client.Client
	... // NOTE: Omitting for brevity
}

func (r *KonnectEntityReconciler[T, TEnt]) Reconcile(
	ctx context.Context,
	req ctrl.Request,
) (ctrl.Result, error) {
	var (
		e   T
		ent = TEnt(&e)
	)
	if err := r.Client.Get(ctx, req.NamespacedName, ent); err != nil {
		return ctrl.Result{}, client.IgnoreNotFound(err)
	}
	...
}
```
We were able to then instantiate a reconciler for each of the supported types. Setting up watch criteria for each type at this point was a matter of:
```go
func ReconciliationWatchOptionsForEntity[
	T constraints.SupportedKonnectEntityType,
	TEnt constraints.EntityType[T],
](
	cl client.Client,
	ent TEnt,
) []func(*ctrl.Builder) *ctrl.Builder {
	// We couldn't avoid a type switch due to the limitations of Go generics.
	switch any(ent).(type) {
	case *konnectv1alpha1.KonnectGatewayControlPlane:
		return KonnectGatewayControlPlaneReconciliationWatchOptions(cl)
	...
	}
}
```

These watch options are then applied when building the controller:

```go
b := ctrl.NewControllerManagedBy(mgr)
for _, dep := range ReconciliationWatchOptionsForEntity(r.Client, ent) {
	b = dep(b)
}
return b.Complete(r)
```
Generics limitations
Generics can only get you so far. They're very useful for deduplicating code and can immensely help with making your codebase smaller, but they also have their limitations.
The biggest drawback is that you can't specialize your code without resorting to type switches or interface assertions to verify whether a particular reconciliation step should be performed for the type in question.
For example: most of our types have a Control Plane reference, but not all of them (Control Planes themselves do not reference each other, with the exception of Control Plane groups). To check this, we can either check each type individually, which is error prone, or generate a GetControlPlaneRef() method on each of those types and perform a type assertion verifying that the object implements it:
```go
type EntityWithControlPlaneRef interface {
	GetControlPlaneRef() *configurationv1alpha1.ControlPlaneRef
}

func getControlPlaneRef[
	T constraints.SupportedKonnectEntityType,
	TEnt constraints.EntityType[T],
](
	e TEnt,
) mo.Option[configurationv1alpha1.ControlPlaneRef] {
	entWithControlPlaneRef, ok := any(e).(EntityWithControlPlaneRef)
	if !ok {
		return mo.None[configurationv1alpha1.ControlPlaneRef]()
	}
	cpRef := entWithControlPlaneRef.GetControlPlaneRef()
	if cpRef == nil {
		return mo.None[configurationv1alpha1.ControlPlaneRef]()
	}
	return mo.Some(*cpRef)
}
```
This is far from ideal but until Go implements generics specialization, we have to live with it.
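Stripped of the Konnect-specific types, the same assertion pattern can be shown as a runnable sketch (all names here are hypothetical):

```go
package main

import "fmt"

type A struct{ Ref string } // a type that carries a reference
type B struct{}             // a type that does not

func (a *A) GetRef() string { return a.Ref }

type withRef interface{ GetRef() string }

// getRef "specializes" generic code via an interface assertion:
// it returns the reference only if the concrete type implements withRef.
func getRef[T any](e *T) (string, bool) {
	if w, ok := any(e).(withRef); ok {
		return w.GetRef(), true
	}
	return "", false
}

func main() {
	r, ok := getRef(&A{Ref: "cp-1"})
	fmt.Println(r, ok) // cp-1 true
	_, ok = getRef(&B{})
	fmt.Println(ok) // false
}
```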
Conclusion
We hope this was insightful and that you now have a good picture of how Kong Gateway Operator manages Konnect entities from Kubernetes clusters.
If you'd like to read more about the topic, check out the Kong Gateway Operator documentation. We're always looking for feedback, so if there's anything you'd like to share with us, please open an issue or reach out through the support channel.