May 13, 2024

Enterprise-Grade Service Mesh: A Reference Architecture with OpenShift, Istio, and Kong

Claudio Acquaviva
Principal Architect, Kong

The service mesh architecture pattern has become a de facto standard for microservices-based projects. In fact, not just microservices but all components of an application should be under the mesh's control, including databases, event-processing services, and more.

It's critical to analyze end-to-end service mesh infrastructure from two main perspectives:

  • The traffic within an application: Also called the east-west traffic, this is the main purpose of a service mesh implementation. We should be able to apply multiple policies to define how the service mesh components should talk to each other considering security as well as requirements concerning traffic control, observability, and more.
  • The service mesh exposure: Typically, the mesh components and the actual communication among them are protected from external consumers. However, we have to expose at least one of its components so the mesh can be consumed. That's the role of a specific mesh component responsible for the north-south ingress traffic. This ingress traffic component is responsible not just for the mesh exposure, which is its natural purpose, but for implementing specific policies we should have in this layer, including multiple consumer authentication mechanisms (e.g., API Key, OIDC, mTLS), request throttling, mesh consumption metrics, etc.

In this blog post, we’ll present and describe a service mesh reference architecture based on Red Hat and Kong technologies and products, where the main actors, Istio Service Mesh and Kong Ingress Controller, run on a Red Hat OpenShift Container Platform (OCP) Cluster.

Service mesh platform and reference architecture

One of the most robust platforms available today to implement and deploy applications and service meshes is Red Hat OpenShift Container Platform (OCP).

Based on Kubernetes, Red Hat OCP provides a trusted, comprehensive, and consistent application platform for hybrid cloud that is capable of running single or multi-cluster service meshes. Below, we detail the implementation of Kong technologies (Konnect and Kong Ingress Controller) with Red Hat OpenShift for building modern applications.

Taking the perspectives described previously, on top of Red Hat OCP, the service mesh infrastructure consists of two products:

  • Red Hat OpenShift Service Mesh: Based upon Istio Service Mesh, this is responsible for the actual service mesh providing all functions required such as monitoring, tracing, circuit breakers, load balancing, access control, and more.
  • Kong Ingress Controller (KIC): Also considered a service mesh component, this exposes the Istio Service Mesh with an extensive collection of policies like authentication, request transformation, response caching, rate limiting, traffic monitoring, logging, and tracing.

We should also consider two more layers implemented by:

  • Keycloak: Integrated with the KIC Gateway, it plays the Identity Provider role, externalizing the OIDC-based authentication and authorization processes for external consumers.
  • Kiali: As the Istio Service Mesh monitoring and management console. Kiali uses Prometheus and Grafana to generate the topology graph, show metrics, calculate health, offer advanced metrics queries, and more.

The figure below illustrates the reference architecture:

Modern microservice implementation

Typically, as a microservice project progresses, we have to manage three types of connections:

  • Edge connectivity: This is the traffic we receive from external consumers. This connectivity is implemented and controlled by Kong Ingress Controller (KIC).
  • Cross-app connectivity: Applications will talk to each other, so it’s important to control this communication as well. Another instance of KIC is deployed to control it with specific and application-oriented policies. Note: This isn’t depicted in the diagram.
  • In-app connectivity: This is where the service-to-service connectivity is implemented by the service mesh.

Service mesh and ingress controller policies

In an enterprise-class application environment, we typically have policies defined in both layers. Generally speaking, the ingress controller should be responsible for coarse-grained policies controlling application consumption, while the service mesh controls the fine-grained policies related to the microservices.

A good example could be the authentication and authorization processes. While the authentication processes tend to be implemented in a centralized environment, typically by the ingress controller, microservices authorization processes are inherently distributed. Typically, there are two levels of abstraction for access control policies:

  • Low granularity: This is focused on generic security policies. For example, access time, service required, IP address of the request, etc. The ingress controller layer also handles this authorization level.
  • High granularity: This is access control performed by microservices in relation to their specific resources. For example: operation within a service (read or write). The service mesh typically implements this authorization level.
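As a sketch of a high-granularity policy, here's how an Istio AuthorizationPolicy might restrict a microservice to read-only operations. The service name and namespace are illustrative, not from a specific deployment:

```yaml
# Illustrative high-granularity access control: allow only GET (read)
# requests to Pods labeled app: reviews in the default namespace.
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: reviews-read-only
  namespace: default
spec:
  selector:
    matchLabels:
      app: reviews
  action: ALLOW
  rules:
  - to:
    - operation:
        methods: ["GET"]
```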

In summary, we, as business and technical architects, should be able to define multiple policies in both layers.

Red Hat OpenShift Container Platform

Red Hat OCP is available on-prem (bare metal or virtualized) and on Amazon Web Services (AWS), Google Cloud Platform (GCP), IBM Cloud, Microsoft Azure, Nutanix, Red Hat OpenStack Platform, and VMware Cloud (VMC) on AWS. It's also available as a managed service on major public cloud providers.

As an example, here's the OCP console after installing Red Hat OpenShift Service on AWS (ROSA):

You can check the official Red Hat OCP documentation to get your cluster running on-prem, on AWS (ROSA), or on any other platform. Also, check out the blog post A Guide to Enterprise Kubernetes with OpenShift for an introduction to OpenShift, including a diagram with its main components.

Red Hat OpenShift Service Mesh

Istio, as one of the main service mesh implementations available today, provides several features to control how microservices and other application components talk to each other:

  • Traffic management: This is the routing and rules configuration to control the flow of traffic between services.
  • Security: This is the underlying microservice communication channel that manages authentication, authorization, and encryption of service communication.
  • Observability: Istio supports the three main pillars of observability (metrics, tracing, and log processing), generating telemetry for each of them.

Architecture

The Istio Service Mesh architecture defines two main layers:

  • Data plane: Running as sidecars in Kubernetes Pods, the data plane is implemented as transparent proxies that intercept all network traffic between microservices. The proxies also collect and report telemetry to the control plane, the second layer of the architecture.
  • Control plane: The control plane is used by platform administrators to define policies and publish them to the data planes.

Here's a diagram describing the architecture. Please check out the documentation to learn more about the other Istio components.

Kiali

Kiali is the official Istio Service Mesh console. Through the integration with Grafana, Prometheus, and Jaeger, Kiali provides capabilities to configure and monitor the service mesh. Here's a screenshot of Kiali's landing page:

Service mesh policies

As stated before, the service mesh is responsible for defining and enforcing policies that control microservice-to-microservice communication. As an example, let's consider the Istio Bookinfo Application. It's a basic online book catalog application where a "Product" microservice sends requests to the "Reviews" and "Details" microservices to build a page and respond to its consumers. This diagram presents the architecture:

Istio provides Kubernetes CRDs for policy definition. The following Istio DestinationRule and VirtualService declarations define a request routing policy to be applied to the three versions of the "Reviews" microservice.
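The original declarations aren't reproduced here; a minimal sketch consistent with the Bookinfo sample, defining the three subsets but routing traffic only to versions v1 and v2, might look like:

```yaml
# DestinationRule: define the three "reviews" subsets by Pod label.
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
  - name: v3
    labels:
      version: v3
---
# VirtualService: split traffic 50/50 between v1 and v2; v3 gets none.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: 50
    - destination:
        host: reviews
        subset: v2
      weight: 50
```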

As we consume the application, Kiali, the service mesh monitoring component, starts building and refreshing graphs showing the service mesh microservices in action. Note that, as specified in the VirtualService policy, only versions v1 and v2 of the "Reviews" microservice have received requests.

Check the Istio documentation to learn more about the extensive list of policies you can apply and have your service mesh enforce.

Kong Konnect and KIC

The picture above includes a Kong layer, deployed in a different Kubernetes namespace with two Pods. This is the Kong Ingress Controller (KIC) for Kubernetes deployment, which exposes the Bookinfo application.

Kong Ingress Controller for Kubernetes is an ingress controller for the Kong Gateway. It allows you to configure and run Kong Gateway using Ingress or Gateway API resources created inside a Kubernetes cluster.

Beyond proxying the traffic coming into a Kubernetes cluster, KIC also lets you configure plugins, load balancing, health checking, and leverage all that Kong Gateway offers in a standalone installation.

Kong Konnect

Kong Konnect is an API lifecycle management platform that is delivered as a service. The management plane is hosted in the cloud by Kong, while the runtime environments are deployed in your own infrastructure.

Associating your KIC deployment with Kong Konnect is a read-only integration: it allows you to view the runtime entities, such as routes and applications, created from your Kubernetes resources in Kong Konnect.

Considering the Kiali diagram again, KIC's controller is sending requests to an external component. This component is Kong Konnect, which receives all operational data from KIC.

The following picture shows a KIC-based Kong Konnect control plane:

Architecture

Let's deep dive into the KIC deployment shown in the Kiali diagram. KIC is made up of two high-level components:

  • Controller: This synchronizes the configuration from Kubernetes to Kong Gateway.
  • Kong Gateway: This is the core proxy that handles all the traffic.

Kong Ingress Controller configures Kong Gateway using ingress or Gateway API resources created inside a Kubernetes cluster.

The components are installed as two distinct but connected Kubernetes deployments implementing a topology called Gateway Discovery, where the controller uses Kubernetes service discovery to discover the Kong Gateway Pods.

Gateway API

Kong Ingress Controller fully supports the Kubernetes Gateway API spec to configure networking in Kubernetes. The Gateway API project is the successor to the Ingress API, supporting additional types of routes such as TCP, UDP, and TLS in addition to HTTP/HTTPS.

The Gateway API spec defines two main resources:

  • GatewayClass: This represents the class of gateway, in our case, KIC's Controller.
  • Gateway: This represents KIC's Kong Proxy instance that handles traffic for Gateway API routes.

Every gateway refers to a GatewayClass. Here's an example of a GatewayClass declaration. KIC reconciles any resources attached to a GatewayClass that has a spec.controllerName of "konghq.com/kic-gateway-controller".

GatewayClass
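A minimal sketch of such a declaration, assuming the unmanaged mode described in Kong's documentation:

```yaml
# GatewayClass reconciled by KIC; spec.controllerName must match
# "konghq.com/kic-gateway-controller" for KIC to pick it up.
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
  name: kong
  annotations:
    konghq.com/gatewayclass-unmanaged: "true"
spec:
  controllerName: konghq.com/kic-gateway-controller
```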

And here's a Gateway declaration:

Gateway
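A minimal sketch, with an HTTP listener on port 80 and referring to a GatewayClass named "kong" (the names are illustrative):

```yaml
# Gateway backed by KIC's Kong proxy, listening for HTTP on port 80.
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: kong
spec:
  gatewayClassName: kong
  listeners:
  - name: http
    protocol: HTTP
    port: 80
```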

The Gateway refers to the GatewayClass declared previously and defines a listener to port 80 for the HTTP protocol.

HTTPRoute

With the gateway in place, we can expose the applications using the HTTPRoute declaration. The parentRefs setting refers to the gateway declared previously:
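A sketch of such an HTTPRoute for the Bookinfo application, assuming a Gateway named "kong" and the sample's "productpage" service on port 9080:

```yaml
# HTTPRoute attaching to the "kong" Gateway and routing /productpage
# requests to the Bookinfo productpage service.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: bookinfo
spec:
  parentRefs:
  - name: kong
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /productpage
    backendRefs:
    - name: productpage
      port: 9080
```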

And now we are ready to consume it. Notice we are hitting the KIC's Kong Gateway component through the load balancer deployed for it.

Kong Konnect control plane

At the same time, the Kong Konnect Control Plane should show not just the Kong objects created based on the HTTPRoute declaration but also the analytics data KIC's gateway reported to it.

Kong Ingress Controller policies

As discussed before, the Ingress Controller is responsible for coarse-grained policies that control Service Mesh consumption requests. One of the most powerful capabilities provided by Kong Ingress Controller is the extensive list of available plugins for implementing such policies.

The plugins, fully supported by KIC, are grouped into the following categories:

  • AI: A new plugin collection implementing AI-related use cases such as AI Prompt Decorator, AI Prompt Guard, AI Response Transformer, and more
  • Authentication: OIDC, mTLS, API Key, LDAP, SAML, etc.
  • Security: Bot Detection, Open Policy Agent for Authorization policies, IP Restriction, etc.
  • Traffic Control: GraphQL Caching and Rate Limiting, Proxy Caching, Request Validator and Size Limiting, WebSocket support, Route by Header, etc. 
  • Serverless
  • Analytics & Monitoring: OpenTelemetry, Prometheus, Zipkin, etc.
  • Transformations: Request and Response Transformer, REST to gRPC, Kafka integration, etc.
  • Logging

KIC can be extended with custom plugins to implement new functionalities and supports WebAssembly to extend the Kong Gateway Proxy.

In fact, with its rich, ready-to-use plugin list and its WebAssembly support for new custom plugins, KIC offers a powerful solution for advanced ingress controller use cases. Moreover, with this approach, KIC completely replaces the default ingress gateway component provided by Istio.

As an example, here is a KongPlugin declaration, supported by KIC, that defines a rate-limiting policy.
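The original declaration isn't shown here; a minimal sketch using Kong's rate-limiting plugin (the limit value is illustrative) might look like:

```yaml
# KongPlugin applying Kong's rate-limiting plugin: at most 5 requests
# per minute, counted locally on each gateway node.
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: rate-limit-minute
plugin: rate-limiting
config:
  minute: 5
  policy: local
```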

This second policy uses the OIDC plugin to define an OAuth authorization code grant-based authentication policy. The identity provider, as depicted in our reference architecture, is Keycloak.
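A sketch of such a policy with Kong's openid-connect plugin; the Keycloak issuer URL, realm, and client credentials below are placeholders for your own environment:

```yaml
# KongPlugin enabling OIDC authentication against Keycloak using the
# authorization code grant. Replace issuer/client values with yours.
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: oidc-keycloak
plugin: openid-connect
config:
  issuer: https://keycloak.example.com/realms/kong/.well-known/openid-configuration
  client_id:
  - kong-client
  client_secret:
  - <your-client-secret>
  auth_methods:
  - authorization_code
```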

We apply the policies by adding annotations to the HTTPRoute created previously.
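A sketch of the annotated HTTPRoute, assuming KongPlugin resources named rate-limit-minute and oidc-keycloak exist (the names are illustrative):

```yaml
# The konghq.com/plugins annotation attaches KongPlugin resources,
# comma-separated, to all routes created from this HTTPRoute.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: bookinfo
  annotations:
    konghq.com/plugins: rate-limit-minute, oidc-keycloak
spec:
  parentRefs:
  - name: kong
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /productpage
    backendRefs:
    - name: productpage
      port: 9080
```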

After applying the plugins, if you try to consume the application you'll get redirected to Keycloak to present your credentials (user/password pair).

Conclusion

Red Hat OpenShift, Istio Service Mesh, and Kong Ingress Controller provide extensive and advanced capabilities for implementing an enterprise-class service mesh. This document is intended to provide an introductory perspective on a service mesh application deployment running on a Red Hat OpenShift Cluster.

Please refer to the Red Hat and Kong documentation portals to learn about many other topics.

Red Hat OpenShift, Istio, and Kong Ingress Controller can simplify service mesh implementation and management, improving security for all services infrastructure. You can get started with Kong Konnect for free!