Enterprise
November 22, 2021
6 min read

Faster Microservice-to-Microservice encrypted communication with Kong Mesh and Intel

Claudio Acquaviva
Principal Architect, Kong
Topics: Service Mesh, Microservices


Service mesh is an infrastructure layer that has become a common architectural pattern for transparent service-to-service communication.

By combining a service mesh with Kubernetes, a container orchestration framework, you can form a powerful platform for your microservices cluster, addressing the typical technical requirements of highly distributed environments.

A service mesh is implemented through a sidecar configuration: a proxy instance deployed alongside each service instance. The proxy controls interservice communication, security, and monitoring.

Meanwhile, zero-trust security becomes even more critical as you transition to a microservice architecture. To implement zero trust, you can use mutual TLS (mTLS) to encrypt every request your services make.
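In Kong Mesh, which is built on Kuma, mutual TLS can be enabled declaratively on the Mesh resource. A minimal sketch using the builtin certificate authority backend (the backend name `ca-1` is just an illustrative choice):

```yaml
type: Mesh
name: default
mtls:
  enabledBackend: ca-1
  backends:
    - name: ca-1
      type: builtin
```

With a configuration like this, the control plane provisions certificates for each service and the sidecars encrypt traffic between them, without any change to the application code.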

Microservices and zero-trust security can be an opportunity to make your systems more secure than they were as monolithic applications. However, the encrypted tunnel implemented with mTLS increases microservice-to-microservice latency.

This post will explore how to combine Kong and Intel technologies to get the best of both worlds: encrypted tunnels with faster microservice communication response times in your service mesh deployment.

The Business Problem

Microservice implementations run in very dynamic environments; over time, most organizations end up managing multiple instances of the same microservice. This situation can arise from:

  1. Throughput: depending on incoming request volume, a microservice may scale to a higher or lower number of instances.
  2. Canary releases.
  3. Blue/green deployments.

In short, microservice-to-microservice communication has specific requirements and issues to solve.

[Image: microservices1]

There are several technical challenges in this example. One of Microservice #1's main responsibilities is to balance the load across all Microservice #2 instances; as such, Microservice #1 has to implement service discovery and load balancing.

Microservice #2, in turn, has to implement some service registration capability to tell Microservice #1 when a new instance is available.

Additionally, microservice policies should be developed for the following:

  • Service Tracing
  • Service Logging
  • Service ACL (Access Control List), and more

[Image: microservices2]

Beware of these potential problem areas:

  • Microservices implement a considerable amount of code unrelated to the business logic they were originally created to handle.
  • Multiple microservices implement similar capabilities in non-standardized ways.

A microservices development team should focus only on implementing business logic; they should not be concerned with the technical needs of the distributed environment. Moreover, building non-functional capabilities into each application leaves no centralized point of governance and control.

Secure Data Transfer and Zero-Trust Security

Besides all the policies described above, one in particular plays an important role: establishing microservice-to-microservice communication over a secure, encrypted tunnel.

The best way to achieve this is to implement a zero-trust security environment where the communication tunnel natively provides data authenticity, integrity, and privacy.

In fact, a zero-trust security environment provides a scalable way to implement security while managing microservice-to-microservice connections, ensuring authentication, authorization, and encryption:

  • Authentication to identify the microservice.
  • Authorization to control the communication between the microservices.
  • Encryption to prevent third parties from viewing the data in transit.
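In Kuma-based meshes such as Kong Mesh, the authorization piece can be expressed declaratively as a TrafficPermission policy. A sketch (the service names are hypothetical):

```yaml
type: TrafficPermission
name: allow-ms1-to-ms2
mesh: default
sources:
  - match:
      kuma.io/service: microservice-1
destinations:
  - match:
      kuma.io/service: microservice-2
```

Only connections matching a permission like this are allowed; everything else between the two services is denied by the sidecars.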

Check out the “The Importance of Zero-Trust Security When Making the Microservices Move” ebook to learn more.

Service Mesh Architecture Pattern as the Solution

The purpose of the service mesh architecture pattern is to extract the standard non-functional capabilities from each microservice and implement them as an external component.

A service mesh pattern is defined by two layers: a control plane and a data plane.

  • Control plane: responsible for managing the policies that drive microservice-to-microservice communication.
  • Data plane: responsible for implementing and enforcing the policies defined by the control plane. It’s the external component where all the non-functional capabilities we’ve described are implemented. The data plane is composed of proxies deployed as sidecars.
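Observability policies such as tracing follow the same declarative model. A sketch of a Kuma TrafficTrace policy, assuming a tracing backend named `jaeger-collector` has already been defined on the Mesh resource:

```yaml
type: TrafficTrace
name: trace-all
mesh: default
selectors:
  - match:
      kuma.io/service: '*'
conf:
  backend: jaeger-collector
```

The control plane publishes this policy to every sidecar, which then emits trace spans for the traffic it proxies.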

The diagram below shows the relationship between the two layers. Flow #1 shows the admin team using the control plane to define policies. All policies are published to the existing sidecars.

[Image: servicemesh]

Flow #2 shows the sidecars applying the previously published policies to microservice communication traffic and reporting their current status back to the control plane. Two characteristics of the sidecars are worth noting:

  1. Each microservice has its own sidecar handling all of its incoming and outgoing traffic.
  2. The sidecar is typically implemented as a transparent proxy: the microservice doesn’t know the sidecar is running alongside it. The sidecar intercepts the traffic and applies the policies previously defined and published by the control plane.
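On Kubernetes, this transparent sidecar injection is typically opt-in per namespace. With Kuma/Kong Mesh, for example, marking a namespace is enough for the control plane to inject a sidecar into every pod deployed there (the namespace name is illustrative, and depending on the Kuma version this may be a label rather than an annotation):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: kuma-demo
  annotations:
    kuma.io/sidecar-injection: enabled
```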

The picture below describes the new Microservice communication and policy enforcement scenario:

[Image: mtls]

From the secure data transfer perspective, the sidecars are also responsible for implementing the zero-trust security environment, ensuring that the communication between your services is encrypted using mutual TLS (mTLS).

While the mTLS tunnel brings a much more secure environment to our microservices and service mesh, it also increases latency. That makes it a great opportunity to apply specific technologies to achieve optimal results even with encryption in place.

The “Service Mesh and the Natural Evolution of Microservices” ebook presents the main ideas behind the service mesh architecture pattern and the drivers for adopting it in your microservices project.

Enter Kong Mesh

Kong Mesh is an enterprise-grade service mesh for multi-cloud and multi-cluster deployments on both Kubernetes and VMs. Built on the open source projects Kuma and Envoy, it ensures fast, reliable, and secure communication among application infrastructure services, providing and managing critical capabilities such as load balancing, service discovery, observability, and access control.

Kong Mesh is a “universal service mesh” designed for hybrid deployments spanning both Kubernetes and VMs. Moreover, its control plane can manage multiple independent meshes at the same time.

[Image: kongmesh]

Intel Encryption Technologies

The 3rd Gen Intel® Xeon® Scalable processor (codenamed Ice Lake) introduced new Intel® Advanced Vector Extensions 512 (Intel® AVX-512) instructions to optimize cryptographic computation. Combined with Intel’s open source software libraries, such as the IPP Cryptography Library, the Multi-Buffer Crypto for IPsec Library (intel-ipsec-mb), Intel® QuickAssist Technology (Intel® QAT), and the OpenSSL engine, these solutions substantially improve the performance of crypto operations such as the TLS connection handshake.

These new components enable batch processing of multiple TLS private key operations in parallel. With the asynchronous private key processing mechanism available in both OpenSSL and BoringSSL, application software can submit handshake private key requests without waiting for one to return before submitting another; a callback is invoked for each request once it completes. Underneath, the multi-buffer crypto processing takes up to eight such asynchronously submitted private key operations and processes them in parallel using AVX-512 SIMD (single instruction, multiple data) instructions, greatly improving overall application performance.
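You can get a feel for asynchronous private key processing on any machine with a modern OpenSSL using its built-in benchmark; `-async_jobs` enables the asynchronous job pool. Note this is only an approximation of the mechanism described above: actual multi-buffer acceleration additionally requires an engine or provider wired to intel-ipsec-mb on supporting hardware, and results vary by CPU and OpenSSL build.

```shell
# Baseline: synchronous RSA-2048 private-key operations
openssl speed rsa2048

# Enable async mode and keep up to 8 private-key jobs in flight
openssl speed -async_jobs 8 rsa2048
```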

Intel is contributing the accelerated TLS handshake code upstream to the Envoy proxy project. Thus, all service mesh implementations using Envoy, such as Kuma, can leverage the enhanced performance directly in upcoming Envoy releases.

The diagram below compares the regular TLS handshake with the new asynchronous and enhanced one implemented with the new AVX-512 technologies.

[Image: icelake]

Conclusion – The best of both worlds – Service Mesh with fast encrypted communication

A service mesh implementation based on Kong Mesh and Intel Crypto NI provides a solid, scalable, and hybrid infrastructure for your microservices project without the burden of the slow connection times we usually face with encrypted tunnels.

In upcoming posts, we will show performance results comparing a Kong Mesh implementation with and without Intel’s encryption acceleration technologies.

Feel free to check additional capabilities provided by Kong to implement high-performance Service Meshes as well as Intel’s new Ice Lake processor related technologies.
