In Kong Mesh 1.2, we added a number of new features to help enterprises accelerate their service mesh adoption. One of the major new features was native Open Policy Agent (OPA) support within the product.
In the demo image above, you can see a number of actions taking place across a simple web application. These “actions” are ultimately various GET, POST, and DELETE methods (API calls) across the tiers of our microservice application. I’ve used Kong Mesh’s native OPA support to secure communication between these calls. In this post, we’ll walk through how this is configured and consumed.
What is Open Policy Agent?
Open Policy Agent, or OPA, is a policy engine that allows developers and operators to apply rules to the common ways applications interact within an environment. Kong Mesh’s integration provides a common way to drive these policies across all service mesh workloads (Kubernetes, virtual machines, and container workloads). A simple example of an OPA policy could be using the Kubernetes admission controller to control whether a pod is allowed to be scheduled, based on specific characteristics of the cluster or the submitting user. A more complex example could be applying a policy that filters DNS queries and responses against CoreDNS. OPA is highly extensible and comes with a huge set of integrations out of the box across a growing number of tools in the application landscape. By creating a policy-driven approach to interactions within application platforms, it lets users get very granular in how they apply authentication and authorization policies to their workloads.
For environments that require greater governance or management, Styra provides an enterprise offering for OPA. We’ll dive deeper into how Styra provides a centralized management option for OPA at a later date, but a key capability to mention is that Kong Mesh also supports integrating directly with Styra. Policies can either be applied directly to Kong Mesh (standalone), or Kong Mesh can reach out to a Styra server to pull down the applicable policies.
Let’s jump in on how to get started using it with a simple web application.
Our Service Mesh Environment
I’m running Kong Mesh in an Amazon Elastic Kubernetes Service (EKS) environment for this walk-through, as a single-zone deployment (all in the same cluster). The application we’re working with is one I threw together for various learning scenarios. It’s a four-tier application with a Frontend, an API tier, a Redis database, and a PostgreSQL database.
We’re exposing our frontend through the Kong Ingress Controller (KIC), which provides inbound access to workloads living within our service mesh. In this case, it hands traffic off to our React-based Frontend service. The Frontend service initiates a series of GET, POST, and DELETE requests against the Python-based API tier. POSTs are run through a Redis queue service (Celery) to properly batch the requests. DELETEs and GETs are run directly from the API to the PostgreSQL database service.
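As a rough sketch of that entry point, an Ingress routing external traffic to the frontend might look something like the following (the service name, port, and hostname here are placeholders, not the exact manifest from this environment):

```yaml
# Hypothetical Ingress handing traffic to the Frontend service via KIC.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: frontend
  annotations:
    kubernetes.io/ingress.class: kong
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: frontend   # placeholder service name
            port:
              number: 80     # placeholder port
```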
From a “zero-trust security” standpoint, with Kong Mesh enabled, we get mTLS out of the box as well as TrafficPermissions (service mesh policies that control communication between services). These traffic policies allow or disallow communication between Data Plane objects in Kong Mesh. Data Plane objects are effectively the sidecars that live alongside the deployed resources. These traffic policies really only understand “allow” vs “deny”. Enter Open Policy Agent.
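For reference, a TrafficPermission expresses exactly that coarse allow decision between Data Plane objects. A minimal sketch, assuming hypothetical `kuma.io/service` tags for our tiers:

```yaml
# Allow the Frontend data plane to talk to the API data plane.
# The service tag values below are illustrative, not taken from this cluster.
apiVersion: kuma.io/v1alpha1
kind: TrafficPermission
mesh: default
metadata:
  name: frontend-to-api
spec:
  sources:
  - match:
      kuma.io/service: frontend_default_svc_80
  destinations:
  - match:
      kuma.io/service: api_default_svc_8080
```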
Getting Started with OPA Policies
Using OPA’s Envoy filters, we can start to apply policies to the interactions between our Frontend service and the API tier at a more context-sensitive level. We can look at the content of the actual request and make decisions based on that content. Let’s create a simple policy and dissect it.
```yaml
apiVersion: kuma.io/v1alpha1
kind: OPAPolicy
mesh: default
metadata:
  name: opa-1
spec:
  selectors:
  - match:
      kuma.io/service: '*'
  conf:
    agentConfig:
      inlineString: |
        decision_logs:
          console: true
    policies:
    - inlineString: | # one of: inlineString, secret
        package envoy.authz

        import input.attributes.request.http as http_request

        default allow = false

        allow {
          action_allowed
        }

        action_allowed {
          http_request.method == "GET"
        }
```
There are a few things happening in this policy that we should talk through:
We’re creating a policy named “opa-1” using Kong Mesh’s new OPAPolicy CRD. This allows us to store these policies alongside our application code.
We’ve applied this policy to all services in the environment. We could filter this down so that it only applied to our frontend if needed.
We’re adding a configuration for the agent that outputs the decision log to the pod’s console.
We’re importing a package to help us evaluate the request. In this case we’re bringing in input.attributes.request.http which lets us look at details of the http request.
We’re adding the policy inline. It could instead be supplied as a secret, which matters especially when a policy embeds sensitive values (JWT signing secrets, for example) that you would want to protect.
We’re creating a default-deny policy that sets the traffic decision to “allow” based on an evaluation of the “action_allowed” rule, which returns true if the incoming request is a “GET”.
When we apply this manifest to the workload cluster, the policy is created within our Kong Mesh environment. From here, the OPA agent that is running alongside our application (within the Kong Mesh process) is refreshed with the new configuration. This gives a great experience since pods do not have to be restarted in order to start working with the OPA policy. Once applied, we can view the policy directly within the Kong Mesh UI under the policies section. As we iterate on our policy, this UI can be refreshed to show changes – and the new policy will apply to our workload seamlessly.
The policy we applied is admittedly very basic. Ultimately, all we are telling the application is that it is allowed to respond to GET requests. This means that anything involving a POST or a DELETE (or any other method, for that matter) is going to fail. When looking at the logs, you’ll see a 403 Forbidden error code for anything other than a GET.
Since we’ve applied a default action of denying access, anything not explicitly handled is forbidden. Let’s make some modifications to our OPA policy to allow more communication.
Enhancing the OPA Policy with JWT Support
The application I’m using supports issuing a JSON Web Token (JWT) to help with authorization across the application. In order to receive a valid JWT, we need to issue a POST request to the loginEndpoint API. Once we’ve enabled the ability to POST, we’ll also want to configure OPA to decode the issued JWT and provide us with the authorization details for the request. We’ll inspect the decoded JWT to determine whether the user who submitted the login is authorized to make POST requests.
Let’s update our OPA policy to support this. We’ll update our OPA policy to be the following:
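The manifest itself isn’t reproduced here, but based on the description below, the Rego portion of the updated policy would look roughly like this. The helper names and the placeholder secret are illustrative; the login path comes from the walk-through:

```rego
package envoy.authz

import input.attributes.request.http as http_request

default allow = false

# Demo only: in production, store this signing secret in a
# Kubernetes secret rather than inline in the policy.
jwt_secret := "<shared-signing-secret>"

# Helper: pull the bearer token out of the Authorization header.
bearer_token := t {
    v := http_request.headers.authorization
    startswith(v, "Bearer ")
    t := substring(v, count("Bearer "), -1)
}

# Helper: verify the token's signature and decode its claims.
token := {"valid": valid, "payload": payload} {
    [valid, _, payload] := io.jwt.decode_verify(bearer_token, {"secret": jwt_secret})
}

allow {
    action_allowed
}

# GETs remain open to everyone.
action_allowed {
    http_request.method == "GET"
}

# Allow login POSTs so that a JWT can be issued.
action_allowed {
    http_request.method == "POST"
    http_request.path == "/api/loginEndpoint"
}
```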
Our policy has been modified quite a bit! First, we’ve added a couple of helper functions that parse out the token. Note that you’ll see a secret I’ve published within this configuration file. In production scenarios, you would absolutely want to protect this value by storing it in a secret. This value allows the JWT to be verified and decoded into its plaintext claims, which is what the two new helper functions do.
I’ve also added a new set of policy stanzas that allow login requests to the appropriate path (“/api/loginEndpoint”) via a POST method. We determine this by looking at the request attributes, specifically the http.path value. When we apply this policy, the OPA agent will refresh once again, and the policy will be applied to the running workloads.
When I select the “Login” button within the application and fill in my credentials, you’ll note that I am able to POST successfully. If we review the console logs for the system, we can see that a JWT has been issued. We can further validate this by checking the browser’s local storage and confirming the token has been added.
Now that we have the ability to get a JWT, this opens up additional things we can do with the token. In the case of this application, we’re giving it an expiration timestamp as well as a userid value. These values are generated by our API tier during the login process. Let’s make a couple final changes to our policy to allow us to take advantage of this data.
For the sake of brevity, we’ll add the following stanzas to the bottom of our OPA policy and reapply the manifest.
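The exact stanzas aren’t shown here, but given the description that follows, they would look roughly like this (assuming the token helper defined earlier in the policy, and with the user_id value taken from the walk-through):

```rego
# Allow POSTs from an authenticated user whose decoded JWT
# carries the expected user_id claim.
action_allowed {
    http_request.method == "POST"
    token.valid
    token.payload.user_id == "cody"
}

# The same check, applied to DELETEs.
action_allowed {
    http_request.method == "DELETE"
    token.valid
    token.payload.user_id == "cody"
}
```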
We’ve added two more policy stanzas within this addition that enable POST and DELETE methods when the decoded JWT includes a user_id value of “cody” (which is me, hi!).
Once we apply this policy, our application will be fully functional, allowing us to log in, post, and delete messages. This communication is secured across the tiers using JWTs that are distributed from within our application but decoded and evaluated by OPA living right alongside our service mesh workloads.
This walk-through has taken us through a basic use case of leveraging OPA policies to add authorization and authentication controls across a multi-tier application. This capability allows us to intelligently control the way the application communicates, versus the typical zero-trust approach of broadly allowing or disallowing communication. Envoy has many more filters that can be used to evaluate communication between tiers, and there’s an ever-growing number of policies that extend beyond Envoy communication to be explored.
This feature was just added in Kong Mesh 1.2, and we are already busy planning ways to extend it to new areas within the service mesh. Stay tuned for more!