We introduced the concept of Secrets Management in the Kong Gateway 2.8 release, and we’re happy to share that as of the recent [Kong Gateway 3.0 release](https://docs.konghq.com/gateway/latest/) we’re giving it the Kong seal of approval! That means you can rely on Secrets Management in production to manage all of your sensitive information.
Kong Gateway relies on lots of secrets to operate — everything from your database passwords to API keys used in plugins. You’ve previously been able to use Role Based Access Control (RBAC) to limit access to sensitive information in the admin API and Kong Manager, but it’s an “all or nothing” approach. Contributors can manage plugin configuration, or they can’t. Wouldn’t it be great if they could manage the configuration without seeing any secret values?
This is what Secrets Management enables.
Which Vaults are supported?
With this announcement, we officially support the following data sources for secrets:
Environment variables (OSS)
HashiCorp Vault (Enterprise)
AWS Secrets Manager (Enterprise)
Google Cloud Secrets Engine (Enterprise, Beta)
Kong abstracts each of the above systems into a set of nested keys. The only thing that changes is the vault identifier (hcv, aws, or env). For example, to access the password field of the postgres secret in HashiCorp Vault, you would use the following reference:
{vault://hcv/postgres/password}
The same secret stored in AWS Secrets Manager would look almost identical:
{vault://aws/postgres/password}
Finally, let’s take a look at what this would look like using the env vault:
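The original example isn’t reproduced here, but a minimal sketch (assuming, for illustration, an environment variable named POSTGRES holding the secret as a JSON payload) would look like this:

```shell
# Assumption for illustration: the secret is stored as a JSON payload
# in an environment variable named POSTGRES.
export POSTGRES='{"username":"kong","password":"demo"}'

# Kong would then resolve the same nested reference format:
#   {vault://env/postgres/password}

# You can check locally that the nested key resolves to "demo":
echo "$POSTGRES" | python3 -c 'import json,sys; print(json.load(sys.stdin)["password"])'
```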
Kong supports setting a JSON payload in an environment variable to provide nesting, so the same nested reference format works with the env vault.
Understanding “Referenceable”
In order to keep Kong Gateway performant, we’ve limited which fields can use vault references to refer to secrets. To help you understand where you can use values from a vault, we’ve tagged any field that supports secrets as “referenceable” in our plugin documentation.
This means that you can set a value of {vault://hcv/redis/password} and it will be resolved as expected.
Securing Redis with Secrets Management
We’ve done a lot of talking about Secrets Management, but what really made it click for me was to see an example. Let’s take a look at how to store our Redis password in HashiCorp Vault when using the Proxy Cache Advanced plugin.
Running Redis
As we’ll be using Redis, let’s start by running a server locally. I already have Redis installed, and I start the server using a configuration file provided via stdin to set a server password:
echo 'requirepass demo' | redis-server -
Running HashiCorp Vault
Next, I need a Vault server running to store our secret. To keep things simple, I’m running the server on my local machine with vault server -dev which starts Vault, creates a new kv store named secret and returns the root key for authentication (which looks like hvs.x4abajxI7TWduo0GQMnd5N8Q in my case).
Once we have a vault, we need to store some data in there. Create a redis secret by running the following:
export VAULT_ADDR="http://localhost:8200"
vault kv put -mount secret redis password=demo
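If you want to confirm the secret was written before wiring it into Kong, a quick check against the same dev server (assuming VAULT_ADDR is still exported) looks like:

```shell
# Read back just the password field of the redis secret.
# With the value stored above, this prints: demo
vault kv get -mount secret -field password redis
```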
At this point, we have everything we need to test out Secrets Management!
Using Proxy Caching Advanced
We’ll be using the Proxy Caching Advanced plugin to test our vault configuration. To enable the plugin, we first need to create a service and a route. Let’s proxy our test requests to mockbin.org:
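The exact commands aren’t shown here; a sketch using the Admin API (assuming Kong’s Admin API is listening on localhost:8001 — the service and route names are illustrative) might look like:

```shell
# Create a service pointing at mockbin.org (name is illustrative)
curl -i -X POST localhost:8001/services \
  --data name=mock-service \
  --data url=https://mockbin.org

# Create a route on that service matching the /mock path
curl -i -X POST localhost:8001/services/mock-service/routes \
  --data name=mock-route \
  --data "paths[]=/mock"
```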
We’ll also need to configure the proxy-cache-advanced plugin. I’m using the default values from the documentation for most fields, but take a look at config.redis.password. This is where we reference the value from our vault:
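The configuration block isn’t reproduced here; a sketch of enabling the plugin via the Admin API (the service name and Redis connection details are illustrative) could be:

```shell
# Enable proxy-cache-advanced with Redis as the cache strategy.
# Note that config.redis.password is a vault reference, not a literal
# secret — Kong resolves it from HashiCorp Vault at runtime.
curl -i -X POST localhost:8001/services/mock-service/plugins \
  --data name=proxy-cache-advanced \
  --data config.strategy=redis \
  --data config.redis.host=localhost \
  --data config.redis.port=6379 \
  --data "config.redis.password={vault://hcv/redis/password}"
```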
Keep an eye on your Kong Gateway logs at this point, as they’ll contain an error if your vault isn’t responding correctly. Here’s an error I received after setting the wrong HCV_TOKEN:
unable to resolve reference {vault://hcv/redis/password}
Finally, it’s time to make a request to our route. The first time you make a request the response will come from mockbin.org and the X-Cache-Status header in the response will be Miss.
curl -i localhost:8000/mock/request/hello
If you make the same request again, the X-Cache-Status header will return Hit. You can check that the cache is being populated by checking the keys in Redis too:
echo "KEYS *" | redis-cli -a demo
Conclusion
Congratulations! You just learned how Secrets Management works in Kong. Sensitive information is sensitive for a reason, and using Kong’s vault functionality you can keep those values away from prying eyes.
The environment, HashiCorp Vault, and AWS Secrets Manager drivers are production-ready today, but there’s one more thing I wanted to share with you. We’re also announcing support for Google Cloud Secrets Engine, so if you’re a GCP user don’t worry - we’ve got you covered.
I’m excited about this release, and I hope you are too. If you’ve got any questions, you can find me on Twitter at [@mheap](https://twitter.com/mheap/) or on the Kong Community Slack.