Kong does not rely on LiteLLM, whether distributed via PyPI or otherwise, for any component of our runtime stack.
As you may know, a supply chain compromise of LiteLLM, a popular open-source AI proxy library, was publicized yesterday. The malicious release, version 1.82.8, distributed via PyPI, executed a credential-stealing script capable of exfiltrating environment variables, cloud credentials, SSH keys, and other secrets from any environment where it was installed.
Kong did not incorporate or use the LiteLLM library in its products.
If your organization uses LiteLLM independently, whether in development environments, CI/CD pipelines, or alongside other tooling, we encourage you to review the original GitHub disclosure and treat any environment that ran "pip install litellm==1.82.8" as potentially compromised.
Per Comet's coverage, it is also worth noting that several other popular projects and agent frameworks depend on LiteLLM, including CrewAI, Browser-use, Opik, Mem0, DSPy, Agno, Guardrails, and Camel-AI. According to Comet, “Anyone who ran pip install or pip install --upgrade on any of these packages during the approximately 4-hour exposure window (roughly 09:00–13:30 UTC on March 24) could have pulled the compromised litellm as a transitive dependency.” We recommend that blog post for further detail.
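Because the exposure in those cases comes through a transitive dependency, it can help to know which installed packages declare litellm as a requirement. The sketch below, with illustrative function names, scans the active environment's package metadata for such dependents; it does not replace checking install timestamps against the exposure window.

```python
# Rough sketch: find installed packages that declare litellm as a
# dependency, i.e. could have pulled it in transitively. Names are
# illustrative; this is an indicator, not a full audit.
import re
from importlib import metadata

def requirement_name(req: str) -> str:
    """Extract the bare package name from a requirement string such as
    'litellm>=1.0; python_version >= "3.8"'."""
    match = re.match(r"[A-Za-z0-9][A-Za-z0-9._-]*", req)
    return match.group(0).lower() if match else ""

def dependents_of(target: str) -> list[str]:
    """List installed distributions that declare `target` as a requirement."""
    target = target.lower()
    found = []
    for dist in metadata.distributions():
        for req in dist.requires or []:
            if requirement_name(req) == target:
                found.append(dist.metadata["Name"])
                break
    return sorted(set(found))

if __name__ == "__main__":
    deps = dependents_of("litellm")
    if deps:
        print("Packages depending on litellm:", ", ".join(deps))
    else:
        print("No installed package declares litellm as a dependency")
```

As with the version check, this inspects only the active interpreter, and packages that vendor or optionally import litellm without declaring it in metadata will not appear.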
If you have questions about Kong's security posture or would like to talk through your AI infrastructure architecture, reach out to your Kong account team or contact security@konghq.com.
—The Kong Security Team