Video

Kong AI Gateway: Advanced Semantic Caching, Routing, and Firewall for LLM

Revisit one of our sessions from API Summit 2024!

Providing context to the LLM you're working with is highly beneficial for getting more precise, relevant responses. Furthermore, caching responses to semantically similar queries makes rapid responses possible. In this session, we'll explore the advanced Semantic Caching, Routing, and Firewall capabilities of the Kong AI Gateway.
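
As a rough illustration of the semantic caching idea mentioned above, the sketch below compares an incoming prompt's embedding against previously cached prompts and returns the stored response when similarity crosses a threshold, skipping the LLM call entirely. This is a minimal sketch of the general technique, not Kong's implementation: the `embed` function and the `0.9` threshold are assumptions for illustration, and a production gateway would use a vector database rather than a linear scan.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

class SemanticCache:
    def __init__(self, embed, threshold=0.9):
        self.embed = embed          # hypothetical embeddings function: str -> list[float]
        self.threshold = threshold  # similarity required for a cache hit (assumed value)
        self.entries = []           # list of (embedding, response) pairs

    def lookup(self, prompt):
        """Return a cached response if a semantically similar prompt was seen before."""
        query = self.embed(prompt)
        for vector, response in self.entries:
            if cosine_similarity(query, vector) >= self.threshold:
                return response  # cache hit: no LLM round trip needed
        return None  # cache miss: caller queries the LLM, then calls store()

    def store(self, prompt, response):
        """Cache the LLM's response, keyed by the prompt's embedding."""
        self.entries.append((self.embed(prompt), response))
```

Because the lookup matches on meaning rather than exact text, rephrased questions like "How do I reset my password?" and "What's the way to change my password?" can resolve to the same cached answer.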