# You Secured Your APIs. Then You Added AI.
Join us for a fireside chat with Harness
Your organization has adopted GenAI. Your developers are hitting LLM APIs. Your agents are talking to tools over MCP. Your systems are communicating agent-to-agent over A2A. Events are triggering autonomous workflows nobody fully documented. And somewhere in all of that traffic, sensitive data is moving along paths nobody has mapped.
In this fireside chat, Kong's Dan Temkin and Harness's Adam Arellano dig into the specific, technical risks that emerge across the full AI connectivity stack: prompt injection via MCP tool calls, PII leaking through LLM request payloads, trust assumptions that break down in A2A communication, event streams that bypass every security control, and the classic synchronous APIs still powering it all.
No product slides. No sales pitch. Just a candid conversation about what the threat landscape actually looks like and what a realistic governance architecture can do about it.

