More than 80% of developers and business leaders say AI investments have already created opportunities for new products or services, according to Kong’s 2024 API Impact Report. Clearly, AI has proven its value and place in the enterprise, but with new innovations come new potential vulnerabilities.
As organizations adopt AI tools and large language models (LLMs) while navigating the rising risk of AI-enhanced threats, what have tech leaders actually experienced? And what are they most concerned about in the year ahead?
In API Security Perspectives 2025: AI-Enhanced Threats and API Security, we surveyed 700 IT leaders about the state of API security, the rising risk of AI-enhanced threats, and how prepared their organizations really are.

Nearly 75% of respondents expressed serious concern about AI-enhanced attacks, yet a notable disconnect emerged. While 55% of organizations experienced an API security incident in the past year (and one-third of those described the incident as severe), 85% say they’re confident in their organization’s security capabilities. That confidence may be misplaced: 77% acknowledge that integrating AI and LLMs into their API ecosystems could introduce significant security risks.
These API security incidents can also come with substantial costs: 47% of those who experienced an incident in the past 12 months reported remediation costs of more than $100,000, and 20% said costs exceeded $500,000.
The gap between perception and reality requires attention, particularly as API attacks are projected to grow by 548% by 2030. Moreover, API breaches lead to more leaked data than the average security breach, Gartner reports.