Engineering
February 3, 2026
9 min read

Model Context Protocol (MCP) Security: How to Restrict Tool Access Using AI Gateways

Deepak Grewal
Staff Solutions Engineer

For too long, the Model Context Protocol (MCP) has operated on a principle of open access: connect an AI agent to an MCP server, and it gets access to every single tool that server offers. While this approach is simple for initial experimentation, it quickly becomes a liability in production. Exposing unneeded tools to an agent creates a significant security risk from over-permissioned agents and a severe performance hit known as "Context Rot" that degrades an LLM's ability to reliably select the right tool. 

This post breaks down why traditional prompt-injection defenses miss the more fundamental issue of tool governance, and introduces a robust, gateway-level solution for implementing tool-specific Access Control Lists (ACLs), ensuring your AI agents only see — and can only use — the capabilities they absolutely need.

TL;DR?

MCP servers expose all tools by default. There are two problems with this: security (agents get capabilities they shouldn't have) and performance (too many tools degrade LLM tool selection). The solution? Put a gateway between agents and MCP servers that filters tools based on who's asking. Default deny, role-based access, credential isolation.

Who this is for:

  • Platform engineers deploying AI agents to production
  • Anyone connecting agents to multiple MCP servers
  • Teams hitting context limits or seeing degraded tool selection
  • People who want gateway-level security patterns for MCP

Skip this if:

  • You're experimenting with MCP locally
  • Your agents use 1-2 MCP servers with <20 tools total
  • You're looking for prompt injection mitigations (not covered here)

Understanding MCP tool exposure

MCP servers expose tools by default — all of them. Connect an agent to an MCP server, and it gets access to every capability that server offers. No scoping, no filtering, no restrictions.

This is fine for experimentation. It's a problem in production.

GitHub's MCP server exposes 40+ tools. Jira, Confluence, Slack — each adds more. Connect an agent to three or four MCP servers, and you're easily looking at 100+ tools loaded into context before your agent does anything useful.

Most MCP security discussions focus on prompt injection and tool poisoning. These are important threats, but they miss something more immediate: what happens when you hand an AI agent a toolbox it can't effectively use?

This post covers why restricting MCP tools matters for security and performance and how to implement tool-level access control at the gateway layer.

The dual problem: AI agent security and context window limits

Restricting tools solves two distinct problems:

Security: Over-permissioned agents

An AI agent with access to merge_pull_request can merge code. An agent with delete_repository can delete repositories. Most agents don't need these capabilities, but MCP servers expose everything by default.

This creates shadow tooling—capabilities your agents technically have but shouldn't use. Traditional API security solved this with scopes and permissions. MCP needs the same treatment.

Efficiency: Context rot

Every tool you expose to an agent consumes context. The tool name, description, and parameter schema all go into the prompt. Load 40 tools, and you've burned thousands of tokens on tool definitions alone.
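For a sense of scale, here is a single (simplified, illustrative) entry as it might appear in an MCP server's tools/list response; every field of every entry is handed to the model:

{
  "name": "create_issue",
  "description": "Create a new issue in a GitHub repository.",
  "inputSchema": {
    "type": "object",
    "properties": {
      "owner": { "type": "string", "description": "Repository owner" },
      "repo": { "type": "string", "description": "Repository name" },
      "title": { "type": "string", "description": "Issue title" },
      "body": { "type": "string", "description": "Issue body text" }
    },
    "required": ["owner", "repo", "title"]
  }
}

Multiply that by 40+ tools per server and the token budget disappears quickly.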

Worse, the agent's ability to select the right tool degrades as options increase.

Context rot: Context window optimization research

Context rot refers to performance degradation when LLMs process increasingly long inputs. As context grows, models don't degrade gracefully — they become unreliable. They hallucinate parameters, call the wrong tools, and miss instructions.

Anthropic's guidance indicates that tool selection accuracy degrades significantly beyond 30-50 tools. In practice, using Claude Opus 4.5 with its 200K context window, I've observed reliability beginning to decline around 60% context utilization.

[Screenshot: Claude Code context allocation. System tools consume 8.4% of the context window before any conversation begins.]

References:

  • Chroma: Context Rot - How Increasing Input Tokens Impacts LLM Performance
  • Factory AI: The Context Window Problem
  • arXiv: Solving Context Window Overflow in AI Agents

Model providers are responding. Anthropic's tool search feature addresses this directly:

  • Claude can dynamically discover and load tools on-demand
  • Tools aren't loaded into context upfront

This is the right direction. But it's Claude-specific. If your agents use multiple models, you need:

  • Tool governance that works across OpenAI, Gemini, and open-source models
  • Centralized control, not per-agent configuration

That's where gateway-level control comes in.

The solution: AI gateway tool access control

The pattern is straightforward: put an MCP gateway between your agents and MCP servers. The gateway intercepts tool lists and filters them based on who's asking.

About the examples below

The examples throughout this post use GitHub's MCP server as the backend, though the patterns apply to any MCP server.

The configurations use Kong AI Gateway with the ai-mcp-proxy plugin. Kong is configured declaratively using YAML files that define:

  • Services - backend MCP servers to connect to
  • Routes - URL paths that agents use to access services
  • Plugins - policies applied to requests (authentication, transformation, ACLs)
  • Consumer groups - categories of agents with different permissions

You can apply these patterns with other gateways, but the specific syntax will differ.
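As a rough skeleton (decK-style declarative file; the backend URL is a placeholder, and the ai-mcp-proxy config is elided because its exact fields are defined by the plugin's documentation), the pieces fit together like this:

_format_version: "3.0"

services:                                 # backend MCP servers to connect to
  - name: github-mcp
    url: https://github-mcp.example.internal/mcp   # placeholder backend URL
    routes:                               # paths agents use to reach the service
      - name: github-mcp-route
        paths:
          - /mcp/github
        plugins:                          # policies applied on this route
          - name: ai-mcp-proxy
            config: {}                    # tool lists, claim mapping, etc. (see Layer 4)

consumer_groups:                          # categories of agents with different permissions
  - name: github-cicd-agents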

Kong consumer groups primer

Consumer groups are Kong's mechanism for categorizing API consumers. The key pattern:

  1. Identity first - Identify who's making the request (via JWT token validation)
  2. Map to group - Extract a claim from the token that specifies the consumer group
  3. Apply ACLs - Filter tools based on the consumer group's permissions

This means ACLs are attached to groups, not individual consumers. You define groups once, configure their tool access, then map any number of consumers to those groups via JWT claims.
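Defining the groups themselves is a one-liner each in declarative config. The group names below are illustrative and are reused in the persona example later in this post:

consumer_groups:
  - name: github-security-agents    # read-only scanning agents
  - name: github-cicd-agents        # pipeline automation agents
  - name: github-code-reviewers     # PR review agents

What each group is allowed to do is defined where the tool ACLs are attached, shown under Layer 4 below.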

Progressive security model

Kong's MCP gateway implements tool governance as a progressive security model with four layers:

Layer 1: Pass-through proxy

┌─────────┐      ┌─────────┐      ┌────────────┐
│  Agent  │ ───► │ Gateway │ ───► │ MCP Server │
└─────────┘      └─────────┘      └────────────┘
     │                                   │
     └── Agent provides GitHub token ────┘

Gateway proxies requests to MCP servers. Agents still provide their own credentials. You get centralized logging and analytics but no access control yet.

Layer 2: Gateway-managed credentials

┌─────────┐      ┌─────────┐      ┌────────────┐
│  Agent  │ ───► │ Gateway │ ───► │ MCP Server │
└─────────┘      └─────────┘      └────────────┘
     │                │                  │
     │                └── Injects token ─┘
     │                    from vault
     └── No credentials needed

Gateway injects backend credentials from a secrets vault. Agents never see the underlying tokens.
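One generic way to sketch this is with the request-transformer plugin and a Kong vault reference. This assumes an environment-variable vault and assumes the header value field accepts vault references in your Kong version; depending on your setup, the ai-mcp-proxy plugin may handle credential injection itself, so treat this purely as an illustration:

plugins:
  - name: request-transformer
    route: github-mcp-route
    config:
      add:
        headers:
          # resolved by the gateway at request time; never present in any agent config
          - "Authorization:Bearer {vault://env/github-pat}"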

Layer 3: OAuth/OIDC authentication

┌─────────┐      ┌─────────┐      ┌────────────┐
│  Agent  │ ───► │ Gateway │ ───► │ MCP Server │
└─────────┘      └─────────┘      └────────────┘
     │                │                  │
     │                ├── Validates      │
     │                │   IDP token      │
     │                └── Injects GitHub ┘
     │                    token
     └── Presents IDP token (Entra, Okta, etc.)

Agents must present a valid token from your identity provider before accessing any MCP endpoint.
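In Kong this layer is typically the openid-connect plugin (or the jwt plugin for simpler setups). A minimal sketch with only the most common fields, to be checked against the plugin documentation for your Kong version:

plugins:
  - name: openid-connect
    route: github-mcp-route
    config:
      issuer: https://login.example.com/.well-known/openid-configuration
      client_id:
        - mcp-gateway
      auth_methods:
        - bearer            # agents send "Authorization: Bearer <IDP token>"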

Layer 3b: Hybrid authentication (gateway auth with pass-through MCP credentials)

┌─────────┐      ┌─────────────────────┐      ┌────────────┐
│  Agent  │ ───► │       Gateway       │ ───► │ MCP Server │
└─────────┘      └─────────────────────┘      └────────────┘
     │                     │                        │
     ├── IDP token ────────┤                        │
     │   (validated)       │                        │
     │                     │                        │
     └── MCP credentials ──┼── Passed through ──────┘
         (GitHub PAT,         (gateway doesn't
          Jira key, etc.)      manage these)

Not every use case requires the gateway to manage MCP credentials. If your agents connect to multiple backend MCP servers—each with individual, user-specific access—you may want the gateway to handle authentication and governance while letting credentials pass through to the backend.

In this model:

  • The gateway validates the agent's identity via JWT
  • Tool-level ACLs still apply based on consumer group
  • The agent provides its own credentials for the specific MCP server
  • Credentials pass through to the backend unchanged

This is useful when users have varying access levels across different MCP servers, or when centralizing credential management isn't practical. The gateway still provides authentication, tool filtering, logging, and rate limiting—just not credential injection.

Layer 4: Tool-level ACLs

┌─────────┐      ┌─────────────────────┐      ┌────────────┐
│  Agent  │ ───► │       Gateway       │ ───► │ MCP Server │
└─────────┘      │                     │      └────────────┘
     │           │ 1. Validate token   │           │
     │           │ 2. Map to consumer  │           │
     │           │    group via claims │           │
     │           │ 3. Filter tools     │           │
     │           │    by ACL           │           │
     │           └─────────────────────┘           │
     │                                             │
     └── Sees only allowed tools ──────────────────┘
         (e.g., 2 tools instead of 40)

Based on JWT claims, the gateway maps agents to consumer groups and filters which tools each group can see.

Example JWT payload:
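An illustrative token body (standard claims abbreviated, values made up):

{
  "iss": "https://login.example.com",
  "sub": "cicd-pipeline-bot",
  "aud": "mcp-gateway",
  "exp": 1767225600,
  "github-mcp-access": "github-cicd-agents"
}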

The github-mcp-access claim contains the consumer group name. When this token hits the gateway, the agent gets mapped to github-cicd-agents and sees only the tools allowed for that group.

Kong configuration:
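In outline, the route for this layer stacks token validation in front of the MCP proxy. The ai-mcp-proxy fields are indicated by comments rather than spelled out, since the exact schema belongs to the plugin's documentation:

routes:
  - name: github-mcp-route
    service: github-mcp
    paths:
      - /mcp/github
    plugins:
      - name: openid-connect       # 1. validate the IDP token (config as in Layer 3)
        config: {}
      - name: ai-mcp-proxy         # 2. map the github-mcp-access claim to a consumer group
        config: {}                 # 3. filter tools by that group's allow-list
                                   #    (claim name and per-group tool lists go here)

consumer_groups:
  - name: github-cicd-agents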

The key insight: the tool list terminates at the gateway. The backend MCP server might expose 40 tools, but your security-scanner agent only sees 2.

Implementing MCP tool restrictions

The ai-mcp-proxy plugin supports multiple approaches to restricting tools:

Option 1: Consumer groups with JWT claim mapping

Your identity provider includes a claim (e.g., github-mcp-access) that specifies which consumer group the agent belongs to. The gateway maps this automatically.

Option 2: Static consumer assignment

Define consumers explicitly and assign them to groups. Useful when agents use API keys instead of JWT tokens. These keys are generated and managed by the gateway—essentially virtual credentials that never touch your backend systems.
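A sketch of the static approach, assuming key-auth for the agent-facing credential (group membership syntax can vary slightly between decK versions, so verify against your tooling):

consumers:
  - username: security-scanner
    keyauth_credentials:
      - key: "<gateway-generated virtual key>"   # requires the key-auth plugin on the route
    groups:
      - name: github-security-agents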

Option 3: Route-based separation

Create separate routes for different agent types, each with its own tool configuration. Simpler to reason about, but requires agents to know which endpoint to use.

The common thread: default deny. Tools not explicitly listed in the tools array with an allow ACL are blocked.

Practical example: GitHub MCP with three agent personas

Here's a real scenario. You have GitHub's MCP server with 40+ tools. You want three types of AI agents to use it:

Note: Tool names are illustrative and may differ from the actual GitHub MCP server implementation.

[Diagram: Tool-level ACLs. The gateway filters 40 GitHub MCP tools down to 2-8 tools per agent type.]

Notice what's missing: merge_pull_request, push_files, delete_file. No AI agent gets these. Human-in-the-loop by design.

Full configuration:
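A condensed sketch of its shape. Tool names are illustrative (see the note above), the allow-lists are abbreviated, and the ai-mcp-proxy field layout is indicated in comments only; the real lists hold the 2, 7, and 8 tools per persona described below:

consumer_groups:
  - name: github-security-agents    # security scanner: read-only, 2 tools
  - name: github-cicd-agents        # CI/CD agent: read plus limited write, 7 tools
  - name: github-code-reviewers     # code reviewer: read plus comment, 8 tools

# Per-group tool allow-lists are attached in the ai-mcp-proxy plugin config
# (field names per the plugin docs). Anything not listed is denied, so
# merge_pull_request, push_files, and delete_file never reach any agent.
#
#   github-security-agents : list_code_scanning_alerts, get_code_scanning_alert
#   github-cicd-agents     : get_pull_request, list_workflow_runs, create_issue, ...
#   github-code-reviewers  : get_pull_request, get_pull_request_diff, add_comment, ...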

When the security scanner connects and requests the tool list, it sees exactly two tools. The CI/CD agent sees seven. The code reviewer sees eight. The backend GitHub MCP server still has 40+, but agents only see what they need.

Credential isolation for AI agent security

There's a secondary benefit to gateway-managed credentials: agents never see your GitHub token.

Without gateway:
┌─────────┐                    ┌────────────┐
│  Agent  │ ─── GitHub PAT ──► │ MCP Server │
└─────────┘                    └────────────┘
     │
     └── Agent config contains the credential
         (compromised agent = exposed token)

With gateway:
┌─────────┐      ┌─────────┐      ┌────────────┐
│  Agent  │ ───► │ Gateway │ ───► │ MCP Server │
└─────────┘      └─────────┘      └────────────┘
     │                │
     │                └── GitHub PAT injected
     │                    from vault
     └── IDP token only (scoped to agent's permissions)

Even if an agent is compromised, the attacker gets an IDP token scoped to that agent's consumer group—not your GitHub PAT with repo admin access.

When to implement MCP tool restrictions

Don't wait until you have 40 tools. Start with these principles:

From day one:

  • Default deny on all tool ACLs
  • Gateway-managed credentials (agents don't hold backend secrets)
  • Logging enabled (you'll want the audit trail)
  • Don't worry about latency: gateway overhead for tool filtering is typically sub-10ms

As you scale:

  • Define consumer groups by agent function, not by team
  • Use JWT claims mapping if your agents already carry role information
  • Review tool access quarterly—permissions accumulate

Signs you waited too long:

  • Agents calling tools they shouldn't have access to
  • Context windows filling up before agents finish tasks
  • "We don't know what tools our agents are actually using"

The cost of adding restrictions later is higher than starting restrictive. Shadow tooling is easier to prevent than to clean up.


References

  • Chroma Research: Context Rot
  • Factory AI: The Context Window Problem
  • arXiv: Solving Context Window Overflow in AI Agents
  • Anthropic: Tool Search Tool
  • Mirantis: Securing MCP for Enterprise Adoption