TL;DR

MCP has rapidly become a common way to expose tools to AI agents, but its architecture introduces operational, security, and coordination problems. Its benefits over existing tool-calling patterns are marginal, and real incidents and vulnerabilities highlight material risks.

What happened

MCP (a standardized protocol for exposing tools to AI agents) has seen rapid adoption because it makes publishing tool servers simple and brings visibility to the projects that ship them. Proponents present it as a solution to the "NxM" connector problem of wiring N agents to M toolsets, but the core need it fills is mainly serializing tool schemas and responses. MCP runs each tool server as a separate long-lived process defined by a JSON manifest, which abstracts schema generation and invocation away from the application but also forces every tool call across a process boundary.

That design creates coordination issues: tools launched independently have no awareness of one another, so toolboxes grow incoherent as the number of tools increases. Operationally, MCP subprocesses inherit only a small set of environment variables (USER, HOME, PATH), which complicates runtime dependency resolution, and many servers sit idle while still consuming resources. Security problems have surfaced too: unauthenticated servers, a directory traversal in an Anthropic server, supply-chain risks, and several CVEs and data-exposure incidents tied to MCP deployments.
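
For illustration, a typical MCP client configuration looks roughly like the sketch below: each entry names a server, the command used to launch it as a subprocess, and optionally extra environment variables. The field names follow the config format used by common MCP clients (for example Claude Desktop's claude_desktop_config.json); the path and environment variable here are illustrative.

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/Users/me/projects"],
      "env": {
        "EXAMPLE_API_KEY": "placeholder"
      }
    }
  }
}
```

Because the launched subprocess inherits only USER, HOME, and PATH, the command has to resolve on a default PATH; runtimes managed through nvm or a Python virtual environment may not be found unless the config points at them explicitly.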

Why it matters

  • Security: multiple high-severity CVEs and unauthenticated servers show MCP deployments can expose systems and data.
  • Operational overhead: separate server processes complicate dependency management, resource usage, and debugging for users.
  • Agent effectiveness: large, disorganized toolsets reduce agent performance and increase token costs for tool instructions.
  • Trust model shift: MCP shifts trust from centralized API controls to third-party daemons running on users’ machines, undermining existing security practices.

Key facts

  • MCP aims to address the “NxM problem” of connecting many agents to many toolsets by standardizing how tools are exposed and invoked.
  • Function-calling can be performed without MCP; models receive tool lists and return JSON parameters directly in current APIs.
  • Gemini exposes tools via functionDeclarations nested inside a tools array, while OpenAI uses a flat tools array with type:"function" (a schematic comparison follows this list).
  • MCP runs each server as a separate process started by a JSON configuration; tool logic executes out-of-process and crosses process boundaries on each call.
  • Tool primitives and resources exist in MCP, but practical adoption has been dominated by tool calling.
  • Agents generally perform worse as the number of tools increases; OpenAI recommends keeping tool counts well below 20.
  • MCP servers inherit only USER, HOME, and PATH environment variables, which can break common runtime setups like nvm or virtual environments.
  • A scan found 492 MCP servers running without client authentication or traffic encryption; Anthropic’s Filesystem MCP Server had a directory traversal issue (CVE-2025-53110).
  • Documented security incidents and CVEs tied to MCP-related projects include CVE-2025-6514, CVE-2025-49596, and CVE-2025-53967, plus an Asana tenant-isolation data exposure and a Supabase data exfiltration via prompt injection → tool call.
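
The schema difference noted above is a matter of shape rather than capability. A minimal sketch, with the dicts written as they would appear in a request body rather than via any SDK; the get_weather tool and its fields are illustrative, and each provider's docs define the exact required fields:

```python
# Illustrative tool declarations for the same hypothetical get_weather tool.

weather_params = {
    "type": "object",
    "properties": {"city": {"type": "string"}},
    "required": ["city"],
}

# OpenAI: a flat `tools` array, each entry tagged with type "function".
openai_tools = [
    {
        "type": "function",
        "name": "get_weather",
        "description": "Look up current weather for a city.",
        "parameters": weather_params,
    }
]

# Gemini: functionDeclarations nested inside each entry of the `tools` array.
gemini_tools = [
    {
        "functionDeclarations": [
            {
                "name": "get_weather",
                "description": "Look up current weather for a city.",
                "parameters": weather_params,
            }
        ]
    }
]
```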

What to watch next

  • Whether vendors or the MCP specification add mandatory authentication, signing, or provenance checks for servers (not confirmed in the source).
  • Enterprise policies or platform vendors restricting or banning MCP-based servers due to supply-chain and runtime risks (not confirmed in the source).
  • Efforts by agent frameworks to standardize tool schemas or to provide central orchestration that avoids spawning many independent runtimes (not confirmed in the source).

Quick glossary

  • MCP: A protocol/specification for publishing and invoking tool servers for AI agents, typically run as separate long-lived processes.
  • NxM problem: The integration challenge of connecting N agents to M toolsets, which can naively require N×M bespoke connectors.
  • Function-calling / tool-calling: A pattern where a language model returns structured output (often JSON) naming a function or tool to invoke, along with its parameters (a schematic round trip follows this glossary).
  • Process boundary: The separation between distinct operating system processes; crossing it incurs serialization, IPC, and runtime overhead.
  • Prompt injection: An attack technique that manipulates model prompts to cause unintended behavior, such as invoking tools or exfiltrating data.
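
As a concrete illustration of that tool-calling loop (names and message shapes are schematic, not tied to any particular SDK): the model emits a structured request naming a tool and its arguments, the application executes the tool, and the result is passed back as another message.

```python
# Schematic tool-calling round trip.

# 1. The model's structured output selects a tool and supplies JSON arguments.
model_tool_call = {"name": "get_weather", "arguments": {"city": "Paris"}}

# 2. The application (not the model) runs the matching function.
def get_weather(city: str) -> dict:
    return {"city": city, "temp_c": 18, "conditions": "cloudy"}  # stub result

result = get_weather(**model_tool_call["arguments"])

# 3. The result is appended to the conversation so the model can continue.
tool_message = {"role": "tool", "name": "get_weather", "content": result}
```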

Reader FAQ

Is MCP required for function-calling?
No. The source notes function-calling works by passing tool schemas to models; MCP is not necessary for this capability.

Does MCP make tool integration easier for publishers?
Yes; MCP lowers publication overhead by letting authors ship a small manifest and launch command to expose tools to agents.
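
For a sense of how low that overhead is, here is a minimal sketch of a tool server using the FastMCP helper from the official MCP Python SDK (assuming the mcp package is installed; the add tool is purely illustrative):

```python
# Minimal MCP server sketch using the Python SDK's FastMCP helper.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-tools")

@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two integers."""
    return a + b

if __name__ == "__main__":
    # Runs over stdio so an MCP client can launch this file as a subprocess.
    mcp.run()
```

A client would then point its JSON config at something like `python path/to/this_file.py` as the launch command.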

Are there known security incidents related to MCP?
Yes. The source lists several CVEs (including CVE-2025-6514, CVE-2025-49596, CVE-2025-53967), a directory traversal in an Anthropic server, a Supabase data exfiltration, and an Asana tenant isolation exposure.

Will MCP replace existing agent frameworks and tooling?
Not confirmed in the source.

Sources

  • MCP is a fad (blog post, December 12, 2025)
