Tool-space Interference: An emerging problem for LLM agents
- Karen Easterbrook, Microsoft; Tyler Payne, Microsoft
Tool-space interference occurs when adding an otherwise reasonable tool or agent to an existing agent or team reduces end-to-end task performance. We study this phenomenon in an analysis of 1,470 MCP servers and make practical suggestions for MCP client, server, and marketplace developers.
Explore more
- Tool-space interference in the MCP era: Designing for agent compatibility at scale
- MCP Interviewer on GitHub
Transcript
Tool-space Interference: An emerging problem for LLM agents
[MUSIC]
[MUSIC FADES INTO SWEEPING SOUND]
KAREN EASTERBROOK: As we progress in all directions of research at MSR [Microsoft Research], we stay true to a core part of our mission: advancing AI responsibly by understanding not just what these systems can do but how and why they sometimes fail.
Tyler Payne, a senior research software engineer with Microsoft Research AI Frontiers in New York City, is investigating how AI agents perform when they’re given access to multiple tools—from calculators to code interpreters. Surprisingly, his findings show that adding more tools can sometimes hurt performance, introducing “tool-space interference.”
Over to you, Tyler.
[MUSIC]
[MUSIC FADES INTO SWEEPING SOUND]
TYLER PAYNE: Hi, my name’s Tyler, and I’m a research engineer at Microsoft Research AI Frontiers.
Today, I’m going to be talking about an emerging problem for LLM agents that we call tool-space interference. This was an exploration done over the summer of 2025 in collaboration with my colleagues here at AI Frontiers.
AI agents powered by LLMs have become a popular topic in both research and industry. In general, an agent is a system that can sense and affect its environment in pursuit of a goal. LLM agents are usually software systems that equip LLMs with tools they can use to understand and manipulate their environment to complete tasks on behalf of their users. Often these agents act in computer environments, where they can browse the web, write code, and manipulate the file system.
For example, Magentic-One is a popular generalist agent developed by my collaborators here at MSR. It is designed as a multi-agent system, which is a useful programming abstraction that delegates certain capabilities to subagents. Specifically, in Magentic-One, these subagents are the Coder, Terminal, Web Surfer, and File Surfer, all of which are coordinated by a top-level Orchestrator agent.
Now let’s imagine you ask Magentic-One to solve a git-related task. First, the Orchestrator must decide whether to delegate that task to the Terminal agent or the Web Surfer agent. But when building a system like Magentic-One, we can evaluate its behavior on such tasks and fix issues by adjusting any part of the system.
So, for example, we can provide in-context examples to the Orchestrator if it decides to delegate to the wrong subagent. Likewise, we can adjust the tools and prompts of these subagents directly. In this way, Magentic-One is a vertically integrated system.
But in the past year, the Model Context Protocol, or MCP, has exploded in popularity. MCP enables developers to bundle their tools into a server that can be easily shared and consumed by LLM agents. Most popular LLM agents like Claude Code, Cursor, and GitHub Copilot already support MCP servers. This lets any user extend their agent at runtime, breaking the assumptions of vertical integration.
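For illustration, here is a minimal sketch of what such a server can look like. It assumes the official Python MCP SDK ("mcp" on PyPI) and its FastMCP helper; the server name and tool are hypothetical examples, not from the talk.

```python
# Minimal MCP server sketch (assumes the official Python MCP SDK and FastMCP).
# The "git-helper" name and git_status tool are illustrative examples only.
import subprocess

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("git-helper")

@mcp.tool()
def git_status(repo_path: str) -> str:
    """Summarize the working-tree status of a local git repository."""
    result = subprocess.run(
        ["git", "-C", repo_path, "status", "--short"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout or "clean working tree"

if __name__ == "__main__":
    # Serves over stdio by default, so any MCP client can launch and connect to it.
    mcp.run()
```

Any MCP-capable client, such as the ones named above, can launch a script like this and offer its tools to the model at runtime.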
Now while this horizontal extensibility is exciting in principle, in practice, we observe that it can actually reduce LLM agents’ performance. We call this phenomenon tool-space interference.
In order to study tool-space interference, we developed MCP Interviewer, a CLI tool that automatically analyzes MCP servers, collecting descriptive statistics like the number of tools they provide, the depth and length of those tools’ schemas, and many more features. It can also use an LLM to generate a functional test plan that invokes each of the server’s tools to test that they behave as expected. You can also use MCP Interviewer to do qualitative LLM-as-a-judge evaluation of the server.
We’re excited that MSR enables us to share these tools with the world, and we’ve open sourced the MCP Interviewer on GitHub.
Back to the research. We collected nearly 1,500 real MCP servers from public registries, including Smithery.ai and Docker MCP Hub. We then ran the MCP Interviewer on each of these servers and analyzed the results, which we lay out in detail in our blog post on the MSR blog.
To recap our main findings, we identified a few common issues that can cause tool-space interference. First is tool name collisions. Two tools cannot have the same name, and LLM provider APIs will reject requests if there are name collisions between tools. MCP provides no formal guidance on namespacing, so clients have each had to develop their own strategies, like prefixing the server name before the tool name. Beyond exact collisions, though, tool names can also have significant semantic overlap, like “search,” “web_search,” “bing_search,” and “google_search.” This can also confuse agents.
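A client-side namespacing strategy of the kind just described can be sketched in a few lines; the double-underscore separator and the dictionary shape of a tool definition here are illustrative, not any particular client's implementation.

```python
# Sketch of client-side namespacing: prefix each tool name with its server name
# so that tools from different servers cannot collide exactly.
def namespace_tools(server_name: str, tools: list[dict]) -> list[dict]:
    """Return copies of tool definitions with names prefixed by the server name."""
    namespaced = []
    for tool in tools:
        renamed = dict(tool)
        renamed["name"] = f"{server_name}__{tool['name']}"
        namespaced.append(renamed)
    return namespaced

# Two servers that both expose a "search" tool no longer collide exactly,
# though the semantic overlap between them remains.
web = namespace_tools("web", [{"name": "search", "description": "Web search"}])
docs = namespace_tools("docs", [{"name": "search", "description": "Doc search"}])
print([t["name"] for t in web + docs])  # ['web__search', 'docs__search']
```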
Next, we identified servers that expose too many tools. OpenAI’s API accepts a maximum of 128 tools, and their documentation recommends keeping that number well below 20. But we observe many servers above this 20-tool threshold.
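To check a server of your own against that guidance, a short script using the official Python MCP SDK can count the tools a stdio server exposes; the 20-tool budget and the server launch command below are assumptions for the example, not part of our analysis.

```python
# Sketch: connect to a local stdio MCP server and warn if it exposes more
# tools than a chosen budget. Assumes the official Python MCP SDK ("mcp").
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

TOOL_BUDGET = 20  # assumed budget, mirroring the guidance discussed above

async def count_tools(command: str, args: list[str]) -> None:
    params = StdioServerParameters(command=command, args=args)
    async with stdio_client(params) as (read_stream, write_stream):
        async with ClientSession(read_stream, write_stream) as session:
            await session.initialize()
            tools = (await session.list_tools()).tools
            print(f"{len(tools)} tools exposed")
            if len(tools) > TOOL_BUDGET:
                print("Warning: consider splitting this server or filtering its tools.")

# Hypothetical invocation for a server started with `python my_server.py`:
# asyncio.run(count_tools("python", ["my_server.py"]))
```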
Long contexts can also degrade LLM tool-calling performance, and MCP provides no limit on the length of tool responses. We identified some tools that returned more than 128,000 tokens in a single response, overflowing the available context of models like GPT-4o and reducing the number of possible tool calls for other long-context models like Gemini.
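One way a client can guard against this, previewing the advice below, is to truncate tool responses to a token budget before handing them to the model. This sketch uses the tiktoken library for counting; the 16,000-token budget is an arbitrary example, not a recommendation from our study.

```python
# Sketch of a client-side guard: cap a tool response at a token budget before
# it is added to the model's context. Uses tiktoken; the budget is arbitrary.
import tiktoken

def truncate_tool_response(text: str, max_tokens: int = 16_000) -> str:
    enc = tiktoken.get_encoding("cl100k_base")
    tokens = enc.encode(text)
    if len(tokens) <= max_tokens:
        return text
    return enc.decode(tokens[:max_tokens]) + "\n[... response truncated by the MCP client ...]"
```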
Finally, different models need to be prompted differently. For example, OpenAI recommends providing in-context examples of tool calls for chat completion models but discourages them for reasoning models. An MCP server generally does not know what model is connected to its client, and so its tool descriptions may work better for some models than others.
So what can you do?
As a user of MCP servers, you can use the MCP Interviewer tool to test servers before using them. As the developer of an MCP client, you can intercept long tool responses before submitting them to your LLM provider. As an MCP server developer, you should expose as few tools as possible, keep tool responses short, give tools unique and descriptive names, and report which models and clients you tested your server with. MCP marketplaces should also test uploaded servers, report their findings, and even reject servers that fail minimum criteria, for example by exceeding a maximum tool count.
To learn more, please read our blog post and check out the MCP Interviewer on GitHub.
- Karen Easterbrook, Senior Director
- Tyler Payne, Senior Research Software Engineer