Agent sprawl is the next identity sprawl
According to Microsoft’s recent Cyber Pulse research, more than 80% of Fortune 500 companies are already deploying active AI agents built with low-code and no-code tools across their organizations. These agents now support everything from sales and finance to security operations and customer engagement, often without centralized oversight.
This rapid adoption introduces a familiar risk pattern. Just as identity sprawl once outpaced traditional governance models, AI agent sprawl is now accelerating beyond many organizations’ visibility and control capabilities. These systems can reason, access enterprise data, and trigger actions at machine speed. Without structured oversight, that autonomy quickly becomes exposure.
- The first risk in AI adoption is invisibility. Agents are often created inside business units, embedded in workflows, or deployed to solve narrow operational problems. Over time, they multiply.
A control plane begins with inventory. Security leaders must be able to answer fundamental questions: How many agents exist? Who created them? What are they connected to? What data can they access? If those answers are unclear, control does not exist.
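To make the inventory idea concrete, here is a minimal sketch of what an agent registry might track in order to answer those questions. All field names and the `AgentRecord` structure are illustrative assumptions, not part of any specific product:

```python
from dataclasses import dataclass, field

@dataclass
class AgentRecord:
    """One inventory entry per AI agent (illustrative fields only)."""
    agent_id: str
    created_by: str                     # who created it
    business_unit: str                  # where it lives
    connected_systems: list[str] = field(default_factory=list)
    data_scopes: list[str] = field(default_factory=list)  # what data it can access

def inventory_report(agents: list[AgentRecord]) -> dict:
    """Answer the fundamental questions: how many agents exist,
    who created them, and what they are connected to."""
    return {
        "total_agents": len(agents),
        "creators": sorted({a.created_by for a in agents}),
        "systems_touched": sorted({s for a in agents for s in a.connected_systems}),
    }

# Example: two agents created inside different business units.
agents = [
    AgentRecord("a1", "alice", "sales", ["CRM"], ["contacts"]),
    AgentRecord("a2", "bob", "finance", ["ERP", "CRM"], []),
]
report = inventory_report(agents)
```

If a report like this cannot be produced, the answers are unclear by definition, and control does not exist.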
Scroll to timestamp ~00:02:55 for more on this topic.
- Agents are not humans, but they are not simple applications either. They are designed to reason like users and authenticate like workloads. That dual nature changes how they must be governed.
Each agent needs a distinct identity, defined permissions, and enforced least privilege. If an agent can read sensitive data, modify records, or trigger workflows, it must be treated as a first-class security principal. Extending identity governance to autonomous systems is not optional. It is foundational.
Scroll to timestamp ~00:04:45 for more on this topic.
- One of the most important controls in AI governance is human sponsorship.
Every agent should have a clearly designated human sponsor responsible for reviewing permissions, validating continued need, and overseeing lifecycle changes. Without ownership, agents become active but unmanaged. Governance models that embed accountability prevent long-term drift and reduce the risk of silent privilege accumulation.
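The sponsorship rule above can be sketched as a simple check. The 90-day review cadence and the function name `needs_attention` are assumptions for illustration; real review intervals should match your governance policy:

```python
from datetime import date, timedelta
from typing import Optional

REVIEW_INTERVAL = timedelta(days=90)  # assumed review cadence

def needs_attention(sponsor: Optional[str],
                    last_review: Optional[date],
                    today: date) -> bool:
    """Flag an agent that has no human sponsor, has never been
    reviewed, or whose permission review is overdue."""
    if sponsor is None:
        return True  # active but unmanaged: no accountable owner
    if last_review is None or today - last_review > REVIEW_INTERVAL:
        return True  # silent privilege accumulation risk
    return False
```

Running this check over the full agent inventory on a schedule is one way to catch long-term drift before it becomes exposure.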
Scroll to timestamp ~00:07:05 for more on this topic.
- Security leaders must understand how agents interact across the environment, and that means settings and permissions are only the start.
Agents increasingly connect to other agents, orchestrate workflows, and dynamically retrieve context from multiple systems. That interconnected behavior introduces new complexity and new attack paths. A control plane must provide visibility into relationships and behavior. Without that broader context, organizations are securing blind spots.
Scroll to timestamp ~00:03:45 for more on this topic.
- AI is a double-edged sword. It increases defender productivity while simultaneously accelerating adversary capabilities.
Threat actors are already using AI to scale familiar attack patterns. Inside the enterprise, over-permissioned agents can amplify the blast radius of simple mistakes, from oversharing to unintended data exposure. Zero Trust principles must extend to AI agents. Trust nothing. Verify everything, including autonomous systems operating on your behalf.
Scroll to timestamp ~00:10:20 for more on this topic.