
Cyber Pulse: An AI Security Report

Recent Microsoft first-party telemetry and research show that many organizations are adopting AI agents. Now is the time for leaders to implement observability, governance, and security.

Issue 1: Introduction

Frontier organizations are redefining how work gets done, with humans and agents operating side-by-side to elevate human ambition. Recent Microsoft data indicates that these human-agent teams are growing and becoming widely adopted globally.

AI agents are scaling faster than some companies can see them, and that visibility gap is a business risk. Organizations urgently need effective governance and security to safely adopt agents, promote innovation, and reduce risk. Like human users, AI agents require protection through observability, governance, and strong security built on Zero Trust principles. Enterprises that succeed in the next phase of AI adoption will be those that move with speed and bring business, IT, security, and developer teams together to observe, govern, and secure their AI transformation.

Across Microsoft’s ecosystem, customers are now building and deploying agents on every major platform—from Fabric and Foundry to Copilot Studio and Agent Builder—reflecting a broad shift toward AI-powered automation in the flow of work.

Agent building isn't limited to technical roles; today employees in various positions create and use agents in daily work. In fact, Microsoft data shows that over 80% of the Fortune 500 is deploying active agents built with low-code/no-code tools.1 With agent use expanding and transformation opportunities multiplying, now is the time to get foundational controls in place.

Just like for human employees, Zero Trust for agents means (a minimal code sketch follows this list):

  • Least privilege access: Give every user, AI agent, or system only what they need—no more.

  • Explicit verification: Always confirm who or what is requesting access, using signals such as identity, device health, location, and risk level.

  • Assume compromise can occur: Design systems expecting that attackers will get inside.
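
To ground these principles, here is a minimal Python sketch of what a Zero Trust access decision for an agent could look like. The AgentIdentity type, the decide_access function, and the risk signals are illustrative assumptions, not a Microsoft API.

    from dataclasses import dataclass, field

    @dataclass
    class AgentIdentity:
        agent_id: str
        owner: str
        risk_level: str                                    # e.g. "low", "medium", "high"
        allowed_scopes: set = field(default_factory=set)   # least-privilege grants

    def decide_access(agent: AgentIdentity, scope: str, token_valid: bool) -> bool:
        # Explicit verification: never trust a request implicitly.
        if not token_valid or agent.risk_level == "high":
            return False
        # Least privilege: only scopes the agent was explicitly granted.
        granted = scope in agent.allowed_scopes
        # Assume compromise: log every decision so misuse is auditable later.
        print(f"audit agent={agent.agent_id} scope={scope} granted={granted}")
        return granted

    bot = AgentIdentity("expense-bot", "finance-team", "low", {"read:invoices"})
    decide_access(bot, "read:invoices", token_valid=True)   # True
    decide_access(bot, "write:payroll", token_valid=True)   # False: never granted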


AI agents are scaling fast across all regions and industries

Agent adoption is accelerating across regions worldwide—from EMEA to the Americas and Asia.2
Figure: Regional distribution of active agents. EMEA 42%, United States 29%, Asia 19%, Americas 10%.
Microsoft is seeing agent growth across every industry. Financial services, manufacturing, and retail are leading in agent adoption. Financial services—including banking, capital markets, and insurance—now represents about 11% of all active agents worldwide.2 Manufacturing accounts for 13% of global agent usage, showing widespread adoption in factories, supply chains, and energy operations.2 Retail represents 9%, with agents used to improve customer experience, inventory management, and frontline processes.2

The catch: some of these agents are sanctioned by IT—others are not. Some of these agents are secure—others are not.
Figure: Industry distribution of active agents. Software and Technology 16%, Manufacturing 13%, Financial Services 11%, Retail 9%.

The risk of double agents

Rapid deployment of agents can outpace security and compliance controls, increasing the risk of shadow AI. Bad actors might exploit agents’ access and privileges, turning them into unintended “double agents.” Like human employees, an agent with too much access—or the wrong instructions—can become a vulnerability.

The threat that agents become double agents when left unmanaged, mis-permissioned, or manipulated by untrusted input isn’t theoretical. Recently, Microsoft’s Defender team identified a fraud campaign in which multiple actors used an AI attack technique known as “memory poisoning” to persistently manipulate AI assistants’ memory, quietly steering future responses and weakening trust in the system’s accuracy.

In independent research conducted in a secure testbed, Microsoft’s own AI Red Team documented how agents can be misled by deceptive interface elements and can follow harmful instructions embedded in everyday content. The Red Team also showed how agents’ reasoning can be subtly redirected through manipulated task framing. These findings illustrate why enterprises need full observability and management of every agent touching their environment, so that controls can be centrally enforced and risks centrally managed.
According to Microsoft’s Data Security Index, only 47% of organizations across industries report implementing GenAI-specific security controls,3 leaving most without the visibility needed for safe AI adoption. Even more striking, in a multinational survey of over 1,700 data security professionals commissioned by Microsoft from Hypothesis Group, 29% of employees have already turned to unsanctioned AI agents for work tasks.4

As adoption accelerates, how many leaders recognize the underlying risk? How many leaders have visibility into their agent population? Unsupervised or ungoverned AI agents can compound risks in the enterprise—threatening security, business continuity, and reputation, all landing squarely at the feet of the CISO and the C-suite. This is the heart of a cyber risk dilemma. AI agents are bringing new opportunities to the workplace and are becoming woven into internal operations. But an agent’s risky behavior can amplify threats from within and create new failure modes for organizations unprepared to manage them.

 

The dual nature of AI has arrived: extraordinary innovation paired with unprecedented risks.

Getting the most out of your AI agents

Frontier firms are using the AI wave to modernize governance, cut unnecessary data exposure, and deploy enterprise-wide controls. They’re pairing that with a cultural shift: business leaders may own the AI strategy, but IT and security teams are now true partners in observability, governance, and safe experimentation. For these organizations, securing agents isn’t a constraint—it’s a competitive advantage, built on treating AI agents like humans and applying the same Zero Trust principles.

It starts with observability: you can’t protect what you can’t see, and you can’t manage what you don’t understand. Observability means having a control plane across all layers of the organization (IT, security, developers, and AI teams) to understand the following (a minimal inventory sketch in code follows this list):

  • What agents exist

  • Who owns them

  • What systems and data they touch

  • How they behave
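
As a hedged illustration, an inventory record can capture those four questions directly. The AgentRecord fields and the sample agents below are assumptions made for the sketch, not a product schema.

    from dataclasses import dataclass

    @dataclass
    class AgentRecord:
        agent_id: str            # what agents exist
        owner: str               # who owns them
        data_sources: list       # what systems and data they touch
        runs_last_28_days: int   # how they behave (activity signal)
        sanctioned: bool         # IT-approved, or a shadow agent

    inventory = [
        AgentRecord("invoice-bot", "finance-team", ["ERP", "SharePoint"], 412, True),
        AgentRecord("quick-helper", "unknown", ["Mailbox"], 9, False),
    ]

    # A basic visibility query: surface shadow agents for review or quarantine.
    shadow = [a.agent_id for a in inventory if not a.sanctioned]
    print("unsanctioned agents:", shadow)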


 

Observability includes five core areas:

  • A centralized registry acts as a single source of truth for all agents across the organization—sanctioned, third‑party, and emerging shadow agents. This inventory helps prevent agent sprawl, enables accountability, and supports discovery while allowing unsanctioned agents to be restricted or quarantined when necessary.
  • Each agent is governed using the same identity‑ and policy‑driven access controls applied to human users and applications. Least‑privilege permissions, enforced consistently, ensure agents can access only the data, systems, and workflows required to fulfill their purpose—no more, no less.
  • Real‑time dashboards and telemetry provide insight into how agents interact with people, data, and systems. Leaders can see where agents operate, understand dependencies, and monitor behavior and impact, supporting faster detection of misuse, drift, or emerging risk (a minimal telemetry sketch follows this list).
  • Agents operate across Microsoft platforms, open‑source frameworks, and third‑party ecosystems under a consistent governance model. This interoperability allows agents to collaborate with people and other agents across workflows while remaining managed within the same enterprise controls.
  • Built‑in protections safeguard agents from internal misuse and external threats. Security signals, policy enforcement, and integrated tooling help organizations detect compromised or misaligned agents early and respond quickly—before issues escalate into business, regulatory, or reputational harm.
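
One way to picture the telemetry and protection ideas, sketched in Python under assumed names and event shapes: compare what an agent actually touched against what its registry entry permits, and alert on the difference.

    # Registry grants (assumed shape), keyed by agent ID.
    registered_scopes = {"invoice-bot": {"ERP", "SharePoint"}}

    # Simplified telemetry events; real signals would come from audit logs.
    observed_events = [
        {"agent": "invoice-bot", "resource": "ERP"},
        {"agent": "invoice-bot", "resource": "HR-database"},  # outside its grant
        {"agent": "mystery-bot", "resource": "CRM"},          # not registered at all
    ]

    for event in observed_events:
        allowed = registered_scopes.get(event["agent"], set())
        if event["resource"] not in allowed:
            # Early signal of drift, misuse, or a shadow agent: escalate quickly.
            print(f"ALERT: {event['agent']} touched {event['resource']} without a grant")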

Closing the gap: AI governance and security checklist

The path to reining in AI risks is clear: treat AI agents with the same rigor as any employee or software service account. Here are seven things for your checklist.
  • Document each agent’s purpose and grant it access only to what it needs. No broad privileges.
  • Apply data protection rules to AI channels. Maintain audit trails and label AI-generated content (a minimal sketch follows this list).
  • Offer secure alternatives to curb shadow AI. Block unauthorized apps.
  • Update Business Continuity playbooks for AI scenarios. Run tabletop exercises and track observability metrics across identity, data, and threat planes.
  • Build AI governance and self-regulate now—document training data, assess bias, and establish human oversight across legal, data, and security—so regulatory compliance is built in, not bolted on later.
  • Elevate AI risk to the enterprise level—establishing executive accountability, measurable KPIs, and board-level visibility alongside financial and operational risk.
  • Train employees on safe AI use. Encourage transparency and collaboration.
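
To illustrate two of the items above, here is a minimal sketch of labeling AI-generated content and keeping an audit trail. The label text and log fields are assumptions, not a prescribed format.

    import json
    from datetime import datetime, timezone

    audit_log: list[str] = []

    def label_and_log(agent_id: str, content: str) -> dict:
        record = {
            "agent": agent_id,
            "label": "AI-generated",                          # provenance label
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "content": content,
        }
        audit_log.append(json.dumps(record))                  # append-only trail
        return record

    label_and_log("summary-agent", "Q3 revenue summary draft ...")
    print(audit_log[0])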

Organizations that succeed with AI agents will be those that prioritize observability, governance, and security. Achieving this requires collaboration across teams and observability of AI agents at every layer of the organization: IT professionals, security teams, AI teams, and developers, all managed and observed through a unified central control platform.

Agent 365 is Microsoft’s unified control plane for managing AI agents across an organization. It provides a centralized, enterprise-grade system to register, govern, secure, observe, and operate AI agents—whether they are built on Microsoft platforms, open-source frameworks, or third-party systems. More information on Microsoft products and services that help secure AI and agents is available here.
Securing AI guides

Microsoft guides for securing the AI-powered enterprise

Get guidance to help build a strong foundation for AI, including how to govern and secure AI applications, and help maintain compliance.

More resources


Microsoft Data Security Index

Get insights on how generative AI is reshaping data security.

Accelerating opportunity with trusted AI

Hear Microsoft leaders share practical ways to build trust into every stage of your AI journey.

Get the CISO Digest

Stay ahead with expert insights, industry trends, and security research in this bimonthly email series.
  1. Based on Microsoft first-party telemetry measuring agents built with Microsoft Copilot Studio or Microsoft Agent Builder that were in use during the last 28 days of November 2025.
  2. Industry and regional agent metrics were created using Microsoft first-party telemetry measuring agents built with Microsoft Copilot Studio or Microsoft Agent Builder that were in use during the last 28 days of November 2025.
  3. July 2025 multinational survey of over 1,700 data security professionals commissioned by Microsoft from Hypothesis Group.
  4. Microsoft Data Security Index 2026: Unifying Data Protection and AI Innovation, Microsoft Security, 2026.

     

Methodology:

Industry and regional agent metrics were created using Microsoft first-party telemetry measuring agents built with Microsoft Copilot Studio or Microsoft Agent Builder that were in use during the last 28 days of November 2025.

2026 Data Security Index: A 25-minute multinational online survey was conducted from July 16 to August 11, 2025, among 1,725 data security leaders. Questions centered on the data security landscape, data security incidents, securing employee use of GenAI, and the use of GenAI in data security programs, to enable comparisons with 2024. One-hour in-depth interviews were also conducted with 10 data security leaders in the US and UK to gather stories about how they are approaching data security in their organizations.

Definitions:

Active agents are (1) deployed to production and (2) have some “real activity” associated with them in the past 28 days. “Real activity” is defined as at least one engagement with a user (assistive agents) or at least one autonomous run (autonomous agents).
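
That definition translates directly into a simple predicate. The Python sketch below uses illustrative parameter names rather than actual telemetry fields.

    def is_active(deployed_to_prod: bool, user_engagements_28d: int,
                  autonomous_runs_28d: int) -> bool:
        # "Real activity": 1+ user engagement OR 1+ autonomous run in 28 days.
        real_activity = user_engagements_28d >= 1 or autonomous_runs_28d >= 1
        return deployed_to_prod and real_activity

    print(is_active(True, 0, 3))   # True: autonomous agent with recent runs
    print(is_active(True, 0, 0))   # False: deployed but idle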
