
Organizations are rapidly adopting Copilot Studio agents, but threat actors are equally fast at exploiting misconfigured AI workflows. Mis-sharing, unsafe orchestration, and weak authentication create new identity and data‑access paths that traditional controls don’t monitor. As AI agents become integrated into operational systems, exposure becomes both easier and more dangerous. Understanding and detecting these misconfigurations early is now a core part of AI security posture.

Copilot Studio agents are becoming a core part of business workflows: automating tasks, accessing data, and interacting with systems at scale.

That power cuts both ways. In real environments, we repeatedly see small, well-intentioned configuration choices turn into security gaps: agents shared too broadly, exposed without authentication, running risky actions, or operating with excessive privileges. These issues rarely look dangerous until they are abused.

If you want to find and stop these risks before they turn into incidents, this post is for you. We break down ten common Copilot Studio agent misconfigurations we observe in the wild and show how to detect them using Microsoft Defender and Advanced Hunting via the relevant Community Hunting Queries.

Short on time? Start with the table below. It gives you a one‑page view of the risks, their impact, and the exact detections that surface them. If something looks familiar, jump straight to the relevant scenario and mitigation.

Each section then dives deeper into a specific risk and recommended mitigations, so you can move from awareness to action, fast.

| # | Misconfiguration & Risk | Security Impact | Advanced Hunting Community Queries (Security portal > Advanced hunting > Queries > Community queries > AI Agent folder) |
| --- | --- | --- | --- |
| 1 | Agent shared with entire organization or broad groups | Unintended access, misuse, expanded attack surface | AI Agents – Organization or Multi-tenant Shared |
| 2 | Agents that do not require authentication | Public exposure, unauthorized access, data leakage | AI Agents – No Authentication Required |
| 3 | Agents with HTTP Request actions using risky configurations | Governance bypass, insecure communications, unintended API access | AI Agents – HTTP Requests to connector endpoints; AI Agents – HTTP Requests to non-HTTPS endpoints; AI Agents – HTTP Requests to non-standard ports |
| 4 | Agents capable of email-based data exfiltration | Data exfiltration via prompt injection or misconfiguration | AI Agents – Sending email to AI-controlled input values; AI Agents – Sending email to external mailboxes |
| 5 | Dormant connections, actions, or agents | Hidden attack surface, stale privileged access | AI Agents – Published Dormant (30d); AI Agents – Unpublished Unmodified (30d); AI Agents – Unused Actions; AI Agents – Dormant Author Authentication Connection |
| 6 | Agents using author (maker) authentication | Privilege escalation, separation-of-duties bypass | AI Agents – Published Agents with Author Authentication; AI Agents – MCP Tool with Maker Credentials |
| 7 | Agents containing hard-coded credentials | Credential leakage, unauthorized system access | AI Agents – Hard-coded Credentials in Topics or Actions |
| 8 | Agents with Model Context Protocol (MCP) tools configured | Undocumented access paths, unintended system interactions | AI Agents – MCP Tool Configured |
| 9 | Agents with generative orchestration lacking instructions | Prompt abuse, behavior drift, unintended actions | AI Agents – Published Generative Orchestration without Instructions |
| 10 | Orphaned agents (no active owner) | Lack of governance, outdated logic, unmanaged access | AI Agents – Orphaned Agents with Disabled Owners |

Top 10 risks you can detect and prevent

Imagine this scenario: A help desk agent is created in your organization with simple instructions.

The maker, someone from the support team, connects it to an organizational Dataverse using an MCP tool, so it can pull relevant customer information from internal tables and provide better answers. So far, so good.

Then the maker decides, on their own, that the agent doesn’t need authentication. After all, it’s only shared internally, and the data belongs to employees anyway (See example in Figure 1). That might already sound suspicious to you. But it doesn’t to everyone.

You might be surprised how often agents like this exist in real environments and how rarely security teams get an active signal when they’re created. No alert. No review. Just another helpful agent quietly going live.

Now here’s the question: Out of the 10 risks described in this article, how many do you think are already present in this simple agent?

The answer comes at the end of the blog.

Figure 1 – Example Help Desk agent.

1: Agent shared with the entire organization or broad groups

Sharing an agent with your entire organization or broad security groups exposes its capabilities without proper access boundaries. While convenient, this practice expands the attack surface. Users unfamiliar with the agent’s purpose might unintentionally trigger sensitive actions, and threat actors with minimal access could use the agent as an entry point.

In many organizations, this risk occurs because broad sharing is fast and easy, often lacking controls to ensure only the right users have access. This results in agents being visible to everyone, including users with unrelated roles or inappropriate permissions. This visibility increases the risk of data exposure, misuse, and unintended activation of sensitive connectors or actions.
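The community query AI Agents – Organization or Multi-tenant Shared surfaces these agents directly. As a rough illustration of the logic involved, the sketch below flags agents whose sharing scope covers the whole tenant or very large groups; AiAgentInventory and its columns are placeholder names invented for this example, not the real Advanced Hunting schema.

  // Illustrative sketch only: table and column names are hypothetical
  AiAgentInventory
  | where SharingScope in ("Organization", "EveryoneInTenant")
      or SharedWithGroupSize > 500   // tune the group-size threshold to your environment
  | project AgentName, OwnerUpn, SharingScope, SharedWithGroupSize, LastPublishedTime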

2: Agents that do not require authentication

Agents that you can access without authentication, or that only prompt for authentication on demand, create a significant exposure point. When an agent is publicly reachable or unauthenticated, anyone with the link can use its capabilities. Even if the agent appears harmless, its topics, actions, or knowledge sources might unintentionally reveal internal information or allow interactions that were never intended for public access.

This gap often appears because authentication was deactivated for testing, left in its default state, or misunderstood as optional. The result is an agent that behaves like a public entry point into organizational data or logic. Without proper controls, this creates a risk of data leakage, unintended actions, and misuse by external or anonymous users.
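The community query AI Agents – No Authentication Required finds these agents. A minimal sketch of the same idea, using hypothetical table and column names (the shipped query relies on the actual schema):

  // Illustrative sketch only: published agents that accept unauthenticated access
  AiAgentInventory
  | where IsPublished == true
  | where AuthenticationMode in ("None", "NoAuthentication")   // hypothetical values
  | project AgentName, OwnerUpn, AuthenticationMode, PublishedChannels, LastPublishedTime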

3: Agents with HTTP Request actions using risky configurations

Agents that perform direct HTTP requests introduce unique risks, especially when those requests target non-standard ports, insecure schemes, or sensitive services that already have built-in Power Platform connectors. These patterns often bypass the governance, validation, throttling, and identity controls that connectors provide. As a result, they can expose the organization to misconfigurations, information disclosure, or unintended privilege escalation.

These configurations often appear unintentionally. A maker might copy a sample request, test an internal endpoint, or use HTTP actions for flexibility and convenience during testing. Without proper review, this can lead to agents issuing unsecured calls over HTTP or invoking critical Microsoft APIs directly through URLs instead of secured connectors. Each of these behaviors represents an opportunity for misuse or accidental exposure of organizational data.
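The three HTTP-related community queries cover these cases. To show the kind of logic they apply, here is a hedged sketch that parses each action's target URL and flags insecure schemes or non-standard ports; AiAgentActions and its columns are placeholders for illustration.

  // Illustrative sketch only: HTTP Request actions with risky targets
  AiAgentActions
  | where ActionType == "HttpRequest"
  | extend Url = parse_url(TargetUrl)
  | extend Scheme = tostring(Url.Scheme), Port = toint(Url.Port)
  | where Scheme =~ "http"                              // unencrypted traffic
      or (isnotnull(Port) and Port !in (80, 443))       // non-standard port
  | project AgentName, ActionName, TargetUrl, Scheme, Port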

4: Agents capable of email-based data exfiltration

Agents that send emails using dynamic or externally controlled inputs present a significant risk. When an agent uses generative orchestration to send email, the orchestrator determines the recipient and message content at runtime. In a successful cross-prompt injection (XPIA) attack, a threat actor could instruct the agent to send internal data to external recipients.

A similar risk exists when an agent is explicitly configured to send emails to external domains. Even for legitimate business scenarios, unaudited outbound email can allow sensitive information to leave the organization. Because email is an immediate outbound channel, any misconfiguration can lead to unmonitored data exposure.

Many organizations create this gap unintentionally. Makers often use email actions for testing, notifications, or workflow automation without restricting recipient fields. Without safeguards, these agents can become exfiltration channels for any user who triggers them or for a threat actor exploiting generative orchestration paths.
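The two email-related community queries detect both patterns. As an illustration of the underlying checks, the sketch below flags email actions whose recipient is resolved at runtime or falls outside an approved domain list; the table, columns, and contoso.com domain are placeholders.

  // Illustrative sketch only: email actions with dynamic or external recipients
  AiAgentActions
  | where ActionType == "SendEmail"
  | extend RecipientDomain = tostring(split(RecipientAddress, "@")[1])
  | where isempty(RecipientAddress)                     // recipient chosen by the model at runtime
      or RecipientDomain !in ("contoso.com")            // replace with your accepted domains
  | project AgentName, ActionName, RecipientAddress, RecipientDomain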

5: Dormant connections, actions, or agents within the organization

Dormant agents and unused components might seem harmless, but they can create significant organizational risk. These unmonitored entry points often lack active ownership and include agents that haven't been invoked for weeks, unpublished drafts, or actions using maker authentication. When these elements stay in your environment without oversight, they might contain outdated logic or sensitive connections that don't meet current security standards.

Dormant assets are especially risky because they often fall outside normal operational visibility. While teams focus on active agents, older configurations are easily forgotten. Threat actors frequently target exactly these blind spots. For example:

  • A published but unused agent can still be called.
  • A dormant maker-authenticated action might trigger elevated operations.
  • Unused actions in classic orchestration can expose sensitive connectors if they are activated.

Without proper governance, these artifacts remain a hidden attack surface, waiting for someone, legitimate or not, to put them back into use.
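The four dormancy-focused community queries enumerate these assets. Conceptually, the detection joins the agent inventory against invocation telemetry and keeps anything untouched for 30 days, roughly as in the hypothetical sketch below (table and column names are illustrative only).

  // Illustrative sketch only: published agents with no invocations in 30 days
  AiAgentInventory
  | where IsPublished == true
  | join kind=leftouter (
      AiAgentInvocations
      | summarize LastInvocation = max(Timestamp) by AgentId
    ) on AgentId
  | where isnull(LastInvocation) or LastInvocation < ago(30d)
  | project AgentName, OwnerUpn, LastPublishedTime, LastInvocation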

6: Agents using author authentication

When agents use the maker’s personal authentication, they act on behalf of the creator rather than the end user.  In this configuration, every user of the agent inherits the maker’s permissions. If those permissions include access to sensitive data, privileged operations, or high impact connectors, the agent becomes a path for privilege escalation.

This exposure often happens unintentionally. Makers might allow author authentication for convenience during development or testing because it is the default setting of certain tools. However, once published, the agent continues to run with elevated permissions even when invoked by regular users. In more severe cases, Model Context Protocol (MCP) tools configured with maker credentials allow threat actors to trigger operations that rely directly on the creator’s identity.

Author authentication weakens separation of duties and bypasses the principle of least privilege. It also increases the risk of credential misuse, unauthorized data access, and unintended lateral movement.
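The community queries AI Agents – Published Agents with Author Authentication and AI Agents – MCP Tool with Maker Credentials surface this pattern. A simplified sketch of the idea, again with hypothetical table and column names:

  // Illustrative sketch only: published agents whose connections use the maker's identity
  AiAgentConnections
  | where AuthenticationType == "AuthorProvided"        // hypothetical value
  | join kind=inner (AiAgentInventory | where IsPublished == true) on AgentId
  | project AgentName, OwnerUpn, ConnectorName, AuthenticationType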

7: Agents containing hard-coded credentials

Agents that contain hard-coded credentials inside topics or actions introduce a severe security risk. Clear-text secrets embedded directly in agent logic can be read, copied, or extracted by unintended users or automated systems. This often occurs when makers paste API keys, authentication tokens, or connection strings during development or debugging, and the values remain embedded in the production configuration. Such credentials can expose access to external services, internal systems, or sensitive APIs, enabling unauthorized access or lateral movement.

Beyond the immediate leakage risk, hard-coded credentials bypass the standard enterprise controls normally applied to secure secret storage. They are not rotated, not governed by Key Vault policies, and not protected by environment variable isolation. As a result, even basic visibility into agent definitions may expose valuable secrets.
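The community query AI Agents – Hard-coded Credentials in Topics or Actions looks for exactly this. As a rough illustration, a detection of this type scans agent definitions for secret-like strings, similar to the hypothetical sketch below (table, columns, and the regular expression are examples, not the shipped logic).

  // Illustrative sketch only: secret-like strings inside agent definitions
  AiAgentDefinitions
  | extend Definition = tostring(DefinitionJson)
  | where Definition matches regex @"(?i)(api[_-]?key|client[_-]?secret|password|bearer\s+[a-z0-9\-_\.]{20,})"
  | project AgentName, ComponentName, OwnerUpn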

8: Agents with Model Context Protocol (MCP) tools configured

AI agents that include Model Context Protocol (MCP) tools provide a powerful way to integrate with external systems or run custom logic. However, if these MCP tools aren’t actively maintained or reviewed, they can introduce undocumented access patterns into the environment.

This risk arises when MCP configurations are:

  • Activated by default
  • Copied between agents
  • Left active after the original integration is no longer needed

Unmonitored MCP tools might expose capabilities that exceed the agent's intended purpose. This is especially true if they allow access to privileged operations or sensitive data sources. Without regular oversight, these tools can become hidden entry points through which users or threat actors might trigger unintended system interactions.
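The community query AI Agents – MCP Tool Configured inventories these tools. A minimal sketch of that inventory logic, with placeholder table and column names:

  // Illustrative sketch only: published agents with MCP tools attached
  AiAgentTools
  | where ToolType == "ModelContextProtocol"            // hypothetical value
  | join kind=inner (AiAgentInventory | where IsPublished == true) on AgentId
  | project AgentName, OwnerUpn, ToolName, ToolEndpoint, LastModifiedTime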

9: Agents with generative orchestration lacking instructions

AI agents that use generative orchestration without defined instructions face a high risk of unintended behavior. Instructions are the primary way to align a generative model with its intended purpose. If instructions are missing, incomplete, or misconfigured, the orchestrator lacks the context needed to limit its output. This makes the agent more vulnerable to influence from user inputs or hostile prompts.

A lack of guidance can cause an agent to:

  • Drift from its expected behaviors. The agent might not follow its intended logic.
  • Use unexpected reasoning. The model might follow logic paths that don’t align with business needs.
  • Interact with connected systems in unintended ways. The agent might trigger actions that were never planned.

For organizations that need predictable and safe behavior, missing instructions are a significant configuration gap.
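The community query AI Agents – Published Generative Orchestration without Instructions targets this gap. The core check is simple, roughly as in this hypothetical sketch (placeholder names throughout):

  // Illustrative sketch only: generative agents published with empty instructions
  AiAgentInventory
  | where IsPublished == true and OrchestrationMode == "Generative"
  | where isempty(Instructions)
  | project AgentName, OwnerUpn, OrchestrationMode, LastPublishedTime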

10: Orphaned agents

Orphaned agents are agents whose owners have left the organization or whose accounts have been deactivated. Without a valid owner, no one is responsible for oversight, maintenance, updates, or lifecycle management. These agents might continue to run, interact with users, or access data without an accountable individual ensuring the configuration remains secure.

Because ownerless agents bypass standard review cycles, they often contain outdated logic, deprecated connections, or sensitive access patterns that don’t align with current organizational requirements.
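The community query AI Agents – Orphaned Agents with Disabled Owners correlates agent ownership with account status. A hedged sketch of that correlation is below: AiAgentInventory is a placeholder, and the join assumes the IdentityInfo table in your tenant exposes AccountUpn and IsAccountEnabled.

  // Illustrative sketch only: agents whose owner account is disabled in the directory
  AiAgentInventory
  | join kind=inner (
      IdentityInfo
      | summarize arg_max(Timestamp, IsAccountEnabled) by AccountUpn
    ) on $left.OwnerUpn == $right.AccountUpn
  | where tostring(IsAccountEnabled) =~ "false"         // handles bool or string representations
  | project AgentName, OwnerUpn, LastPublishedTime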

Remember the help desk agent we started with? That simple agent setup quietly checked off more than half of the risks in this list.

Keep reading, and run the Advanced Hunting queries in the AI Agent folder to find agents carrying these risks in your own environment before it's too late.

Figure 2: The example Help Desk agent was detected by a query for unauthenticated agents.

From findings to fixes: A practical mitigation playbook

The 10 risks described above manifest in different ways, but they consistently stem from a small set of underlying security gaps: over‑exposure, weak authentication boundaries, unsafe orchestration, and missing lifecycle governance.

Figure 3 – Underlying security gaps.

Damage doesn’t begin with the attack. It starts when risks are left untreated.

The section below is a practical checklist of validations and actions that help close common agent security gaps before they’re exploited. Read it once, apply it consistently, and save yourself the cost of cleaning up later. Fixing security debt is always more expensive than preventing it.

1. Verify intent and ownership

Before changing configurations, confirm whether the agent’s behavior is intentional and still aligned with business needs.

  • Validate the business justification for broad sharing, public access, external communication, or elevated permissions with the agent owner.
  • Confirm whether agents without authentication are explicitly designed for public use and whether this aligns with organizational policy.
  • Review agent topics, actions, and knowledge sources to ensure no internal, sensitive, or proprietary information is exposed unintentionally.
  • Ensure every agent has an active, accountable owner. Reassign ownership for orphaned agents or retire agents that no longer have a clear purpose. For step-by-step instructions, see Microsoft Copilot Studio: Agent ownership reassignment.
  • Validate whether dormant agents, connections, or actions are still required, and decommission those that are not.
  • Perform periodic reviews for agents and establish a clear organizational policy for agent creation. For more information, see Configure data policies for agents.

2. Reduce exposure and tighten access boundaries

Most Copilot Studio agent risks are amplified by unnecessary exposure. Reducing who can reach the agent, and what it can reach, significantly lowers risk.

  • Restrict agent sharing to well‑scoped, role‑based security groups instead of entire organizations or broad groups. See Control how agents are shared.
  • Establish and enforce organizational policies defining when broad sharing or public access is allowed and what approvals are required.
  • Enforce full authentication by default. Only allow unauthenticated access when explicitly required and approved. For more information, see Configure user authentication.
  • Limit outbound communication paths:
    • Restrict email actions to approved domains or hard‑coded recipients.
    • Avoid AI‑controlled dynamic inputs for sensitive outbound actions such as email or HTTP requests.
  • Perform periodic reviews of shared agents to ensure visibility and access remain appropriate over time.

3. Enforce strong authentication and least privilege

Agents must not inherit more privilege than necessary, especially through development shortcuts.

  • Replace author (maker) authentication with user-based or system-based authentication wherever possible. For more information, see Control maker-provided credentials for authentication and Configure user authentication for actions.

  • Review all actions and connectors that run under maker credentials and reconfigure those that expose sensitive or high‑impact services.
  • Audit MCP tools that rely on creator credentials and remove or update them if they are no longer required.
  • Apply the principle of least privilege to all connectors, actions, and data access paths, even when broad sharing is justified.

4. Harden orchestration and dynamic behavior

Generative agents require explicit guardrails to prevent unintended or unsafe behavior.

  • Ensure clear, well‑structured instructions are configured for generative orchestration to define the agent’s purpose, constraints, and expected behavior. For more information, see Orchestrate agent behavior with generative AI.
  • Avoid allowing the model to dynamically decide:
    • Email recipients
    • External endpoints
    • Execution logic for sensitive actions
  • Review HTTP Request actions carefully:
    • Confirm endpoint, scheme, and port are required for the intended use case.
    • Prefer built‑in Power Platform connectors over raw HTTP requests to benefit from authentication, governance, logging, and policy enforcement.
    • Enforce HTTPS and avoid non‑standard ports unless explicitly approved.

5. Eliminate dead weight and protect secrets

Unused capabilities and embedded secrets quietly expand the attack surface.

  • Remove or deactivate:
    • Dormant agents
    • Unpublished or unmodified agents
    • Unused actions
    • Stale connections
    • Outdated or unnecessary MCP tool configurations
  • Clean up Maker‑authenticated actions and classic orchestration actions that are no longer referenced.
  • Move all secrets to Azure Key Vault and reference them via environment variables instead of embedding them in agent logic.
  • When Key Vault usage is not feasible, enable secure input handling to protect sensitive values.
  • Treat agents as production assets, not experiments, and include them in regular lifecycle and governance reviews.

Effective posture management is essential for maintaining a secure and predictable Copilot Studio environment. As agents grow in capability and integrate with increasingly sensitive systems, organizations must adopt structured governance practices that identify risks early and enforce consistent configuration standards.

The scenarios and detection rules presented in this blog provide a foundation to help you:

  • Discover common security gaps
  • Strengthen oversight
  • Reduce the overall attack surface

By combining automated detection with clear operational policies, you can ensure that your Copilot Studio agents remain secure, aligned, and resilient.

This research is provided by Microsoft Defender Security Research with contributions from Dor Edry and Uri Oren.
