
Inside the Security Forum: How leaders are preparing for the agentic era

The signal you might have missed

Security used to have cycles. Now it has velocity. At the 2025 Microsoft Ignite Security Forum, one message cut through everything else: AI and agents are rewriting both threat actor tradecraft and defender expectations, and the organizations that adapt early will have a real advantage. This year, our stage featured an array of security and IT experts comparing notes on what’s changing, what’s not, and what needs to be confronted head-on.

Security Forum is designed to set the stage for the week ahead at Ignite and to provide the “why” behind the “what” of our latest innovations. In case you missed it, those innovations are centered on the shift to ambient, autonomous, and agentic computing. Let’s dive into some of the discussions and guidance shared by leaders and practitioners from the stage.

Resetting the security mindset for the AI and agent era

Vasu Jakkal, Corporate Vice President, Microsoft Security Business, opened Security Forum by reminding the room how quickly the ground has shifted. Two years after generative AI “rocked our world,” organizations are already operating at agent scale: 82% of leaders are deploying or planning to deploy AI agents, with IDC projecting 1.3 billion agents by 2028.[1]

And threat actors are moving just as fast, using AI to boost phishing effectiveness 4.5x and to exploit new surfaces like prompts, context windows, and automated workflows.[2] Her point was direct: AI is accelerating both opportunity and risk, and defenders no longer have the luxury of sequential adoption curves.

Trust, she argued, is the real unlock for the agentic era. Microsoft’s Secure Future Initiative, now spanning 34,000 engineers, is designed to harden identity, data, and systems so AI can operate securely by default. And while Microsoft sees more than 100 trillion signals per day,[2] the signal volume only matters if defenders can act at the speed AI demands. As she put it: “To unlock the potential of agentic AI, we must start with trust, and security is the root of that trust.”
Vasu’s guidance to leaders was crisp and grounded:
  • Treat AI and agent risk as a now problem: The attack surface is already here and shouldn’t wait for a future maturity curve.
  • Build trust into every AI system: Identity, governance, data controls, telemetry, and oversight must be foundations, not retrofits.
  • Elevate speed as a strategic advantage: Slow detection, slow containment, or slow decision-making will break resilience in an agent-driven world.

The threat landscape you’re defending against

Watch: Threat briefing: The state of cybercrime and AI’s double-edged sword

Sherrod DeGrippo, Director of Threat Intelligence Strategy, opened with a direct assessment of the current threat environment: “Every year threats move faster… threat actors have new techniques, new levels of sophistication.” That pattern repeats year over year; what’s new is that AI has pushed us beyond it, accelerating all three: the speed, the techniques, and the sophistication.

She reframed the landscape not as isolated groups or individual campaigns, but as a fully interconnected marketplace, “an entire cottage industry of interconnected capabilities.” Crime, she emphasized, has become operationalized at scale. Professionalized threat actors are using professional tools, and AI is now woven directly into the tradecraft, accelerating reconnaissance, phishing, and malware development in ways defenders must confront head-on.

Crane Hassold, Principal Security Researcher, carried that thread forward by highlighting the deeper shift beneath it: information has become the primary prize.

Social engineering remains the dominant vector, “the same tactics… used for literally thousands of years” now automated through AI, and the entire ecosystem has reorganized around data theft rather than disruption.

In his words, “information has really become the crown jewel” for cybercriminals, with 80% of Microsoft’s incident response engagements in the past year involving some form of data collection. Crane outlined how credentials from a single compromised user can fuel phishing, cloud pivots, account takeovers, intelligence collection, and business email compromise, all amplified by a thriving as-a-service economy that includes initial access brokers, phishing-as-a-service, and modular malware kits.
Sherrod and Crane offered the following guidance based on what we see across the threat landscape:
  • Build explicit oversight into AI-enabled security processes: Secure-by-default means nothing without human validation.
  • Expect adversarial manipulation of agentic AI: Threat actors can bias datasets, alter signals, or weaponize automation. Governance and transparency must be active defense mechanisms.
  • Don’t underestimate the basics: Phishing, credential theft, malware, and deepfakes are the same attacks, now just faster, cleaner, and more automated.
  • Defensive upside remains strong: AI doesn’t change infrastructure-level detection signals; defenders can still identify malicious origins, context, and behavior.

How security leaders are operationalizing AI today

Watch: The impact of AI on security strategy

One line from our security leader panel set the tone for the entire session:
“We’re no longer just defending against hackers; we’re defending against AI-powered adversaries,” said John Israel, Global CISO, KPMG.

Moderated by Steve Dispensa, CVP for Microsoft’s Security Solution Area, the conversation with Uma Arjunan (SVP & Global Head of Engineering – Foundational Platforms, Ford) and John Israel was less about AI hype and more about what’s actually happening inside large, complex enterprises.

Both leaders agreed: the threat landscape has leveled up. AI is supercharging phishing, deepfakes, and vulnerability exploitation, collapsing the time from discovery to weaponization into hours.

At Ford, Uma sees this play out across a deeply interconnected environment of cloud, SaaS, IT, and manufacturing systems, where the human layer of identity, trust, and process has become one of the most critical attack surfaces.

The good news: defenders are using the same technology to fight back.
KPMG’s approach is threefold: secure from AI threats, secure with AI, and secure the AI itself. As “client zero,” they deploy AI internally first, then take those lessons to customers.

Ford is treating security as a data problem. AI correlates and prioritizes millions of alerts, with tools like Microsoft Copilot surfacing context and recommended actions, helping teams move from overwhelmed to proactive.
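Treating security as a data problem can be illustrated with a minimal sketch. The field names, severity weights, and scoring rule below are assumptions for illustration only; they are not Ford’s pipeline or Microsoft Copilot’s actual logic.

```python
from collections import defaultdict

# Illustrative only: field names and weights are assumptions,
# not any vendor's or customer's actual alert model.
SEVERITY_WEIGHT = {"low": 1, "medium": 3, "high": 7, "critical": 10}

def correlate(alerts):
    """Group raw alerts by (entity, technique) so one incident
    surfaces once instead of thousands of times."""
    groups = defaultdict(list)
    for a in alerts:
        groups[(a["entity"], a["technique"])].append(a)
    return groups

def prioritize(groups):
    """Score each group by its peak severity plus alert volume,
    and return groups as (score, key, count), highest score first."""
    scored = []
    for key, items in groups.items():
        score = max(SEVERITY_WEIGHT[a["severity"]] for a in items) + len(items)
        scored.append((score, key, len(items)))
    return sorted(scored, reverse=True)
```

The design choice is the one the panel described: correlation collapses duplicate signals about the same entity and technique into a single incident, and scoring pushes the most severe, noisiest clusters to the top of the analyst’s queue.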
The panel closed with clear guidance for security leaders:
  • Treat AI and your data pipelines as core infrastructure, not experiments.
  • Invest in a trusted data foundation: classification, labeling, and governance.
  • Use AI to shift from reactive to predictive operations.
  • Upskill your teams and give them safe, internal AI platforms to work with.
  • Start thinking beyond just security leaders—step into the role of Chief Secure Transformation Officer for your organization, not just a CISO.

“Build security that learns faster than attackers do—and secure AI like it’s already part of your critical infrastructure.” – Uma Arjunan

Pragmatic lessons from the field

Watch: Secure Future Initiative: Applying security best practices in the age of AI

Members representing Microsoft’s Secure Future Initiative (SFI) shifted the discussion to operational guidance. Ann Johnson, CVP and Deputy CISO, framed the conversation around practicality, bringing together Joy Chik, Corporate Vice President, Security Solution Area, and Kris Burkhardt, Accenture’s CISO.

Their shared message was clear: securing AI-era systems requires discipline, cultural transformation, and early intervention to prevent the same kind of unmanaged sprawl organizations experienced during the first wave of cloud adoption. The group shared actionable customer resources from SFI’s Patterns & Practices, with more noted in the November 2025 SFI Progress Report.

“The threat landscape will continue evolving not just in sophistication, but in the terrains attackers move through,” said Joy. That recognition pushed Microsoft toward a posture where identity is only one of several attack surfaces that need to be secured with equal rigor.

Meanwhile, Kris provided insight from Accenture’s transformation that exposed how rapidly unmanaged systems accumulate and how much discipline it takes to pull an enterprise back into governance. His point landed hard: “It’s a game of hearts and minds: you have to make it more interesting for teams to come into the governed space than to stay out of it.”

He tied this directly to AI, noting that organizations are already seeing agent sprawl mirroring early cloud sprawl. The only defense is visibility, governance, and a cultural model where developers feel accountable for the right security decisions without unnecessary friction. Ann closed the loop by reinforcing the operational reality facing everyone in the room. Agents are easy to build, which means they’re proliferating across organizations whether leaders realize it or not.

Her advice was direct: “Everybody’s building agents — whether you know it or not — and if you don’t get in early with visibility and governance, you’ll inherit sprawl you can’t unwind.”

Hardening the AI stack and securing the new attack surfaces

Watch: Agentic security threats and best practices

David Weston, CVP Security, offered a technical reality check on what it means to secure AI systems in production. His baseline? Attackers aren’t waiting for enterprises to finish their AI transformation. They are already probing models, agents, and developer workflows for weaknesses, and defenders need to secure the full AI stack with the same rigor applied to traditional software.

Weston broke the guidance into four clear defensive priorities:

  1. Monitor and secure LLM activity on developer endpoints, where many early-stage attacks (like XPIA-style abuses) first show up.
  2. Instrument and gate the connections between agents and models, using strong authentication and identity for machine-to-machine communication.
  3. Audit agent capabilities and enforce least privilege, since overly permissive agent access was at the root of several high-profile failures.
  4. Contain and detect misuse through defense-in-depth, helping ensure that even successful attacks don’t escalate unchecked.
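The priorities above can be sketched as a single gating layer. This is a hedged illustration, not a Microsoft API: AgentIdentity and ToolGate are hypothetical names, and a production system would back them with real machine identities and token validation rather than a boolean flag.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class AgentIdentity:
    """A verified machine identity for an agent (priority 2).
    Hypothetical: stands in for real token-based authentication."""
    agent_id: str
    authenticated: bool

@dataclass
class ToolGate:
    """Enforces a per-agent least-privilege allowlist (priority 3)
    and records every call for audit and detection (priorities 1, 4)."""
    allowlist: dict = field(default_factory=dict)
    audit_log: list = field(default_factory=list)

    def grant(self, agent_id: str, tool: str) -> None:
        """Explicitly grant one capability to one agent."""
        self.allowlist.setdefault(agent_id, set()).add(tool)

    def invoke(self, ident: AgentIdentity, tool: str) -> bool:
        """Allow a tool call only for an authenticated agent
        with that capability; log every attempt either way."""
        allowed = (
            ident.authenticated
            and tool in self.allowlist.get(ident.agent_id, set())
        )
        self.audit_log.append((ident.agent_id, tool, allowed))
        return allowed
```

Denied calls stay in the audit log alongside allowed ones, which is what makes misuse detectable (priority 4) even when the gate holds.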

Weston focused on practicality: secure what you already have, instrument what you’re deploying next, and treat AI like any other critical workload, just one that moves faster.

What Security Forum made clear

This year’s Microsoft Ignite Security Forum laid out both the challenge and the blueprint. With AI, agents, and autonomous systems moving into every layer of enterprise operations, leaders have a narrow but decisive window to establish governance, implement secure-by-design patterns, and elevate their approach to match the new speed of threat actors.

To learn more about the guidance provided, review our patterns and practices, download the latest Microsoft Digital Defense Report, and watch sessions from the Security Forum.

[1] IDC Info Snapshot, sponsored by Microsoft, 1.3 Billion AI Agents by 2028, #US53361825, May 2025.
[2] Microsoft Digital Defense Report, October 2025.
