
Microsoft 365 Defender Security Research Group

Machine Learning and AI Innovation at Microsoft Security Research

The cybersecurity landscape has fundamentally changed, as evidenced by the diverse, large-scale, and complex attacks of recent years. Adversaries have grown in volume, velocity, and sophistication, and have repeatedly disrupted computer systems controlling critical infrastructure. Microsoft holds a unique position in the security space given its scale and its coverage of security signals across the entire digital estate, which enables us to track and prevent adversarial activity across the security kill chain. This is the foundation upon which we’re building the teams, tools, analytics, models, and research to responsibly use this data to protect users around the world.


The Microsoft 365 Defender Research group sits at the core of this effort. The group leverages applied research, threat intelligence, and security expertise to fuel the technologies behind Microsoft 365 Defender, which protects customers globally across endpoints, email and collaboration, identities, and cloud apps. The group ideates, experiments, and ships technologies spanning a variety of research areas, including weak supervision, natural language processing, graph representation learning, unsupervised learning, causal inference, Bayesian optimization, privacy-preserving machine learning, computer vision, and economic theory. The group’s research can be categorized into the following end applications: prevention; detection; investigation and remediation; threat intelligence; and active and adaptive defense.

Prevention encompasses research to reduce the overall attack surface across user identities, endpoints, cloud apps, and user data, and to effectively block known and unknown threats. Timely identification and accurate tagging at scale are key to limiting the attack vectors an adversary can exploit. The group leverages Microsoft’s unique perspective on enterprise security to predict customers’ residual security risk and to understand its drivers, translating what is often a technical security conversation into the language of business decisions. The group also applies a plethora of diverse techniques to proactively protect customers against attacks across the email, endpoint, cloud, and web spaces. Emerging privacy-preserving machine learning approaches such as federated learning, homomorphic encryption, and differential privacy may enable Microsoft to continue providing effective protection without compromising user trust and privacy.
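To make one of the privacy-preserving techniques above concrete, here is a minimal sketch of differential privacy: releasing a noisy count (say, of devices whose risk score exceeds a threshold) by adding Laplace noise calibrated to the query's sensitivity. The function name, parameters, and data are illustrative assumptions, not part of any Microsoft API:

```python
import math
import random

def dp_count(values, threshold, epsilon=1.0):
    """Release the count of values above a threshold with epsilon-differential
    privacy by adding Laplace(0, 1/epsilon) noise. A count query has
    sensitivity 1: one record changes the true count by at most 1."""
    true_count = sum(1 for v in values if v > threshold)
    # Sample Laplace noise via inverse transform: u uniform on [-0.5, 0.5).
    u = random.random() - 0.5
    scale = 1.0 / epsilon
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise
```

Each individual release is noisy, but the noise has mean zero, so aggregates remain useful while any single record's contribution is masked.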

Detection refers to identifying and alerting on suspicious behaviors as they happen and responding to them to establish the scale and scope of an attack, thwart the attacker’s entry, and fully remediate any footholds the attacker might have gained. The key challenge is finding the right balance between providing enough coverage through security alerts (recall) and reducing false alarms (precision). Most organizations that prioritize cybersecurity run a security operations center 24/7; still, there are commonly far more alerts to analyze than analyst cycles to triage them. Alert triage and correlation, incident (a group of related alerts) prioritization, and campaign discovery are key areas of research for the group. The group is also exploring ideas to semi-automate the triage process by modeling analyst behavior to predict the actions an analyst would take based on previous responses in similar scenarios.
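The precision/recall trade-off described above can be sketched directly: raising the alert threshold on a detector's suspicion scores cuts false alarms at the cost of coverage. The scores and labels below are made-up toy data:

```python
def precision_recall(scores, labels, threshold):
    """Compute (precision, recall) when alerting on scores >= threshold.
    scores: detector suspicion scores; labels: 1 = true attack, 0 = benign."""
    tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 1)
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    precision = tp / (tp + fp) if tp + fp else 1.0  # no alerts -> no false alarms
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall
```

Sweeping `threshold` over the score range traces the precision-recall curve a detection team tunes against.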

Investigation and remediation assume that a breach has already occurred. The primary goal is to give customers a holistic understanding of a security incident: the extent of the breach, including which devices and data were impacted; how the attack propagated through the customer environment; and threat attribution. Gathering this data from telemetry sources is time-consuming and tedious, and the group is exploring natural language generation to automate the threat-report generation process.
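The group's graph methods operate on far richer incident graphs, but the core scoping question above, which devices are reachable from the initial compromise, can be sketched as a breadth-first search over observed lateral-movement edges. The edge list and host names here are illustrative:

```python
from collections import deque

def breach_scope(edges, patient_zero):
    """Return the set of devices reachable from the initially compromised
    device, following observed (source, destination) lateral-movement edges."""
    adjacency = {}
    for src, dst in edges:
        adjacency.setdefault(src, []).append(dst)
    seen, queue = {patient_zero}, deque([patient_zero])
    while queue:
        device = queue.popleft()
        for neighbor in adjacency.get(device, []):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return seen
```

Everything BFS reaches is potentially impacted and a candidate for remediation; everything else can be deprioritized.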

Threat intelligence enables security researchers to stay on top of the current threat landscape by tracking active malicious actors, at times deliberately engaging with them and studying their behavior. The group actively tracks 40+ active nation-state actors and 140+ threat groups representing 20 countries. Research challenges include identifying and tagging entities from multiple feeds of unstructured security data, learning high-level relationships and interactions between these entities, and identifying similarities across different campaigns for better threat attribution. The group has published some of this work under Automated Open-Source Threat Intelligence Gathering and Management and is experimenting to evaluate the potential benefits of large language models in this space.
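As a deliberately simplistic stand-in for the learned entity-tagging models described above, a regex pass can pull well-formed indicators of compromise out of an unstructured report. The patterns and the report text below are illustrative only; real systems handle defanged, obfuscated, and ambiguous entities that regexes cannot:

```python
import re

# Hypothetical patterns for a few common IOC shapes.
IOC_PATTERNS = {
    "ipv4": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "sha256": re.compile(r"\b[0-9a-f]{64}\b"),
    "domain": re.compile(r"\b[a-z0-9-]+\.(?:com|net|org|io)\b"),
}

def extract_iocs(report):
    """Tag indicator-of-compromise entities found in an unstructured report,
    grouped by entity type."""
    return {kind: pattern.findall(report) for kind, pattern in IOC_PATTERNS.items()}
```

The extracted entities would then feed the relationship-learning and campaign-similarity steps mentioned above.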

Security tends to generalize from past observations and is therefore biased toward what we already know about attacks. It is a challenge to build an active and adaptive defense that can identify and protect against attacks that use new techniques or approaches. The group has done initial research toward the ultimate vision of autonomous defense systems that learn both offensive and defensive behaviors in an unbiased manner. Such techniques can be used to uncover novel ways to attack a simulated enterprise network and also to build defense systems that adapt to those attacks.
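A toy illustration of this direction: tabular Q-learning can train a simulated attacker to discover a path to a high-value host in a tiny network graph. The network layout, entry point, reward scheme, and hyperparameters here are illustrative assumptions, not the group's actual system:

```python
import random

def train_attacker(transitions, goal, episodes=500, alpha=0.5, gamma=0.9, eps=0.2):
    """Tabular Q-learning for a simulated attacker agent. States are hosts;
    an action is a lateral move to a reachable host; reaching `goal` pays 1."""
    q = {}
    for _ in range(episodes):
        state = "internet"  # assumed entry point for this toy network
        for _ in range(10):  # cap episode length
            moves = transitions.get(state, [])
            if not moves:
                break
            # Epsilon-greedy: explore sometimes, otherwise take the best-known move.
            if random.random() < eps:
                action = random.choice(moves)
            else:
                action = max(moves, key=lambda a: q.get((state, a), 0.0))
            reward = 1.0 if action == goal else 0.0
            best_next = max((q.get((action, a), 0.0)
                             for a in transitions.get(action, [])), default=0.0)
            old = q.get((state, action), 0.0)
            q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
            if action == goal:
                break
            state = action
    return q
```

The learned Q-values encode the attack path (here, internet → web server → database); the defensive counterpart would learn which moves to block or detect.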

The table below highlights some of the research areas being explored for each of the security capabilities, but is in no way a complete representation of all the research within the group.


Research Areas

Prevention

  • Weak supervision, few-shot learning
  • Unsupervised learning
  • NLP: language modeling, named entity recognition
  • Generative modeling
  • Graph methods: spectral embedding, graph matching, graph neural nets, graph representation learning
  • Computer vision
  • Multimodal
  • Bayesian optimization
  • Privacy-preserving machine learning
  • Economic theory, risk quantification

Detection

  • Representation learning
  • Clustering and correlation
  • Graph methods: spectral embedding, graph matching, graph neural nets, graph representation learning
  • Causal inference
  • Reinforcement learning
  • Statistical modeling

Investigation and remediation

  • Language modeling, natural language generation
  • Bayesian statistical modeling
  • Graph methods: spectral embedding, graph matching, graph neural nets, graph representation learning

Threat Intelligence

  • NLP: Text summarization, named entity recognition, natural language understanding
  • Graph methods: spectral embedding, graph matching, graph neural nets, graph representation learning
  • Language to code generation
  • Intelligent search and correlation 

Active and adaptive defense

  • Reinforcement learning
  • Contextual bandits
  • Generative modeling
  • Graph methods: spectral embedding, graph matching, graph neural nets, graph representation learning
  • Responsible AI