“AARI enables bold, ambitious ideas that will advance our understanding of the foundations of agentic AI, foster real-world innovation, and embed safety and responsibility at the heart of AI Agent development.”
— Evelyne Viegas, Senior Director, Microsoft Research
AI continues to evolve at an unprecedented pace, unlocking transformative opportunities across disciplines. Through the Accelerating Foundation Model Research (AFMR) program, Microsoft laid essential groundwork—establishing a global research network and enabling academic access to industry-scale foundation models.
AFMR represents a meaningful step forward in foundation model research, fostering a wave of open, cross-disciplinary exploration. From advancing model alignment with human values to accelerating scientific discovery, AFMR projects help shape how we understand AI’s role in society. Many of these contributions continue to thrive through our academic partners’ open-source repositories and ongoing collaborations.
Now, Microsoft Research is building on that momentum. The AARI initiative expands this vision, focusing on the next frontier of agentic systems through Grand Challenges undertaken jointly by the academic community and Microsoft Research.
AARI is not a new beginning, but a natural evolution of the foundation AFMR laid. It extends the same spirit of openness, collaboration, and responsible innovation to a new era of intelligent systems—AI agents capable of reasoning, planning, and acting safely in complex environments.
In pursuit of this mission, AARI focuses on three goals:
Driving Innovation for High-Value Applications
Translate research breakthroughs into real-world impact by co-innovating with academic experts across diverse domains, such as education, heritage conservation, and space.
Understanding the Foundations of Agentic AI
Advance fundamental research into how agentic systems operate in both physical and virtual domains, emphasizing robustness and efficiency. This includes deepening our understanding of the roles that world knowledge, planning, and memory play in these systems.
Ensuring Safety and Responsibility
Develop frameworks and tools to ensure agentic AI is designed responsibly, for the benefit of all. This includes exploring how workers direct, collaborate with, or are directed by teams of AI agents in emerging “frontier firms.”
Together, AFMR and AARI represent a continuum of progress—from building the foundations of AI to shaping its future. By linking global research partnerships with cutting-edge technical inquiry, Microsoft Research continues to advance a responsible, collaborative path toward the next generation of intelligent systems.
Our projects
All of our projects pursue highly ambitious, mission-driven goals aimed at delivering impact at scale.
- Adaptive Agentic Robotic Systems
  University of Washington: Abhishek Gupta
  Microsoft Research: Andrey Kolobov
- Advancing Reasoning Capabilities in Agentic AI Systems
  University of North Carolina at Chapel Hill: Mohit Bansal
  Microsoft Research: Akshay Nambi
- AgentGuard: Early-Warning and Routing for Predictable Agentic AI on Azure
  Universitat Politècnica de València: Peter Romero
  Microsoft Research: Haotian Li
- Agentic Verifiers: Provably Safe Test-Time Scaling for Reasoning Models
  Indian Institute of Technology Kharagpur: Somak Aditya
  Microsoft Research: Amit Sharma
- CuRA: Culture-Conditioned Routing for Safe Agentic AI
  Singapore University: Roy Lee
  Microsoft Research: Xiaoyuan Yi
- From Task Solvers to Teammates: A Theory-Grounded Architecture for Advancing Collaboration Readiness in LLM Agents
  Northeastern University: Bingsheng Yao
  Microsoft Research: Yun Wang
- Physics-Guided Vision-Language World Models for Agentic 4D Scene Understanding
  Technical University of Munich: Benjamin Busam
  Microsoft Research: Sarah Parisot
- Quantifying and Mitigating Emerging Risks in Multi-Agent Collaboration
  Stanford University: Diyi Yang
  Microsoft Research: Xing Xie
- Towards Autonomous and Reliable Supply Chains
  Massachusetts Institute of Technology: David Simchi-Levi
  Microsoft Research: Ishai Menache
- Towards Robust Generalization in Agentic AI via Environment Scaling
  Columbia University: Zhou Yu
  Microsoft Research: Baolin Peng
- Towards the Psychological Security of Agentic AI
  Cornell University: Tanzeem Choudhury
  Microsoft Research: Zilong Wang
- Visual episodic memory and use in agentic systems
  University of Illinois Urbana-Champaign: Derek Hoiem
  Microsoft Research: Reuben Tan