What is physical AI?
Physical AI systems interact with and learn from the physical world through sensory inputs and actions, using robotic tools to perceive, navigate, and act within their environment.
Our mission is to accelerate and advance research to develop AI agents that can:
- Learn by interacting with the real world
- Adapt to dynamic environments through trial and error
- Transfer knowledge across physical spaces
- Translate perceptions into actions for completing everyday tasks like opening doors, picking up objects, or navigating around obstacles
Robots are typically machines equipped with actuators and designed to execute specific tasks. Physical AI is inherently interdisciplinary, involving robotic control, reinforcement learning, spatial awareness, human-robot interaction, reasoning, and more. Given this complexity, no single organization can cover all aspects of its development alone. We look forward to collaborating with local industry, academia, and institutions across the globe, combining their expertise with our strengths in AI to advance the field responsibly.
“The emergence of vision-language-action (VLA) models for physical systems is enabling systems to perceive, reason, and act with increasing autonomy alongside humans in environments that are far less structured.”
– Ashley Llorens, CVP & Managing Director, Microsoft Research Accelerator
Advancing physical AI
We are building AI systems that integrate perception, reasoning, and control, enabling adaptive and autonomous interaction in dynamic environments.
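To make the perception-reasoning-control loop described above concrete, here is a minimal sketch in Python. Every name in it is a hypothetical placeholder (the Policy class, read_sensors, send_command, and the fixed control rate are illustrative assumptions, not any specific Microsoft system or API); a real physical AI agent would substitute actual sensor drivers, a learned policy such as a VLA model, and hardware motor controllers.

```python
import time
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Observation:
    image: bytes               # raw camera frame (perception input)
    joint_angles: List[float]  # proprioceptive state


@dataclass
class Action:
    joint_velocities: List[float]  # low-level motor command


class Policy:
    """Hypothetical stand-in for a learned model that maps an observation
    and a language instruction to an action. In a VLA-style system this
    would be the vision-language-action model."""

    def act(self, obs: Observation, instruction: str) -> Action:
        # Placeholder reasoning: command zero velocity (hold still).
        return Action(joint_velocities=[0.0] * len(obs.joint_angles))


def perception_action_loop(
    policy: Policy,
    read_sensors: Callable[[], Observation],
    send_command: Callable[[Action], None],
    instruction: str,
    hz: float = 10.0,
    steps: int = 100,
) -> None:
    """Closed perceive -> reason -> act loop at a fixed control rate."""
    period = 1.0 / hz
    for _ in range(steps):
        obs = read_sensors()                   # perceive the environment
        action = policy.act(obs, instruction)  # reason about what to do
        send_command(action)                   # act on the physical world
        time.sleep(period)
```

Under these assumptions, calling perception_action_loop(Policy(), my_camera_reader, my_motor_writer, "open the door") would run the loop against whatever hardware or simulator those two callables wrap.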
