
Physical AI research

Developing intelligent systems that perform complex tasks through understanding and engaging with physical environments

What is physical AI?

Physical AI systems learn from the physical world through sensory inputs and actions, using robotic tools to perceive, navigate, and interact with their environment.

Our mission is to accelerate and advance research to develop AI agents that can: 

  1. Learn by interacting with the real world
  2. Adapt to dynamic environments through trial and error
  3. Transfer learned knowledge across physical spaces
  4. Translate perceptions into actions for completing everyday tasks like opening doors, picking up objects, or navigating around obstacles
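The trial-and-error learning described above can be illustrated with a toy example. The sketch below is purely hypothetical and not drawn from any Microsoft Research system: a tabular Q-learning agent in a one-dimensional corridor, standing in for the perceive, act, and adapt cycle. All names and parameters are illustrative assumptions.

```python
# Illustrative sketch only: trial-and-error (Q-learning) on a toy 1-D
# corridor. The agent starts at cell 0 and must learn to reach the goal
# at cell 4. All constants here are hypothetical choices.
import random

N_STATES = 5          # corridor cells 0..4; the goal is cell 4
ACTIONS = [-1, +1]    # step left / step right

def step(state, action):
    """Toy environment: move, clip to the corridor, reward only at the goal."""
    nxt = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            # Trial and error: occasionally explore a random action.
            if rng.random() < epsilon:
                action = rng.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: q[(state, a)])
            nxt, reward, done = step(state, action)
            # Adapt: update the value estimate from the observed outcome.
            best_next = max(q[(nxt, a)] for a in ACTIONS)
            q[(state, action)] += alpha * (
                reward + gamma * best_next - q[(state, action)]
            )
            state = nxt
    return q

q = train()
# The greedy policy per state; after training it should step toward the goal.
policy = [max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)]
print(policy)
```

Real physical AI systems replace this toy table with learned perception and control models, but the loop structure of act, observe, and update is the same.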

Traditionally, robots are machines equipped with actuators and designed to execute specific tasks. Physical AI, by contrast, is inherently interdisciplinary, spanning robotic control, reinforcement learning, spatial awareness, human-robot interaction, reasoning, and more. Given this complexity, no single organization can cover all aspects of its development alone. We look forward to collaborating with local industry, academia, and institutions across the globe, leveraging their expertise alongside our strengths in AI to advance the field responsibly.

“The emergence of vision-language-action (VLA) models for physical systems is enabling systems to perceive, reason, and act with increasing autonomy alongside humans in environments that are far less structured.”

– Ashley Llorens, CVP & Managing Director, Microsoft Research Accelerator

Advancing physical AI

We are developing AI systems that integrate perception, reasoning, and control, enabling adaptive and autonomous interaction in dynamic environments.
