Phi-Ground Tech Report: Advancing Perception in GUI Grounding

MSR-TR-2025-63

Published by Microsoft

With the development of multimodal reasoning models, Computer Use Agents (CUAs), akin to Jarvis from “Iron Man”, are becoming a reality. GUI grounding is a core component for CUAs to execute actual actions, similar to mechanical control in robotics, and it directly determines the success or failure of the system. It selects actions such as clicking and typing, as well as their parameters, such as the coordinates of a click. Current end-to-end grounding models still achieve less than 65% accuracy on challenging benchmarks like ScreenSpot-Pro and UI-Vision, indicating they are far from ready for deployment. In this work, we conduct an empirical study of the training of grounding models, examining details from data collection to model training. Ultimately, we developed the Phi-Ground model family, which achieves state-of-the-art performance across all five grounding benchmarks for models under 10B parameters in agent settings. In the end-to-end model setting, our model still achieves SOTA results, with scores of 43.2 on ScreenSpot-Pro and 27.2 on UI-Vision. We believe that the various details discussed in this paper, along with our successes and failures, not only clarify the construction of grounding models but also benefit other perception tasks. Project homepage: https://zhangmiaosen2000.github.io/Phi-Ground/
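To make the grounding step concrete, the sketch below shows the typical shape of a grounding call inside a CUA loop: a screenshot and a natural-language instruction go in, and an action type plus click coordinates come out. This is a minimal illustration under assumed names; `ground`, `GroundingResult`, and the `model.generate` call are hypothetical and do not reflect the actual Phi-Ground API.

```python
# Minimal sketch of GUI grounding inference (illustrative, not the Phi-Ground API).
from dataclasses import dataclass
from PIL import Image


@dataclass
class GroundingResult:
    action: str  # e.g. "click" or "type"
    x: float     # normalized x coordinate in [0, 1]
    y: float     # normalized y coordinate in [0, 1]


def ground(model, screenshot: Image.Image, instruction: str) -> GroundingResult:
    """Map an instruction plus a screenshot to an executable action
    and its coordinates -- the grounding step of a CUA."""
    # A grounding model consumes the image and instruction and emits the
    # action type and coordinates as text; the agent parses that text into
    # a structured result. The "action, x, y" output format and the
    # model.generate(...) signature are assumptions for this sketch.
    raw = model.generate(image=screenshot, prompt=instruction)
    action, x, y = raw.split(",")
    return GroundingResult(action=action.strip(), x=float(x), y=float(y))


# Hypothetical usage inside an agent loop:
#   result = ground(model, Image.open("screen.png"), "Click the Save button")
#   pyautogui.click(result.x * screen_width, result.y * screen_height)
```

Because the downstream executor acts directly on these coordinates, any grounding error propagates to the action itself, which is why benchmark accuracy on ScreenSpot-Pro and UI-Vision tracks deployment readiness so closely.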
