{"id":1155843,"date":"2025-11-24T10:00:00","date_gmt":"2025-11-24T18:00:00","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/blog\/fara-7b-an-efficient-agentic-model-for-computer-use\/"},"modified":"2025-12-11T07:31:44","modified_gmt":"2025-12-11T15:31:44","slug":"fara-7b-an-efficient-agentic-model-for-computer-use","status":"publish","type":"post","link":"https:\/\/www.microsoft.com\/en-us\/research\/blog\/fara-7b-an-efficient-agentic-model-for-computer-use\/","title":{"rendered":"Fara-7B:\u00a0An Efficient Agentic Model for\u00a0Computer Use"},"content":{"rendered":"\n<h3 class=\"wp-block-heading\" id=\"pushing-the-frontiers-of-computer-use-agents-with-an-open-weight-ultra-compact-model-optimized-for-real-world-web-tasks\">Pushing the frontiers of computer-use agents with an open-weight, ultra-compact model,&nbsp;optimized&nbsp;for real-world web tasks<\/h3>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"2560\" height=\"1441\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2025\/11\/Fara7B-BlogHeroFeature-1400x788_NEW-scaled.jpg\" alt=\"Three white line icons on a blue-to-green gradient background: a computer monitor with a globe symbol on the left, a cursor arrow with click lines in the center, and a computer mouse outline on the right.\" class=\"wp-image-1156197\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2025\/11\/Fara7B-BlogHeroFeature-1400x788_NEW-scaled.jpg 2560w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2025\/11\/Fara7B-BlogHeroFeature-1400x788_NEW-300x169.jpg 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2025\/11\/Fara7B-BlogHeroFeature-1400x788_NEW-1024x576.jpg 1024w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2025\/11\/Fara7B-BlogHeroFeature-1400x788_NEW-768x432.jpg 768w, 
https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2025\/11\/Fara7B-BlogHeroFeature-1400x788_NEW-1536x865.jpg 1536w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2025\/11\/Fara7B-BlogHeroFeature-1400x788_NEW-2048x1153.jpg 2048w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2025\/11\/Fara7B-BlogHeroFeature-1400x788_NEW-1066x600.jpg 1066w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2025\/11\/Fara7B-BlogHeroFeature-1400x788_NEW-655x368.jpg 655w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2025\/11\/Fara7B-BlogHeroFeature-1400x788_NEW-240x135.jpg 240w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2025\/11\/Fara7B-BlogHeroFeature-1400x788_NEW-640x360.jpg 640w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2025\/11\/Fara7B-BlogHeroFeature-1400x788_NEW-960x540.jpg 960w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2025\/11\/Fara7B-BlogHeroFeature-1400x788_NEW-1280x720.jpg 1280w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2025\/11\/Fara7B-BlogHeroFeature-1400x788_NEW-1920x1080.jpg 1920w\" sizes=\"auto, (max-width: 2560px) 100vw, 2560px\" \/><\/figure>\n\n\n\n<p>In 2024,&nbsp;Microsoft&nbsp;introduced small language models (SLMs) to customers, starting with the release of<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/azure.microsoft.com\/en-us\/products\/phi\" target=\"_blank\" rel=\"noopener noreferrer\"> Phi<span class=\"sr-only\"> (opens in new tab)<\/span><\/a> models on <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/azure.microsoft.com\/en-us\/products\/ai-foundry\" target=\"_blank\" rel=\"noopener noreferrer\">Microsoft&nbsp;Foundry<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>,&nbsp;as&nbsp;well as deploying&nbsp;<a class=\"msr-external-link 
glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/blogs.windows.com\/windowsexperience\/2024\/12\/06\/phi-silica-small-but-mighty-on-device-slm\/\" target=\"_blank\" rel=\"noopener noreferrer\">Phi Silica<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>&nbsp;on Copilot+&nbsp;PCs&nbsp;powered by Windows 11. Today, we are pleased to&nbsp;announce&nbsp;<strong>Fara-7B<\/strong>, our first&nbsp;<strong>agentic SLM<\/strong>&nbsp;designed specifically for&nbsp;computer&nbsp;use.<\/p>\n\n\n\n<p>Unlike traditional chat models that generate text-based responses, Computer&nbsp;Use Agent (CUA) models like Fara-7B&nbsp;leverage computer interfaces, such as a mouse&nbsp;and keyboard, to complete tasks on behalf of users. With only 7 billion parameters, Fara-7B&nbsp;achieves&nbsp;state-of-the-art&nbsp;performance within its size class and is competitive with larger, more resource-intensive agentic systems that depend on prompting multiple large models. Fara-7B\u2019s small size now&nbsp;makes it possible&nbsp;to&nbsp;run CUA models directly on devices. This results in reduced latency and improved privacy, as user data&nbsp;remains&nbsp;local.<\/p>\n\n\n\n<p>Fara-7B is an experimental release, designed to invite hands-on exploration and feedback from the community. Users can build and test agentic experiences beyond pure research\u2014automating everyday web tasks like filling out forms, searching for information, booking travel, or managing accounts. We recommend running Fara-7B in a sandboxed environment,&nbsp;monitoring&nbsp;its execution, and avoiding sensitive data or high-risk domains. 
Responsible use is&nbsp;essential&nbsp;as the model continues to evolve.<\/p>\n\n\n\n<p>Fara-7B&nbsp;operates by&nbsp;visually perceiving&nbsp;a webpage&nbsp;and&nbsp;taking&nbsp;actions such as&nbsp;scrolling, typing, and clicking on directly predicted coordinates.&nbsp;It&nbsp;does not&nbsp;rely on&nbsp;separate models to parse the screen, nor on any additional information like&nbsp;accessibility trees,&nbsp;and&nbsp;thus&nbsp;uses the same modalities as humans to interact with the&nbsp;computer.&nbsp;To train Fara-7B, we developed a novel synthetic data generation pipeline&nbsp;for multi-step&nbsp;web tasks, building on our prior work (<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/blog\/orca-agentinstruct-agentic-flows-can-be-effective-synthetic-data-generators\/\">AgentInstruct<\/a>).&nbsp;This data generation pipeline draws from&nbsp;real&nbsp;web pages and tasks&nbsp;sourced&nbsp;from human users.<\/p>\n\n\n\n<figure class=\"wp-block-video aligncenter\"><video height=\"1080\" style=\"aspect-ratio: 1920 \/ 1080;\" width=\"1920\" controls poster=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2025\/11\/Fara_xbox_multi_turn-3.jpg\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2025\/11\/fara_xbox_multi_turn-3.mp4\"><\/video><figcaption class=\"wp-element-caption\">Video 1: A demo of a shopping scenario with Fara-7B through Magentic-UI. Fara-7B is asked to purchase an Xbox SpongeBob controller. 
Fara-7B goes on to complete this task, stopping at every Critical Point to get input and approval from the user before proceeding.<\/figcaption><\/figure>\n\n\n\n<figure class=\"wp-block-video aligncenter\"><video height=\"1080\" style=\"aspect-ratio: 1440 \/ 1080;\" width=\"1440\" controls poster=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2025\/11\/Fara_github_demo.jpg\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2025\/11\/fara_github_demo.mp4\"><\/video><figcaption class=\"wp-element-caption\">Video 2: A demo of Fara-7B finding relevant information online and summarizing it through Magentic-UI. We ask Fara-7B to find and summarize the latest three issues on GitHub (Microsoft\/Magentic-UI).<\/figcaption><\/figure>\n\n\n\n<figure class=\"wp-block-video aligncenter\"><video height=\"1080\" style=\"aspect-ratio: 1920 \/ 1080;\" width=\"1920\" controls poster=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2025\/11\/Fara_driving-directions-cheese.jpg\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2025\/11\/driving_directions_cheese-1_revised.mp4\"><\/video><figcaption class=\"wp-element-caption\">Video 3: A demo of how Fara-7B can use different tools to find relevant information and analyze it through Magentic-UI. We ask Fara-7B to find the driving time between two places and suggest a cheese place near the location. Fara-7B uses Bing Maps to find the driving time and Bing Search to find relevant information.<\/figcaption><\/figure>\n\n\n\n<p>Fara-7B exhibits&nbsp;strong performance&nbsp;compared to existing models&nbsp;across&nbsp;a diverse set of benchmarks.&nbsp;This includes both existing benchmarks and new&nbsp;evaluations&nbsp;we are&nbsp;releasing,&nbsp;which&nbsp;cover useful&nbsp;task&nbsp;segments that are underrepresented in common benchmarks, such as&nbsp;finding job postings&nbsp;and&nbsp;comparing prices across retailers. 
While Fara-7B demonstrates strong benchmark results, even against much larger models, it shares many of their limitations, including challenges with accuracy on more complex tasks, mistakes in following instructions, and susceptibility to hallucinations.&nbsp;These are active areas of research, and&nbsp;we\u2019re&nbsp;committed to ongoing improvements as we learn from real-world use.<\/p>\n\n\n\n<p>Fara-7B is now available on\u00a0<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/ai.azure.com\/explore\/models\/Fara-7B\/version\/1\/registry\/azureml-msr?tid=72f988bf-86f1-41af-91ab-2d7cd011db47\" target=\"_blank\" rel=\"noopener noreferrer\">Microsoft Foundry<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>\u00a0and\u00a0<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/huggingface.co\/microsoft\/Fara-7B\" target=\"_blank\" rel=\"noopener noreferrer\">Hugging Face<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>\u00a0under an MIT license and is integrated with\u00a0<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/labs.ai.azure.com\/projects\/magnetic-ui\/\" target=\"_blank\" rel=\"noopener noreferrer\">Magentic-UI, a research prototype from Microsoft Research AI Frontiers<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>. 
We are also sharing a quantized and silicon-optimized version of Fara-7B, which is available to install and run on\u00a0Copilot+ PCs powered by Windows 11 for turnkey experimentation.\u00a0The\u00a0community\u00a0can simply download the pre-optimized model and run it in their environment.<\/p>\n\n\n\n<p>By making Fara-7B open-weight, we aim to lower the barrier&nbsp;to experimenting&nbsp;with&nbsp;and improving&nbsp;CUA technology for automating routine web tasks, such as searching for information,&nbsp;shopping,&nbsp;and&nbsp;booking reservations.<\/p>\n\n\n\n<figure class=\"wp-block-image aligncenter size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"19108\" height=\"11897\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2025\/11\/model_accuracy_vs_cost_v2-1-1.png\" alt=\"Figure\u00a01:\u00a0Comparing\u00a0WebVoyager\u00a0accuracy and cost\u00a0of\u00a0Fara-7B to other\u00a0computer\u00a0use agents (CUAs)\u00a0or agents that prompt LLMs with accessibility trees (SoM\u00a0Agent w\/ Ax Tree).\u00a0Cost is computed\u00a0by\u00a0multiplying\u00a0the\u00a0average\u00a0number of\u00a0input\u00a0and\u00a0output tokens\u00a0each model\u00a0consumes\u00a0by\u00a0price per token.\u00a0Both\u00a0Fara-7B and UI-TARS-1.5-7B\u00a0are based\u00a0on\u00a0Qwen-2.5-VL-7B,\u00a0for which the\u00a0lowest\u00a0inference price\u00a0from\u00a0https:\/\/openrouter.ai\/\u00a0is\u00a0$0.2\/$0.2\u00a0per 1M\u00a0input\/output\u00a0tokens.\u00a0Even though both models are priced equally, Fara-7B is more\u00a0efficient,\u00a0completing tasks\u00a0with\u00a0only\u00a0~16\u00a0steps on\u00a0average\u00a0compared\u00a0to\u00a0~41\u00a0for UI-TARS-1.5-7B.\u00a0OpenAI computer-use-preview accessed November 2025 via the Responses API.\" class=\"wp-image-1156353\"\/><figcaption class=\"wp-element-caption\"><em><em>Figure&nbsp;1:&nbsp;Comparing&nbsp;WebVoyager&nbsp;accuracy and cost&nbsp;of&nbsp;Fara-7B to other&nbsp;computer&nbsp;use agents 
(CUAs)&nbsp;or agents that prompt LLMs with accessibility trees (SoM&nbsp;Agent w\/ Ax Tree).&nbsp;Cost is computed&nbsp;by&nbsp;multiplying&nbsp;the&nbsp;average&nbsp;number of&nbsp;input&nbsp;and&nbsp;output tokens&nbsp;each model&nbsp;consumes&nbsp;by&nbsp;price per token.&nbsp;Both&nbsp;Fara-7B and UI-TARS-1.5-7B&nbsp;are based&nbsp;on&nbsp;Qwen-2.5-VL-7B,&nbsp;for which the&nbsp;lowest&nbsp;inference price&nbsp;from&nbsp;<\/em><a href=\"https:\/\/openrouter.ai\/\" target=\"_blank\" rel=\"noreferrer noopener\"><em>https:\/\/openrouter.ai\/<\/em><\/a><em>&nbsp;is&nbsp;$0.2\/$0.2&nbsp;per 1M&nbsp;input\/output&nbsp;tokens.&nbsp;Even though both models are priced equally, Fara-7B is more&nbsp;efficient,&nbsp;completing tasks&nbsp;with&nbsp;only&nbsp;~16&nbsp;steps on&nbsp;average&nbsp;compared&nbsp;to&nbsp;~41&nbsp;for UI-TARS-1.5-7B.&nbsp;OpenAI computer-use-preview accessed November 2025 via the Responses API.<\/em><\/em><\/figcaption><\/figure>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"developing-fara-7b\">Developing Fara-7B<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"cua-multi-agent-synthetic-data-generation\">CUA multi-agent synthetic data generation<\/h3>\n\n\n\n<p>A key bottleneck&nbsp;for&nbsp;building CUA models is a lack of large-scale, high-quality&nbsp;computer interaction data. 
Collecting such data with&nbsp;human annotators&nbsp;is prohibitively expensive, as a single&nbsp;CUA task can involve&nbsp;dozens&nbsp;of steps,&nbsp;each of which&nbsp;needs to be&nbsp;annotated.&nbsp;Our&nbsp;data generation pipeline&nbsp;(Figure 2)&nbsp;avoids manual annotation and instead relies on scalable synthetic data sourced from&nbsp;publicly&nbsp;available websites&nbsp;and&nbsp;custom&nbsp;task prompts.&nbsp;We build this&nbsp;pipeline&nbsp;on top of&nbsp;the&nbsp;<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/articles\/magentic-one-a-generalist-multi-agent-system-for-solving-complex-tasks\/\">Magentic-One<\/a>&nbsp;framework, and it involves three main stages:&nbsp;<\/p>\n\n\n\n<figure class=\"wp-block-image aligncenter size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"2560\" height=\"1349\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2025\/11\/Figure-2-scaled.png\" alt=\"Figure\u00a02:\u00a0Data Generation workflow from proposing tasks from various seeds like URLs\u00a0to\u00a0solving\u00a0those tasks with\u00a0the\u00a0Magentic-One multi-agent framework to generate demonstrations for training, and finally\u00a0verifying\/filtering\u00a0completed\u00a0trajectories\" class=\"wp-image-1155974\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2025\/11\/Figure-2-scaled.png 2560w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2025\/11\/Figure-2-300x158.png 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2025\/11\/Figure-2-1024x539.png 1024w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2025\/11\/Figure-2-768x405.png 768w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2025\/11\/Figure-2-1536x809.png 1536w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2025\/11\/Figure-2-2048x1079.png 2048w, 
https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2025\/11\/Figure-2-240x126.png 240w\" sizes=\"auto, (max-width: 2560px) 100vw, 2560px\" \/><figcaption class=\"wp-element-caption\">Figure&nbsp;2:<em>&nbsp;Data Generation workflow from proposing tasks from various seeds like URLs&nbsp;to&nbsp;solving&nbsp;those tasks with&nbsp;the&nbsp;Magentic-One multi-agent framework to generate demonstrations for training, and finally&nbsp;verifying\/filtering&nbsp;completed&nbsp;trajectories.<\/em><\/figcaption><\/figure>\n\n\n\n<p><strong>Task Proposal.<\/strong>&nbsp;We generate a broad set of synthetic tasks that mirror common user activities on the&nbsp;web.&nbsp;To ensure coverage and diversity, tasks are&nbsp;&#8220;seeded&#8221; by&nbsp;a&nbsp;web&nbsp;index of public URLs&nbsp;classified into various categories (e.g., shopping, travel, restaurants). This enables&nbsp;task&nbsp;generation&nbsp;targeting&nbsp;a particular skill, like \u201cbook 2 tickets to see the Downton Abbey Grand Finale at AMC Union Square, NYC\u201d&nbsp;from a&nbsp;<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/www.fandango.com\/downton-abbey-the-grand-finale-2025-236926\/movie-overview\" target=\"_blank\" rel=\"noopener noreferrer\">URL like this<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>&nbsp;classified as \u201cmovies\u201d.&nbsp;As another strategy, we devised a way&nbsp;to generate tasks from&nbsp;randomly&nbsp;sampled&nbsp;URLs.&nbsp;Each task starts with a general prompt and is iteratively refined as an&nbsp;LLM&nbsp;agent explores the website and gathers&nbsp;more information about it. 
We are releasing a held-out subset of these tasks as a benchmark (\u201c<strong>WebTailBench<\/strong>\u201d), described in the Evaluation section below.&nbsp;<\/p>\n\n\n\n<p><strong>Task&nbsp;Solving.<\/strong>&nbsp;Once synthetic tasks are generated, a multi-agent system built on&nbsp;Magentic-One&nbsp;attempts&nbsp;to&nbsp;complete&nbsp;them to generate demonstrations for supervised finetuning. The multi-agent system uses an&nbsp;Orchestrator&nbsp;agent to create a plan and direct a&nbsp;WebSurfer&nbsp;agent to take browser actions and report results. The Orchestrator monitors progress, updating plans as needed, and can end tasks or engage a UserSimulator agent if user input is&nbsp;required, allowing for multi-turn completion.&nbsp;Each&nbsp;task and corresponding sequence of observations, actions, and agent thoughts&nbsp;forms&nbsp;a&nbsp;\u201ctrajectory\u201d.<\/p>\n\n\n\n<p><strong>Trajectory Verification.<\/strong> Before using any tasks for training, three verifier agents evaluate whether a task was \u201csuccessful\u201d: the Alignment Verifier checks whether the trajectory of actions matches the task\u2019s intent; the Rubric Verifier defines completion criteria and scores the trajectory against them; and the Multimodal Verifier reviews screenshots and responses to confirm visual evidence supports successful completion. 
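<\/p>\n\n\n\n<p>Conceptually, the three verifiers act as a conjunctive filter over each candidate trajectory. The minimal sketch below is our own illustration; the function and verdict-key names are hypothetical, not the pipeline\u2019s actual API:<\/p>

```python
def keep_for_training(verdicts: dict) -> bool:
    # All three verifier agents must pass before a trajectory enters the
    # supervised-finetuning set; any single failure discards it.
    # Keys are illustrative names for the Alignment, Rubric, and
    # Multimodal verifier outcomes.
    required = ("alignment", "rubric", "multimodal")
    return all(verdicts.get(name, False) for name in required)

# A trajectory that aligns with the task and satisfies the rubric but lacks
# visual evidence of completion is still filtered out.
print(keep_for_training({"alignment": True, "rubric": True, "multimodal": False}))  # False
```

<p>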
Trajectories failing these standards are removed.<\/p>\n\n\n\n<p>We&nbsp;ultimately&nbsp;train&nbsp;this version&nbsp;of&nbsp;Fara-7B&nbsp;on a dataset of&nbsp;145,000&nbsp;trajectories&nbsp;consisting of&nbsp;1&nbsp;million&nbsp;steps&nbsp;covering diverse websites, task types, and difficulty levels.&nbsp;Additionally, we include&nbsp;training&nbsp;data for several auxiliary tasks, including&nbsp;grounding for&nbsp;accurate&nbsp;UI element localization, captioning, and visual question answering.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"training-fara-7b\">Training Fara-7B<\/h3>\n\n\n\n<p>Using&nbsp;a single&nbsp;computer use&nbsp;model&nbsp;is&nbsp;easier than&nbsp;a&nbsp;multi-agent system, particularly when it comes to&nbsp;deployment. Therefore, we&nbsp;distill the complexities of&nbsp;our multi-agent&nbsp;solving system into a single model&nbsp;that can&nbsp;execute tasks.&nbsp;Fara-7B&nbsp;is a proof-of-concept that small models can&nbsp;effectively&nbsp;learn from complex, heavily engineered multi-agent systems.<\/p>\n\n\n\n<p>As shown in Figure 3, Fara-7B is trained to execute user tasks by perceiving only browser window screenshots (without relying on accessibility trees) and predicting single-step actions. For each step, the context used to make its prediction contains all user messages, the complete action history, and the latest three screenshots.<\/p>\n\n\n\n<p>In its prediction,&nbsp;Fara-7B&nbsp;outputs a reasoning message (\u201cthinking\u201d about the next action) followed by a tool call. 
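<\/p>\n\n\n\n<p>Concretely, a harness around the model can split each step into its reasoning text and a structured tool call to execute against the browser. The sketch below is a hypothetical illustration only; it assumes, for simplicity, that the tool call arrives as a trailing JSON object, which is not necessarily the model\u2019s actual output format:<\/p>

```python
import json

def parse_step(model_output: str) -> tuple[str, dict]:
    # Split one "observe-think-act" step into (reasoning, tool_call),
    # assuming (for illustration) the tool call is a JSON object on
    # the final line of the model's output.
    reasoning, _, call_line = model_output.strip().rpartition("\n")
    return reasoning.strip(), json.loads(call_line)

step = 'I need to open the search box first.\n{"tool": "click", "args": {"x": 412, "y": 88}}'
thought, call = parse_step(step)
print(call["tool"], call["args"]["x"])  # click 412
```

<p>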
The available tools include standard&nbsp;<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/playwright.dev\/python\/docs\/intro\" target=\"_blank\" rel=\"noopener noreferrer\">Playwright<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>&nbsp;mouse and keyboard actions, such as&nbsp;click(x,y)&nbsp;and&nbsp;type(), and browser-specific macro-actions like&nbsp;web_search()&nbsp;and&nbsp;visit_url().<\/p>\n\n\n\n<p>Fara-7B uses&nbsp;<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/arxiv.org\/abs\/2502.13923\" target=\"_blank\" rel=\"noopener noreferrer\">Qwen2.5-VL-7B<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>&nbsp;as its base model due to its&nbsp;strong performance&nbsp;on grounding tasks and its ability to support long contexts (up to 128k tokens).&nbsp;We&nbsp;linearize the solving pipeline\u2019s&nbsp;trajectories&nbsp;into a sequence of \u201cobserve-think-act\u201d steps&nbsp;that are suitable for training with supervised finetuning loss.&nbsp;We did not use reinforcement learning to achieve&nbsp;the&nbsp;results&nbsp;we report below.<\/p>\n\n\n\n<figure class=\"wp-block-image aligncenter size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"2560\" height=\"864\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2025\/11\/Figure-3-scaled.png\" alt=\"Figure\u00a03:\u00a0Operation of Fara-7B as a standalone, native computer use agent\u00a0running on-device. 
Because Fara-7B is small, and none of its context needs to leave your personal device, it paves the way for personal and private agentic computing\" class=\"wp-image-1155975\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2025\/11\/Figure-3-scaled.png 2560w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2025\/11\/Figure-3-300x101.png 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2025\/11\/Figure-3-1024x346.png 1024w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2025\/11\/Figure-3-768x259.png 768w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2025\/11\/Figure-3-1536x519.png 1536w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2025\/11\/Figure-3-2048x691.png 2048w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2025\/11\/Figure-3-240x81.png 240w\" sizes=\"auto, (max-width: 2560px) 100vw, 2560px\" \/><figcaption class=\"wp-element-caption\">Figure&nbsp;3:<em>&nbsp;Operation of Fara-7B as a standalone, native computer use agent&nbsp;running on-device. 
Because Fara-7B is small, and none of its context needs to leave your personal device, it paves the way for personal and private agentic computing<\/em><\/figcaption><\/figure>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"evaluations\">Evaluations<\/h2>\n\n\n\n<p>We evaluate Fara-7B and comparable baselines on canonical public benchmarks including<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/arxiv.org\/abs\/2401.13919\" target=\"_blank\" rel=\"noopener noreferrer\"> WebVoyager<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>,<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/arxiv.org\/abs\/2504.01382\" target=\"_blank\" rel=\"noopener noreferrer\"> Online-Mind2Web<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, and<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/arxiv.org\/abs\/2506.02839\" target=\"_blank\" rel=\"noopener noreferrer\"> Deepshop<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, as well as a new benchmark we developed named<strong> WebTailBench<\/strong>, specifically focusing on 11 real-world task types underrepresented or missing in existing benchmarks like booking movie\/event tickets, restaurant reservations, comparing prices across retailers,&nbsp;applying for jobs,&nbsp;finding real estate, and more complex multi-step tasks.<\/p>\n\n\n\n<p>Evaluation of&nbsp;web&nbsp;agents can be tricky&nbsp;because the web is constantly&nbsp;changing,&nbsp;and many websites even block detected bots,&nbsp;which is why we&nbsp;developed&nbsp;a&nbsp;test&nbsp;harness&nbsp;that&nbsp;relies on&nbsp;<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/www.browserbase.com\/\" target=\"_blank\" rel=\"noopener noreferrer\">Browserbase<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>&nbsp;to standardize 
how browser sessions are managed.&nbsp;In Table 1 below, we report task success rate&nbsp;(%)&nbsp;as defined&nbsp;by each benchmark\u2019s official&nbsp;LLM-as-judge evaluator;&nbsp;WebTailBench&nbsp;success is&nbsp;computed using the same Task Verification pipeline that filtered&nbsp;our&nbsp;training data.&nbsp;We find that&nbsp;Fara-7B&nbsp;is&nbsp;state-of-the-art,&nbsp;outperforming&nbsp;even native computer use&nbsp;agents&nbsp;like UI-TARS-1.5-7B and much larger&nbsp;models&nbsp;like GPT-4o prompted to act as a computer use agent with&nbsp;<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/arxiv.org\/pdf\/2310.11441\" target=\"_blank\" rel=\"noopener noreferrer\">Set-Of-Marks<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>&nbsp;(SoM&nbsp;Agent).&nbsp;<\/p>\n\n\n\n<figure class=\"wp-block-table aligncenter\"><table class=\"has-fixed-layout\"><thead><tr><th colspan=\"2\"><\/th><th>WebVoyager<\/th><th>Online-Mind2Web<\/th><th>DeepShop<\/th><th>WebTailBench<\/th><\/tr><\/thead><tbody><tr><td rowspan=\"2\">SoM&nbsp;Agents<\/td><td>SoM&nbsp;Agent (GPT-4o)<\/td><td>65.1<\/td><td>34.6<\/td><td>16.0<\/td><td>30.0<\/td><\/tr><tr><td>GLM-4.1V-9B-Thinking<\/td><td>66.8<\/td><td>33.9<\/td><td>32.0<\/td><td>22.4<\/td><\/tr><tr><td rowspan=\"3\">Computer Use Models<\/td><td>OpenAI&nbsp;computer-use-preview<\/td><td>70.9<\/td><td>42.9<\/td><td>24.7<\/td><td>25.7<\/td><\/tr><tr><td>UI-TARS-1.5-7B<\/td><td>66.4<\/td><td>31.3<\/td><td>11.6<\/td><td>19.5<\/td><\/tr><tr><td><strong>Fara-7B<\/strong><\/td><td><strong>73.5<\/strong><\/td><td><strong>34.1<\/strong><\/td><td><strong>26.2<\/strong><\/td><td><strong>38.4<\/strong><\/td><\/tr><\/tbody><\/table><figcaption class=\"wp-element-caption\">Table 
1:&nbsp;<em>Performance comparison across four web benchmarks:&nbsp;WebVoyager, Online-Mind2Web,&nbsp;DeepShop, and&nbsp;our&nbsp;newly introduced WebTailBench.&nbsp;Results are reported as&nbsp;Task Success Rate \/ Accuracy&nbsp;(%) and are averaged over 3 runs.&nbsp;OpenAI computer-use-preview accessed November 2025 via the Responses API.<\/em><\/figcaption><\/figure>\n\n\n\n<p>In Figure 1, we&nbsp;expand on&nbsp;the&nbsp;WebVoyager&nbsp;results by giving each model up to three chances to complete a task, and report &#8220;pass@K&#8221;. On the x-axis, we&nbsp;plot the&nbsp;cost of running each model at market rates for the input\/output tokens consumed. Fara-7B establishes a new Pareto frontier, showing that on-device computer use agents are approaching the capabilities of frontier models.<\/p>\n\n\n\n<p>We partnered with a trusted external group,&nbsp;Browserbase, to independently evaluate Fara-7B using human annotators. The model achieved&nbsp;<strong>62%<\/strong> on&nbsp;WebVoyager (see detailed reports in&nbsp;the Browserbase&nbsp;blog&nbsp;<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/browserbase.com\/blog\/training-computer-use-models-in-the-real-world-with-microsoft\" target=\"_blank\" rel=\"noopener noreferrer\">here<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>). These results were generated in the same environment with identical settings and human verification of each task, making them directly comparable. Note that&nbsp;Browserbase\u2019s&nbsp;standard&nbsp;WebVoyager&nbsp;scores do not use retries when&nbsp;environment&nbsp;errors occur; the results referenced here include retries and should not be compared directly to the non-retry scores. 
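<\/p>\n\n\n\n<p>For readers unfamiliar with the retry metric: pass@K counts a task as solved if any of up to K independent attempts succeeds. A generic sketch of the aggregation (ours, not any benchmark\u2019s official harness):<\/p>

```python
def pass_at_k(attempts_per_task: list[list[bool]], k: int) -> float:
    # attempts_per_task[i] holds per-attempt success flags for task i;
    # a task counts as solved if any of its first k attempts succeeded.
    solved = sum(any(attempts[:k]) for attempts in attempts_per_task)
    return solved / len(attempts_per_task)

results = [
    [True, False, False],   # solved on the first try
    [False, False, True],   # solved on the third try
    [False, False, False],  # never solved
]
print(round(pass_at_k(results, 1), 3))  # 0.333
print(round(pass_at_k(results, 3), 3))  # 0.667
```

<p>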
Going forward, we are collaborating with&nbsp;Browserbase&nbsp;to host&nbsp;WebTailBench&nbsp;human evaluations to help the community build reliable and reproducible assessments for computer use agents.&nbsp;<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"safety\">Safety<\/h3>\n\n\n\n<p>Agents capable of operating computers present challenges&nbsp;distinct&nbsp;from&nbsp;chat-only models,&nbsp;including new&nbsp;avenues for&nbsp;user&nbsp;misuse, model misbehavior,&nbsp;unintended&nbsp;consequences of&nbsp;actions,&nbsp;and&nbsp;external&nbsp;risks such as prompt injections or online scams.&nbsp;CUAs&nbsp;take action with&nbsp;real-world consequences, so ensuring&nbsp;robust safety measures is essential to their responsible deployment.&nbsp;Transparency and user control sit at the core of Fara-7B\u2019s design. Although we have incorporated several safety measures, Fara-7B&nbsp;remains&nbsp;a research preview, and we continue to advance our approach to safety for computer use agents, an active area of work across the entire AI community.&nbsp;<\/p>\n\n\n\n<p>Fara-7B processes browser screenshots, user task instructions, and a history of actions taken during each session and collects only what is necessary to complete the user\u2019s requested task. No&nbsp;additional&nbsp;site data\u2014such as&nbsp;accessibility&nbsp;trees or external scaffolding\u2014is accessed; Fara-7B interacts with the computer in the same way a human would, relying solely on what is visible on the screen.<\/p>\n\n\n\n<p>All actions taken by the agent are logged and auditable, allowing users to review and&nbsp;monitor&nbsp;every step.&nbsp;For added safety, Fara-7B is intended to run in sandboxed environments, giving users full oversight and the ability to intervene or halt&nbsp;actions at any time. 
These safeguards ensure that privacy, transparency, and user control remain at the core of every interaction.<\/p>\n\n\n\n<p>To&nbsp;address&nbsp;misuse, we trained Fara-7B on a mixture of public safety data and internally generated tasks that it&nbsp;ought to refuse&nbsp;based on&nbsp;Microsoft\u2019s Responsible AI Policy.&nbsp;We evaluated&nbsp;Fara-7B\u2019s ability to refuse harmful tasks&nbsp;on&nbsp;<strong>WebTailBench-Refusals<\/strong>,&nbsp;which consists of&nbsp;111 red-teaming tasks; the model achieved a high refusal rate&nbsp;of 82%.&nbsp;The&nbsp;model&nbsp;also&nbsp;underwent&nbsp;Microsoft\u2019s&nbsp;rigorous&nbsp;red teaming process, where we focused on the model rejecting harmful and risky tasks, such as harmful content, jailbreaking attempts, ungrounded&nbsp;responses,&nbsp;and prompt injections. For further details, check out our <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/aka.ms\/fara-techreport\">technical report<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>.<\/p>\n\n\n\n<p>To mitigate the risk of Fara-7B taking unintended actions,&nbsp;all of&nbsp;Fara-7B\u2019s&nbsp;training data enforces both recognizing and stopping at \u201cCritical Points\u201d when executing a task. A Critical Point&nbsp;(see&nbsp;<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/cdn.openai.com\/operator_system_card.pdf\" target=\"_blank\" rel=\"noopener noreferrer\">Operator System Card<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>)&nbsp;is any situation that requires the user&#8217;s personal data or consent before engaging in a transaction or irreversible action like sending an email. 
Upon reaching a Critical Point, Fara-7B should respond by informing the user that it cannot proceed without their consent.<\/p>\n\n\n\n<p>For guidance on using our model safely, and the security considerations to keep in mind when doing so, please refer to our <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/huggingface.co\/microsoft\/Fara-7B\">model card<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"how-to-use\">How to use<\/h3>\n\n\n\n<p>Fara-7B is available on <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/ai.azure.com\/explore\/models\/Fara-7B\/version\/1\/registry\/azureml-msr?tid=72f988bf-86f1-41af-91ab-2d7cd011db47\" target=\"_blank\" rel=\"noopener noreferrer\">Microsoft Foundry<span class=\"sr-only\"> (opens in new tab)<\/span><\/a> and <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/huggingface.co\/microsoft\/Fara-7B\" target=\"_blank\" rel=\"noopener noreferrer\">Hugging Face<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>. We are also releasing the implementation of Fara-7B in Magentic-UI, so 
that users can try it in a contained environment using the provided inference code. Additionally, users can download the model for Copilot+ PCs powered by Windows 11 from the AI Toolkit in VS Code and run it entirely on-device, taking advantage of NPU hardware acceleration.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"looking-forward\">Looking forward<\/h3>\n\n\n\n<p>Our current release is an experimental CUA model that achieves state-of-the-art results for its size using only supervised fine-tuning. We believe even stronger CUA models capable of running on-device are possible through improved multimodal base models and through reinforcement learning in live and sandboxed environments. These early days are about learning from the community and driving real-world experimentation to shape what comes next. If you\u2019d like to join us and help shape the future of SLMs, please apply for <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/lab\/ai-frontiers\/opportunities\/\">open roles<\/a>.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"acknowledgements\">Acknowledgements<\/h2>\n\n\n\n<p>We thank Gustavo de Rosa, Adam Fourney, Michael Harrison, Rafah Hosn, Neel Joshi, Ece Kamar, John Langford, Maya Murad, Sidhartha Sen, Pratyusha Sharma, and Lili Wu for their valuable help, insightful discussions, and continued support throughout this work.<\/p>\n\n\n\n<p>We also thank Pashmina Cameron, Karthik Vijayan, Vicente Rivera, Chris Dern, Sayan Shaw, Sunghoon Choi, Andrey Rybalchenko, and Vivek Pradeep for their efforts in making the model available on Copilot+ PCs through the AI Toolkit.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Fara-7B is our first agentic small language model for computer use. 
This experimental model includes robust safety measures to aid responsible deployment. Despite its size, Fara-7B holds its own against larger, more resource-intensive agentic systems.<\/p>\n","protected":false},"author":43518,"featured_media":1156197,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"msr-url-field":"","msr-podcast-episode":"","msrModifiedDate":"","msrModifiedDateEnabled":false,"ep_exclude_from_search":false,"_classifai_error":"","msr-author-ordering":[{"type":"user_nicename","value":"Ahmed Awadallah","user_id":"31979"},{"type":"user_nicename","value":"Akshay Nambi","user_id":"38169"},{"type":"user_nicename","value":"Alexey Taymanov","user_id":"37616"},{"type":"user_nicename","value":"Aravind Rajeswaran","user_id":"44035"},{"type":"user_nicename","value":"Corby Rosset","user_id":"41997"},{"type":"user_nicename","value":"Hussein Mozannar","user_id":"43671"},{"type":"user_nicename","value":"Spencer Whitehead","user_id":"44037"},{"type":"user_nicename","value":"Vibhav Vineet","user_id":"37751"},{"type":"user_nicename","value":"Yash Lara","user_id":"43341"},{"type":"user_nicename","value":"Yash Pandya","user_id":"44036"},{"type":"user_nicename","value":"Andrew 
Zhao","user_id":"44038"}],"msr_hide_image_in_river":null,"footnotes":""},"categories":[1],"tags":[],"research-area":[13556],"msr-region":[],"msr-event-type":[],"msr-locale":[268875],"msr-post-option":[269148,243984,269142,269145],"msr-impact-theme":[],"msr-promo-type":[],"msr-podcast-series":[],"class_list":["post-1155843","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-research-blog","msr-research-area-artificial-intelligence","msr-locale-en_us","msr-post-option-approved-for-river","msr-post-option-blog-homepage-featured","msr-post-option-include-in-river","msr-post-option-pinned-for-river"],"msr_event_details":{"start":"","end":"","location":""},"podcast_url":"","podcast_episode":"","msr_research_lab":[992148],"msr_impact_theme":[],"related-publications":[],"related-downloads":[],"related-videos":[],"related-academic-programs":[],"related-groups":[],"related-projects":[],"related-events":[],"related-researchers":[{"type":"user_nicename","value":"Ahmed Awadallah","user_id":31979,"display_name":"Ahmed Awadallah","author_link":"<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/hassanam\/\" aria-label=\"Visit the profile page for Ahmed Awadallah\">Ahmed Awadallah<\/a>","is_active":false,"last_first":"Awadallah, Ahmed","people_section":0,"alias":"hassanam"},{"type":"user_nicename","value":"Akshay Nambi","user_id":38169,"display_name":"Akshay Nambi","author_link":"<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/akshayn\/\" aria-label=\"Visit the profile page for Akshay Nambi\">Akshay Nambi<\/a>","is_active":false,"last_first":"Nambi, Akshay","people_section":0,"alias":"akshayn"},{"type":"user_nicename","value":"Alexey Taymanov","user_id":37616,"display_name":"Alexey Taymanov","author_link":"<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/ataymano\/\" aria-label=\"Visit the profile page for Alexey Taymanov\">Alexey Taymanov<\/a>","is_active":false,"last_first":"Taymanov, 
Alexey","people_section":0,"alias":"ataymano"},{"type":"user_nicename","value":"Aravind Rajeswaran","user_id":44035,"display_name":"Aravind Rajeswaran","author_link":"<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/arrajeswaran\/\" aria-label=\"Visit the profile page for Aravind Rajeswaran\">Aravind Rajeswaran<\/a>","is_active":false,"last_first":"Rajeswaran, Aravind","people_section":0,"alias":"arrajeswaran"},{"type":"user_nicename","value":"Corby Rosset","user_id":41997,"display_name":"Corby Rosset","author_link":"<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/corbyrosset\/\" aria-label=\"Visit the profile page for Corby Rosset\">Corby Rosset<\/a>","is_active":false,"last_first":"Rosset, Corby","people_section":0,"alias":"corbyrosset"},{"type":"user_nicename","value":"Hussein Mozannar","user_id":43671,"display_name":"Hussein Mozannar","author_link":"<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/hmozannar\/\" aria-label=\"Visit the profile page for Hussein Mozannar\">Hussein Mozannar<\/a>","is_active":false,"last_first":"Mozannar, Hussein","people_section":0,"alias":"hmozannar"},{"type":"user_nicename","value":"Spencer Whitehead","user_id":44037,"display_name":"Spencer Whitehead","author_link":"<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/spwhitehead\/\" aria-label=\"Visit the profile page for Spencer Whitehead\">Spencer Whitehead<\/a>","is_active":false,"last_first":"Whitehead, Spencer","people_section":0,"alias":"spwhitehead"},{"type":"user_nicename","value":"Vibhav Vineet","user_id":37751,"display_name":"Vibhav Vineet","author_link":"<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/vivineet\/\" aria-label=\"Visit the profile page for Vibhav Vineet\">Vibhav Vineet<\/a>","is_active":false,"last_first":"Vineet, Vibhav","people_section":0,"alias":"vivineet"},{"type":"user_nicename","value":"Yash Lara","user_id":43341,"display_name":"Yash Lara","author_link":"<a 
href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/yashlara\/\" aria-label=\"Visit the profile page for Yash Lara\">Yash Lara<\/a>","is_active":false,"last_first":"Lara, Yash","people_section":0,"alias":"yashlara"},{"type":"user_nicename","value":"Yash Pandya","user_id":44036,"display_name":"Yash Pandya","author_link":"<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/yashpandya\/\" aria-label=\"Visit the profile page for Yash Pandya\">Yash Pandya<\/a>","is_active":false,"last_first":"Pandya, Yash","people_section":0,"alias":"yashpandya"},{"type":"user_nicename","value":"Andrew Zhao","user_id":44038,"display_name":"Andrew Zhao","author_link":"<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/andrewzhao\/\" aria-label=\"Visit the profile page for Andrew Zhao\">Andrew Zhao<\/a>","is_active":false,"last_first":"Zhao, Andrew","people_section":0,"alias":"andrewzhao"}],"msr_type":"Post","featured_image_thumbnail":"<img width=\"960\" height=\"540\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2025\/11\/Fara7B-BlogHeroFeature-1400x788_NEW-960x540.jpg\" class=\"img-object-cover\" alt=\"Three white line icons on a blue-to-green gradient background: a computer monitor with a globe symbol on the left, a cursor arrow with click lines in the center, and a computer mouse outline on the right.\" decoding=\"async\" loading=\"lazy\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2025\/11\/Fara7B-BlogHeroFeature-1400x788_NEW-960x540.jpg 960w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2025\/11\/Fara7B-BlogHeroFeature-1400x788_NEW-300x169.jpg 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2025\/11\/Fara7B-BlogHeroFeature-1400x788_NEW-1024x576.jpg 1024w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2025\/11\/Fara7B-BlogHeroFeature-1400x788_NEW-768x432.jpg 768w, 
https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2025\/11\/Fara7B-BlogHeroFeature-1400x788_NEW-1536x865.jpg 1536w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2025\/11\/Fara7B-BlogHeroFeature-1400x788_NEW-2048x1153.jpg 2048w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2025\/11\/Fara7B-BlogHeroFeature-1400x788_NEW-1066x600.jpg 1066w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2025\/11\/Fara7B-BlogHeroFeature-1400x788_NEW-655x368.jpg 655w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2025\/11\/Fara7B-BlogHeroFeature-1400x788_NEW-240x135.jpg 240w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2025\/11\/Fara7B-BlogHeroFeature-1400x788_NEW-640x360.jpg 640w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2025\/11\/Fara7B-BlogHeroFeature-1400x788_NEW-1280x720.jpg 1280w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2025\/11\/Fara7B-BlogHeroFeature-1400x788_NEW-1920x1080.jpg 1920w\" sizes=\"auto, (max-width: 960px) 100vw, 960px\" \/>","byline":"","formattedDate":"November 24, 2025","formattedExcerpt":"Fara-7B is our first agentic small language model for computer use. This experimental model includes robust safety measures to aid responsible deployment. 
Despite its size, Fara-7B holds its own against larger, more resource-intensive agentic systems.","locale":{"slug":"en_us","name":"English","native":"","english":"English"},"_links":{"self":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/posts\/1155843","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/users\/43518"}],"replies":[{"embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/comments?post=1155843"}],"version-history":[{"count":117,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/posts\/1155843\/revisions"}],"predecessor-version":[{"id":1158305,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/posts\/1155843\/revisions\/1158305"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/media\/1156197"}],"wp:attachment":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/media?parent=1155843"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/categories?post=1155843"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/tags?post=1155843"},{"taxonomy":"msr-research-area","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/research-area?post=1155843"},{"taxonomy":"msr-region","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-region?post=1155843"},{"taxonomy":"msr-event-type","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event-type?post=1155843"},{"taxonomy":"msr
-locale","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-locale?post=1155843"},{"taxonomy":"msr-post-option","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-post-option?post=1155843"},{"taxonomy":"msr-impact-theme","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-impact-theme?post=1155843"},{"taxonomy":"msr-promo-type","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-promo-type?post=1155843"},{"taxonomy":"msr-podcast-series","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-podcast-series?post=1155843"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}