{"id":1168790,"date":"2026-04-22T09:25:30","date_gmt":"2026-04-22T16:25:30","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?p=1168790"},"modified":"2026-04-22T11:48:02","modified_gmt":"2026-04-22T18:48:02","slug":"autoadapt-automated-domain-adaptation-for-large-language-models","status":"publish","type":"post","link":"https:\/\/www.microsoft.com\/en-us\/research\/blog\/autoadapt-automated-domain-adaptation-for-large-language-models\/","title":{"rendered":"AutoAdapt: Automated domain adaptation for large language\u00a0models"},"content":{"rendered":"\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"1400\" height=\"788\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2026\/04\/AutoAdapt-BlogHeroFeature-1400x788-1.jpg\" alt=\"Three white line icons in a row; a document list, a workflow, and process wheel against a blue and purple gradient background.\" class=\"wp-image-1169326\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2026\/04\/AutoAdapt-BlogHeroFeature-1400x788-1.jpg 1400w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2026\/04\/AutoAdapt-BlogHeroFeature-1400x788-1-300x169.jpg 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2026\/04\/AutoAdapt-BlogHeroFeature-1400x788-1-1024x576.jpg 1024w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2026\/04\/AutoAdapt-BlogHeroFeature-1400x788-1-768x432.jpg 768w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2026\/04\/AutoAdapt-BlogHeroFeature-1400x788-1-1066x600.jpg 1066w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2026\/04\/AutoAdapt-BlogHeroFeature-1400x788-1-655x368.jpg 655w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2026\/04\/AutoAdapt-BlogHeroFeature-1400x788-1-240x135.jpg 240w, 
https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2026\/04\/AutoAdapt-BlogHeroFeature-1400x788-1-640x360.jpg 640w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2026\/04\/AutoAdapt-BlogHeroFeature-1400x788-1-960x540.jpg 960w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2026\/04\/AutoAdapt-BlogHeroFeature-1400x788-1-1280x720.jpg 1280w\" sizes=\"auto, (max-width: 1400px) 100vw, 1400px\" \/><\/figure>\n\n\n\n<div style=\"padding-bottom:0; padding-top:0\" class=\"wp-block-msr-immersive-section alignfull row wp-block-msr-immersive-section\">\n\t\n\t<div class=\"container\">\n\t\t<div class=\"wp-block-msr-immersive-section__inner wp-block-msr-immersive-section__inner--narrow\">\n\t\t\t<div class=\"wp-block-columns mb-10 pb-1 pr-1 is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\" style=\"box-shadow:var(--wp--preset--shadow--outlined)\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<h2 class=\"wp-block-heading h3\" id=\"at-a-glance\">At a glance<\/h2>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Problem<\/strong>:&nbsp;Adapting large language models to specialized, high-stakes domains is slow, expensive, and hard to reproduce.&nbsp;<\/li>\n\n\n\n<li><strong>What we built<\/strong>:&nbsp;AutoAdapt&nbsp;automates&nbsp;planning, strategy selection&nbsp;(e.g., RAG vs. 
fine-tuning), and tuning under real deployment constraints.&nbsp;<\/li>\n\n\n\n<li><strong>How it works<\/strong>:&nbsp;\u202fA structured configuration graph&nbsp;maps the&nbsp;full scope&nbsp;of the&nbsp;adaptation&nbsp;process,\u202fan agentic planner&nbsp;selects and sequences the right steps,\u202fand&nbsp;a&nbsp;budget-aware optimization loop (AutoRefine)&nbsp;refines the process within defined constraints.\u202f&nbsp;<\/li>\n\n\n\n<li><strong>Why it matters<\/strong>:&nbsp;The result is faster,&nbsp;automated,&nbsp;more reliable domain adaptation that turns weeks of&nbsp;manual&nbsp;iteration into repeatable pipelines.&nbsp;<\/li>\n<\/ul>\n<\/div>\n<\/div>\t\t<\/div>\n\t<\/div>\n\n\t<\/div>\n\n\n\n<p>Deploying large language models (LLMs) in real-world, high-stakes settings is harder than it should be. In high-stakes settings like law, medicine, and cloud incident response, performance and reliability can quickly break down because adapting models to domain-specific requirements is a slow and manual process that is difficult to reproduce.<\/p>\n\n\n\n<p>The core challenge is domain adaptation, which entails turning a general-purpose model into one that consistently follows domain rules, draws on the right knowledge, and meets constraints such as latency, privacy, and cost. Today, that process typically involves guesswork, choosing among approaches like retrieval-augmented generation (RAG) and fine-tuning, tuning hyperparameters, and iterating through evaluations with no clear path to a good outcome. An operations team responding to an outage can&#8217;t afford a model that drifts from domain requirements or a tuning process that takes weeks with no guarantee of a reproducible result.<\/p>\n\n\n\n<p>To tackle this, we&#8217;re pleased to introduce AutoAdapt. 
In our paper, \u201c<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/autoadapt-an-automated-domain-adaptation-framework-for-llms\/\" type=\"msr-research-item\" id=\"1163986\">AutoAdapt: An Automated Domain Adaptation Framework for Large Language Models<\/a>,\u201d we describe an end-to-end, constraint-aware framework for domain adaptation. Given a task objective, available domain data, and practical requirements like accuracy, latency, hardware, and budget, AutoAdapt plans a valid adaptation pipeline, selecting among approaches like RAG and multiple fine-tuning methods, and tunes key hyperparameters using a budget-aware refinement loop. The result is an executable, reproducible workflow for building domain-ready models more quickly and consistently, helping make LLMs dependable in real-world settings.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"how-it-works\">How it works<\/h2>\n\n\n\n<p>AutoAdapt starts from a practical observation: teams don\u2019t just need a better prompt or more data, they need a decision process that reliably maps a task, its domain data, and real constraints to an approach that works. To do this, AutoAdapt treats domain adaptation as a constrained planning problem. Given an objective provided in natural language, dataset size and format, and limits on latency, hardware, privacy, and cost, it provides an end-to-end pipeline that teams can execute and deploy.<\/p>\n\n\n\n<p>Domain adaptation often feels like trial and error because the design space is large and complex. Teams must choose among approaches such as RAG, supervised fine-tuning, parameter-efficient methods (such as LoRA), and alignment steps, each with many hyperparameters. 
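To make the scale of this design space concrete, the interplay between adaptation choices and validity rules can be sketched as a tiny configuration space with pruning. The stage names, options, and the single validity rule below are our own illustrative inventions, not AutoAdapt's actual ACG implementation:

```python
from itertools import product

# Toy adaptation-configuration space. Stage names, options, and the
# validity rule are hypothetical illustrations, not AutoAdapt internals.
OPTIONS = {
    "strategy": ["rag", "fine_tuning"],
    "method": ["full_sft", "lora", "none"],
    "alignment": ["dpo", "none"],
}

def is_valid(pipeline):
    # Example rule: RAG performs no weight updates, so it takes no
    # training method; fine-tuning must pick one.
    if pipeline["strategy"] == "rag":
        return pipeline["method"] == "none"
    return pipeline["method"] != "none"

def valid_pipelines():
    # Enumerate all combinations, keeping only those that satisfy the rule.
    keys = list(OPTIONS)
    for combo in product(*OPTIONS.values()):
        candidate = dict(zip(keys, combo))
        if is_valid(candidate):
            yield candidate

pipelines = list(valid_pipelines())  # 6 of the 12 raw combinations survive
```

Even this toy space is halved by one validity rule; a realistic space with many stages and hyperparameters is vastly larger, which is why pruning invalid combinations before any search begins matters.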
These choices interact in nonobvious ways, and not all combinations are valid, making it difficult to identify a reliable strategy. The problem is compounded by the high cost of LLM training, which limits how many configurations can be explored.<\/p>\n\n\n\n<p>AutoAdapt addresses this with the Adaptation Configuration Graph (ACG), a structured representation of the system&#8217;s configuration space that enables efficient search while guaranteeing valid pipelines.<\/p>\n\n\n\n<p>Building on the ACG, AutoAdapt uses a planning agent to make and justify decisions. It proposes strategies, evaluates them against user requirements, and iterates until the plan is feasible and well-grounded. Rather than optimizing in an unconstrained black box, AutoAdapt roots each decision in best practices and explicit constraints, producing an executable workflow with parameter ranges.<\/p>\n\n\n\n<p>Finally, AutoAdapt introduces AutoRefine, a budget-aware refinement loop that optimizes hyperparameters by strategically selecting which experiments to run next, even under limited feedback. AutoRefine replaces weeks of manual tuning with a more disciplined, reproducible process that is easier to audit and compare across projects. In real-world systems such as healthcare documentation, legal workflows, or incident response, this level of rigor is essential. Figure 1 illustrates the end-to-end workflow.<\/p>\n\n\n\n<figure class=\"wp-block-image aligncenter size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"1400\" height=\"683\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2026\/04\/AutoAdapt_overview_fig1_1400px.png\" alt=\"A workflow diagram illustrating how a user\u2019s task description and constraints are processed to automatically produce a deployable language model. User inputs are analyzed and refined through multiple stages, including multi\u2011agent proposal and critique, best\u2011practice consultation, and iterative pipeline refinement. 
These stages evaluate task requirements, data choices, and model configurations while verifying user constraints. The process concludes with an executable plan that generates a final model meeting the specified objectives and constraints.\" class=\"wp-image-1168794\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2026\/04\/AutoAdapt_overview_fig1_1400px.png 1400w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2026\/04\/AutoAdapt_overview_fig1_1400px-300x146.png 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2026\/04\/AutoAdapt_overview_fig1_1400px-1024x500.png 1024w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2026\/04\/AutoAdapt_overview_fig1_1400px-768x375.png 768w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2026\/04\/AutoAdapt_overview_fig1_1400px-240x117.png 240w\" sizes=\"auto, (max-width: 1400px) 100vw, 1400px\" \/><figcaption class=\"wp-element-caption\">Figure 1. The AutoAdapt workflow, showing how user inputs flow through planning and refinement to produce a deployable model. <\/figcaption><\/figure>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"evaluation\">Evaluation<\/h2>\n\n\n\n<p>In experiments, AutoAdapt consistently identifies effective adaptation strategies and delivers improvements across a range of benchmark and real-world tasks, including reasoning, question answering, coding, classification, and cloud-incident diagnosis. It uses constraint-aware planning and budgeted refinement to find better-performing configurations with minimal added time and cost, making the process practical for production teams. 
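The budget-aware refinement idea can be illustrated with a toy successive-halving loop: score every surviving candidate, keep the better half, and repeat until the budget or the candidate set runs out. This sketch is our own construction for intuition only, not the AutoRefine algorithm from the paper:

```python
def refine(configs, evaluate, total_budget):
    """Pick a configuration under a fixed evaluation budget by
    repeatedly halving the candidate set (toy successive halving)."""
    candidates, spent = list(configs), 0
    while len(candidates) > 1 and spent < total_budget:
        scored = []
        for cfg in candidates:
            if spent >= total_budget:
                break
            scored.append((evaluate(cfg), cfg))  # one budget unit per eval
            spent += 1
        scored.sort(key=lambda pair: pair[0], reverse=True)
        # Keep the top half (at least one candidate).
        candidates = [cfg for _, cfg in scored[: max(1, len(scored) // 2)]]
    return candidates[0], spent

# Hypothetical "learning rate" sweep: the score function peaks at 0.3.
best, used = refine(
    configs=[0.1, 0.2, 0.3, 0.4],
    evaluate=lambda lr: -(lr - 0.3) ** 2,
    total_budget=10,
)
```

The point of the sketch is the budget accounting: every evaluation is charged against a fixed allowance, so the loop degrades gracefully when feedback is limited rather than running an open-ended sweep.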
Figures 2 and 3 show aggregate performance against competitive baselines.<\/p>\n\n\n\n<figure class=\"wp-block-image aligncenter size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"1400\" height=\"513\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2026\/04\/AutoAdapt_results_fig3_1400px.png\" alt=\"Three radar plots compare multiple methods across several datasets using success rate, normalized performance score, and a cumulative metric. In all three plots, the AutoAdapt method consistently exhibits larger coverage across most tasks, indicating stronger overall performance. Baseline methods show more uneven profiles, with strengths limited to specific datasets or metrics. The visualization highlights AutoAdapt\u2019s robust and consistent advantage relative to existing approaches.\" class=\"wp-image-1168793\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2026\/04\/AutoAdapt_results_fig3_1400px.png 1400w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2026\/04\/AutoAdapt_results_fig3_1400px-300x110.png 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2026\/04\/AutoAdapt_results_fig3_1400px-1024x375.png 1024w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2026\/04\/AutoAdapt_results_fig3_1400px-768x281.png 768w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2026\/04\/AutoAdapt_results_fig3_1400px-240x88.png 240w\" sizes=\"auto, (max-width: 1400px) 100vw, 1400px\" \/><figcaption class=\"wp-element-caption\">Figure 2. Success rate (SR), normalized performance score (NPS), and cumulative score (CS) comparing AutoAdapt with baseline methods across datasets. 
Higher scores indicate better performance, with AutoAdapt outperforming state-of-the-art baselines.<\/figcaption><\/figure>\n\n\n\n<figure class=\"wp-block-image aligncenter size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"1400\" height=\"288\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2026\/04\/AutoAdapt_cost_time_overhead_fig2_1400px.png\" alt=\"Two bar charts compare time and cost overheads for AutoAdapt relative to a default baseline across multiple datasets. AutoAdapt introduces only a small additional time requirement, averaging around half an hour, while achieving noticeable performance improvements. The cost comparison shows a similarly modest increase, with average extra cost remaining low across tasks. Overall, the figure indicates that AutoAdapt delivers performance gains with minimal additional time and financial overhead.\" class=\"wp-image-1168795\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2026\/04\/AutoAdapt_cost_time_overhead_fig2_1400px.png 1400w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2026\/04\/AutoAdapt_cost_time_overhead_fig2_1400px-300x62.png 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2026\/04\/AutoAdapt_cost_time_overhead_fig2_1400px-1024x211.png 1024w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2026\/04\/AutoAdapt_cost_time_overhead_fig2_1400px-768x158.png 768w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2026\/04\/AutoAdapt_cost_time_overhead_fig2_1400px-240x49.png 240w\" sizes=\"auto, (max-width: 1400px) 100vw, 1400px\" \/><figcaption class=\"wp-element-caption\">Figure 3. 
AutoAdapt achieves performance gains with minimal overhead, approximately 30 minutes of additional time and $4 in additional cost.<\/figcaption><\/figure>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"implications-and-looking-forward\">Implications and looking forward<\/h2>\n\n\n\n<p>The broader significance of AutoAdapt is that domain adaptation can become an engineering discipline, not an ad hoc process. By making key choices explicit\u2014what to adapt, how to adapt it, and which constraints the system must satisfy\u2014AutoAdapt helps teams reach results faster, reproduce them more easily, and audit them more rigorously. This shift is especially important in domains where drift from pretrained knowledge is common and failures are costly. When LLMs are used to draft clinical notes, triage support incidents, or summarize regulatory language, organizations need a clear, repeatable path from data to models that behave predictably under latency, privacy, and budget requirements.<\/p>\n\n\n\n<p>Because domain adaptation is a prerequisite for deploying LLMs in real-world settings, we\u2019re\u202fmaking the&nbsp;AutoAdapt&nbsp;framework&nbsp;<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/github.com\/microsoft\/AutoAdapt\" target=\"_blank\" rel=\"noopener noreferrer\">open&nbsp;source<span class=\"sr-only\"> (opens in new tab)<\/span><\/a> to give teams a concrete starting point. 
The <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/github.com\/microsoft\/AutoAdapt?tab=readme-ov-file#installation-and-quick-start\">README<span class=\"sr-only\"> (opens in new tab)<\/span><\/a> file provides installation and quick-start instructions.<\/p>\n\n\n\n<figure class=\"wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio\"><div class=\"wp-block-embed__wrapper\">\n<div class=\"yt-consent-placeholder\" role=\"region\" aria-label=\"Video playback requires cookie consent\" data-video-id=\"z3JMQqDwCOc\" data-poster=\"https:\/\/img.youtube.com\/vi\/z3JMQqDwCOc\/maxresdefault.jpg\"><iframe aria-hidden=\"true\" tabindex=\"-1\" title=\"AutoAdapt demo\" width=\"500\" height=\"281\" data-src=\"https:\/\/www.youtube-nocookie.com\/embed\/z3JMQqDwCOc?feature=oembed&rel=0&enablejsapi=1\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" referrerpolicy=\"strict-origin-when-cross-origin\" allowfullscreen><\/iframe><div class=\"yt-consent-placeholder__overlay\"><button class=\"yt-consent-placeholder__play\"><svg width=\"42\" height=\"42\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\" aria-hidden=\"true\" focusable=\"false\"><g fill=\"none\" fill-rule=\"evenodd\"><circle fill=\"#000\" opacity=\".556\" cx=\"21\" cy=\"21\" r=\"21\"\/><path stroke=\"#FFF\" d=\"M27.5 22l-12 8.5v-17z\"\/><\/g><\/svg><span class=\"yt-consent-placeholder__label\">Video playback requires cookie consent<\/span><\/button><\/div><\/div>\n<\/div><\/figure>\n","protected":false},"excerpt":{"rendered":"<p>Deploying large language models (LLMs) in real-world, high-stakes settings is harder than it should be. 
In high-stakes settings like law, medicine, and cloud incident response, performance and reliability can quickly break down because adapting models to domain-specific requirements is a slow and manual process that is difficult to reproduce. The core challenge is domain adaptation, [&hellip;]<\/p>\n","protected":false},"author":43868,"featured_media":1169326,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"msr-url-field":"","msr-podcast-episode":"","msrModifiedDate":"","msrModifiedDateEnabled":false,"ep_exclude_from_search":false,"_classifai_error":"","msr-author-ordering":null,"msr_hide_image_in_river":0,"footnotes":""},"categories":[1],"tags":[],"research-area":[13556],"msr-region":[],"msr-event-type":[],"msr-locale":[268875],"msr-post-option":[243984],"msr-impact-theme":[],"msr-promo-type":[],"msr-podcast-series":[],"class_list":["post-1168790","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-research-blog","msr-research-area-artificial-intelligence","msr-locale-en_us","msr-post-option-blog-homepage-featured"],"msr_event_details":{"start":"","end":"","location":""},"podcast_url":"","podcast_episode":"","msr_research_lab":[199562,199565],"msr_impact_theme":[],"related-publications":[],"related-downloads":[],"related-videos":[],"related-academic-programs":[],"related-groups":[793670,811276],"related-projects":[],"related-events":[],"related-researchers":[{"type":"guest","value":"sidharth-sinha","user_id":"1168798","display_name":"Sidharth Sinha","author_link":"<a href=\"https:\/\/www.linkedin.com\/in\/sidharth-sinha-8790171b9\/\" aria-label=\"Visit the profile page for Sidharth Sinha\">Sidharth Sinha<\/a>","is_active":true,"last_first":"Sinha, Sidharth","people_section":0,"alias":"sidharth-sinha"},{"type":"user_nicename","value":"Anson Bastos","user_id":43958,"display_name":"Anson Bastos","author_link":"<a 
href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/ansonbastos\/\" aria-label=\"Visit the profile page for Anson Bastos\">Anson Bastos<\/a>","is_active":false,"last_first":"Bastos, Anson","people_section":0,"alias":"ansonbastos"},{"type":"user_nicename","value":"Xuchao Zhang","user_id":42045,"display_name":"Xuchao Zhang","author_link":"<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/xuchaozhang\/\" aria-label=\"Visit the profile page for Xuchao Zhang\">Xuchao Zhang<\/a>","is_active":false,"last_first":"Zhang, Xuchao","people_section":0,"alias":"xuchaozhang"},{"type":"user_nicename","value":"Akshay Nambi","user_id":38169,"display_name":"Akshay Nambi","author_link":"<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/akshayn\/\" aria-label=\"Visit the profile page for Akshay Nambi\">Akshay Nambi<\/a>","is_active":false,"last_first":"Nambi, Akshay","people_section":0,"alias":"akshayn"},{"type":"user_nicename","value":"Rujia Wang","user_id":42549,"display_name":"Rujia Wang","author_link":"<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/rujiawang\/\" aria-label=\"Visit the profile page for Rujia Wang\">Rujia Wang<\/a>","is_active":false,"last_first":"Wang, Rujia","people_section":0,"alias":"rujiawang"},{"type":"user_nicename","value":"Chetan Bansal","user_id":31394,"display_name":"Chetan Bansal","author_link":"<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/chetanb\/\" aria-label=\"Visit the profile page for Chetan Bansal\">Chetan Bansal<\/a>","is_active":false,"last_first":"Bansal, Chetan","people_section":0,"alias":"chetanb"}],"msr_type":"Post","featured_image_thumbnail":"<img width=\"960\" height=\"540\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2026\/04\/AutoAdapt-BlogHeroFeature-1400x788-1-960x540.jpg\" class=\"img-object-cover\" alt=\"Three white line icons in a row; a document list, a workflow, and process wheel against a blue and purple gradient background.\" 
decoding=\"async\" loading=\"lazy\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2026\/04\/AutoAdapt-BlogHeroFeature-1400x788-1-960x540.jpg 960w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2026\/04\/AutoAdapt-BlogHeroFeature-1400x788-1-300x169.jpg 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2026\/04\/AutoAdapt-BlogHeroFeature-1400x788-1-1024x576.jpg 1024w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2026\/04\/AutoAdapt-BlogHeroFeature-1400x788-1-768x432.jpg 768w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2026\/04\/AutoAdapt-BlogHeroFeature-1400x788-1-1066x600.jpg 1066w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2026\/04\/AutoAdapt-BlogHeroFeature-1400x788-1-655x368.jpg 655w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2026\/04\/AutoAdapt-BlogHeroFeature-1400x788-1-240x135.jpg 240w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2026\/04\/AutoAdapt-BlogHeroFeature-1400x788-1-640x360.jpg 640w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2026\/04\/AutoAdapt-BlogHeroFeature-1400x788-1-1280x720.jpg 1280w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2026\/04\/AutoAdapt-BlogHeroFeature-1400x788-1.jpg 1400w\" sizes=\"auto, (max-width: 960px) 100vw, 960px\" \/>","byline":"","formattedDate":"April 22, 2026","formattedExcerpt":"Deploying large language models (LLMs) in real-world, high-stakes settings is harder than it should be. 
In high-stakes settings like law, medicine, and cloud incident response, performance and reliability can quickly break down because adapting models to domain-specific requirements is a slow and manual process that&hellip;","locale":{"slug":"en_us","name":"English","native":"","english":"English"},"_links":{"self":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/posts\/1168790","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/users\/43868"}],"replies":[{"embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/comments?post=1168790"}],"version-history":[{"count":16,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/posts\/1168790\/revisions"}],"predecessor-version":[{"id":1169416,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/posts\/1168790\/revisions\/1169416"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/media\/1169326"}],"wp:attachment":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/media?parent=1168790"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/categories?post=1168790"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/tags?post=1168790"},{"taxonomy":"msr-research-area","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/research-area?post=1168790"},{"taxonomy":"msr-region","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-region?post=1168790"},{"taxonomy":"msr-event-type","embeddable
":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event-type?post=1168790"},{"taxonomy":"msr-locale","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-locale?post=1168790"},{"taxonomy":"msr-post-option","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-post-option?post=1168790"},{"taxonomy":"msr-impact-theme","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-impact-theme?post=1168790"},{"taxonomy":"msr-promo-type","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-promo-type?post=1168790"},{"taxonomy":"msr-podcast-series","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-podcast-series?post=1168790"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}