{"id":917625,"date":"2023-02-27T09:00:00","date_gmt":"2023-02-27T17:00:00","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?p=917625"},"modified":"2023-03-03T12:30:14","modified_gmt":"2023-03-03T20:30:14","slug":"responsible-ai-the-research-collaboration-behind-new-open-source-tools-offered-by-microsoft","status":"publish","type":"post","link":"https:\/\/www.microsoft.com\/en-us\/research\/blog\/responsible-ai-the-research-collaboration-behind-new-open-source-tools-offered-by-microsoft\/","title":{"rendered":"Responsible AI: The research collaboration behind new open-source tools offered by Microsoft"},"content":{"rendered":"\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"1400\" height=\"788\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/02\/RAI_blog-2023Feb_hero_1400x788.jpg\" alt=\"Flowchart showing how responsible AI tools are used together for targeted debugging of machine learning models: the Responsible AI Dashboard for the identification of failures; followed by the Responsible AI Dashboard and Mitigations Library for the diagnosis of failures; then the Responsible AI Mitigations Library for mitigating failures; and lastly the Responsible AI Tracker for tracking, comparing, and validating mitigation techniques from which an arrow points back to the identification phase of the cycle  to indicate the repetition of the process as models and data continue to evolve during the ML lifecycle. 
\" class=\"wp-image-918294\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/02\/RAI_blog-2023Feb_hero_1400x788.jpg 1400w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/02\/RAI_blog-2023Feb_hero_1400x788-300x169.jpg 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/02\/RAI_blog-2023Feb_hero_1400x788-1024x576.jpg 1024w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/02\/RAI_blog-2023Feb_hero_1400x788-768x432.jpg 768w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/02\/RAI_blog-2023Feb_hero_1400x788-1066x600.jpg 1066w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/02\/RAI_blog-2023Feb_hero_1400x788-655x368.jpg 655w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/02\/RAI_blog-2023Feb_hero_1400x788-343x193.jpg 343w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/02\/RAI_blog-2023Feb_hero_1400x788-240x135.jpg 240w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/02\/RAI_blog-2023Feb_hero_1400x788-640x360.jpg 640w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/02\/RAI_blog-2023Feb_hero_1400x788-960x540.jpg 960w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/02\/RAI_blog-2023Feb_hero_1400x788-1280x720.jpg 1280w\" sizes=\"auto, (max-width: 1400px) 100vw, 1400px\" \/><\/figure>\n\n\n\n<p>As computing and AI advancements spanning decades are enabling incredible opportunities for people and society, they\u2019re also raising questions about responsible development and deployment. For example, the machine learning models powering AI systems may not perform the same for everyone or every condition, potentially leading to harms related to safety, reliability, and fairness. 
Single metrics often used to represent model capability, such as overall accuracy, do little to demonstrate under which circumstances or for whom failure is more likely; meanwhile, common approaches to addressing failures, like adding more data and compute or increasing model size, don\u2019t get to the root of the problem. Plus, these blanket trial-and-error approaches can be resource intensive and financially costly.<\/p>\n\n\n\n<div class=\"annotations \" data-bi-aN=\"margin-callout\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 annotations__list--left\">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t\t<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/demo-rai-toolbox-an-open-source-framework-for-building-responsible-ai\/\" target=\"_self\" aria-label=\"Responsible AI Toolbox demo\" data-bi-type=\"annotated-link\" data-bi-cN=\"Responsible AI Toolbox demo\" class=\"annotations__list-thumbnail\" >\n\t\t\t\t\t<img loading=\"lazy\" decoding=\"async\" width=\"172\" height=\"96\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2021\/11\/tGgJCrA-MZU-240x135.jpg\" class=\"mb-2\" alt=\"thumbnail image of Besmira\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2021\/11\/tGgJCrA-MZU-240x135.jpg 240w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2021\/11\/tGgJCrA-MZU-300x169.jpg 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2021\/11\/tGgJCrA-MZU-1024x576.jpg 1024w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2021\/11\/tGgJCrA-MZU-768x432.jpg 768w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2021\/11\/tGgJCrA-MZU-1066x600.jpg 1066w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2021\/11\/tGgJCrA-MZU-655x368.jpg 655w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2021\/11\/tGgJCrA-MZU-343x193.jpg 343w, 
https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2021\/11\/tGgJCrA-MZU-640x360.jpg 640w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2021\/11\/tGgJCrA-MZU-960x540.jpg 960w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2021\/11\/tGgJCrA-MZU.jpg 1280w\" sizes=\"auto, (max-width: 172px) 100vw, 172px\" \/>\t\t\t\t<\/a>\n\t\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">VIDEO<\/span>\n\t\t\t<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/demo-rai-toolbox-an-open-source-framework-for-building-responsible-ai\/\" data-bi-cN=\"Responsible AI Toolbox demo\" data-external-link=\"false\" data-bi-aN=\"margin-callout\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>Responsible AI Toolbox demo<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t\t\t<p class=\"annotations__caption text-neutral-400 mt-2\">Learn how this suite of tools can help assess machine learning models through a lens of responsible AI.<\/p>\n\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n\n\n\n<p>Through its <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/github.com\/microsoft\/responsible-ai-toolbox\" target=\"_blank\" rel=\"noopener noreferrer\">Responsible AI Toolbox<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, a collection of tools and functionalities designed to help practitioners maximize the benefits of AI systems while mitigating harms, and other efforts for responsible AI, Microsoft offers an alternative: a principled approach to AI development centered around <em>targeted model improvement<\/em>. Improving models through targeted methods aims to identify solutions tailored to the <em>causes<\/em> of specific failures. 
This is a critical part of a model improvement life cycle that not only includes the identification, diagnosis, and mitigation of failures but also the tracking, comparison, and validation of mitigation options. The approach supports practitioners in better addressing failures without introducing new ones or eroding other aspects of model performance.<\/p>\n\n\n\n<p>\u201cWith targeted model improvement, we\u2019re trying to encourage a more systematic process for improving machine learning in research <em>and <\/em>practice,\u201d says <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/benushi\/about\/\" target=\"_blank\" rel=\"noreferrer noopener\">Besmira Nushi, a Microsoft Principal Researcher<\/a> involved with the development of tools for supporting responsible AI. She is a member of the research team behind the<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"http:\/\/aka.ms\/rai-mitigationstracker-blog\" target=\"_blank\" rel=\"noopener noreferrer\"> toolbox\u2019s newest additions<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>: the <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/github.com\/microsoft\/responsible-ai-toolbox-mitigations\" target=\"_blank\" rel=\"noopener noreferrer\">Responsible AI Mitigations Library<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, which enables practitioners to more easily experiment with different techniques for addressing failures, and the <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/github.com\/microsoft\/responsible-ai-toolbox-tracker\" target=\"_blank\" rel=\"noopener noreferrer\">Responsible AI Tracker<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, which uses visualizations to show the effectiveness of the different techniques for more informed decision-making.<\/p>\n\n\n\n<h2 
id=\"targeted-model-improvement-from-identification-to-validation\">Targeted model improvement: From identification to validation<\/h2>\n\n\n\n<p>The tools in the Responsible AI Toolbox,<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/github.com\/microsoft\/responsible-ai-toolbox\"><span class=\"sr-only\"> (opens in new tab)<\/span><\/a> available in open source and through the <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/learn.microsoft.com\/en-us\/azure\/machine-learning\/how-to-responsible-ai-dashboard\" target=\"_blank\" rel=\"noopener noreferrer\">Azure Machine Learning<span class=\"sr-only\"> (opens in new tab)<\/span><\/a> platform offered by Microsoft, have been designed with each stage of the model improvement life cycle in mind, informing targeted model improvement through error analysis, fairness assessment, data exploration, and interpretability.<\/p>\n\n\n\n<p>For example, the new mitigations library bolsters mitigation by offering a means of managing failures that occur in data preprocessing, such as those caused by a lack of data or lower-quality data for a particular subset. For tracking, comparison, and validation, the new tracker brings model, code, visualizations, and other development components together for easy-to-follow documentation of mitigation efforts. The tracker\u2019s main feature is <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/disaggregated-model-evaluation-and-comparison\/\" target=\"_blank\" rel=\"noreferrer noopener\">disaggregated model evaluation and comparison<\/a>, which breaks down model performance by data subset to present a clearer picture of a mitigation\u2019s effects on the intended subset, as well as other subsets, helping to uncover hidden performance declines before models are deployed and used by individuals and organizations. 
Additionally, the tracker allows practitioners to look at performance for subsets of data across <em>iterations<\/em> of a model to help practitioners determine the most appropriate model for deployment.<\/p>\n\n\n<div class=\"wp-block-msr-image-quote\">\n\t<div class=\"row\">\n\t\t<div class=\"col col-12 col-md-6 mb-2 mb-md-0 wp-block-msr-image-quote__image\">\n\t\t\t<img loading=\"lazy\" decoding=\"async\" width=\"360\" height=\"360\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/02\/Besmira-Nushi_360x360.jpg\" class=\"attachment-large size-large\" alt=\"photo of Besmira Nushi smiling for the camera\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/02\/Besmira-Nushi_360x360.jpg 360w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/02\/Besmira-Nushi_360x360-300x300.jpg 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/02\/Besmira-Nushi_360x360-150x150.jpg 150w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/02\/Besmira-Nushi_360x360-180x180.jpg 180w\" sizes=\"auto, (max-width: 360px) 100vw, 360px\" \/>\t\t<\/div>\n\t\t<div class=\"col col-12 col-md-6 mt-2 mt-md-0  wp-block-msr-image-quote__content\">\n\t\t\t<blockquote class=\"wp-block-quote text-gray-700 m-0 is-style-spectrum\">\n\t\t\t\t<p>\u201cData scientists could build many of the functionalities that we offer with these tools; they could build their own infrastructure,\u201d says Nushi. \u201cBut to do that for every project requires a lot of effort and time. The benefit of these tools is scale. 
Here, they can accelerate their work with tools that apply to multiple scenarios, freeing them up to focus on the work of building more reliable, trustworthy models.\u201d<\/p>\n\t\t\t\t\t\t\t\t\t<cite class=\"text-gray-600\">Besmira Nushi, Microsoft Principal Researcher<\/cite>\n\t\t\t\t\t\t\t<\/blockquote>\n\t\t<\/div>\n\t<\/div>\n<\/div>\n\n\n\n<p>Building tools for responsible AI that are intuitive, effective, and valuable can help practitioners consider potential harms and their mitigation from the beginning when developing a new model. The result can be more confidence that the work they\u2019re doing is supporting AI that is safer, fairer, and more reliable because it was designed that way, says Nushi. The benefits of using these tools can be far-reaching\u2014from contributing to AI systems that more fairly assess candidates for loans by having comparable accuracy across demographic groups to traffic sign detectors in self-driving cars that can perform better across conditions like sun, snow, and rain.<\/p>\n\n\n\n<h2 id=\"converting-research-into-tools-for-responsible-ai\">Converting research into tools for responsible AI<\/h2>\n\n\n\n<p>Creating tools that can have the impact researchers like Nushi envision often begins with a research question and involves converting the resulting work into something people and teams can readily and confidently incorporate in their workflows.<\/p>\n\n\n\n<p>\u201cMaking that jump from a research paper\u2019s code on GitHub to something that is usable involves a lot more process in terms of understanding what is the interaction that the data scientist would need, what would make them more productive,\u201d says Nushi. \u201cIn research, we come up with many ideas. 
Some of them are too fancy, so fancy that they cannot be used in the real world because they cannot be operationalized.\u201d<\/p>\n\n\n\n<p>Multidisciplinary research teams consisting of user experience researchers, designers, and machine learning and front-end engineers have helped ground the process as have the contributions of those who specialize in all things responsible AI. Microsoft Research works closely with the incubation team of <a href=\"https:\/\/www.microsoft.com\/en-us\/ai\/our-approach?activetab=pivot1:primaryr5#coreui-banner-srtl0v6?\" target=\"_blank\" rel=\"noreferrer noopener\">Aether<\/a>, the advisory body for Microsoft leadership on AI ethics and effects, to create tools based on the research. Equally important has been partnership with product teams whose mission is to operationalize AI responsibly, says Nushi. For Microsoft Research, that is often <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/azure.microsoft.com\/en-us\/products\/machine-learning\/#product-overview\" target=\"_blank\" rel=\"noopener noreferrer\">Azure Machine Learning<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, the Microsoft platform for end-to-end ML model development. Through this relationship, Azure Machine Learning can offer what <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/www.linkedin.com\/in\/mehrnoosh-sameki-a2a02245\/\" target=\"_blank\" rel=\"noopener noreferrer\">Microsoft Principal PM Manager Mehrnoosh Sameki<span class=\"sr-only\"> (opens in new tab)<\/span><\/a> refers to as customer \u201csignals,\u201d essentially a reliable stream of practitioner wants and needs directly from practitioners on the ground. And, Azure Machine Learning is just as excited to leverage what Microsoft Research and Aether have to offer: cutting-edge science. 
The relationship has been fruitful.<\/p>\n\n\n\n<p>When the current Azure Machine Learning platform made its debut five years ago, it was clear that tooling for responsible AI was going to be necessary. In addition to aligning with the Microsoft vision for AI development, customers were <em>seeking out<\/em> such resources. They approached the Azure Machine Learning team with requests for explainability and interpretability features, robust model validation methods, and fairness assessment tools, recounts Sameki, who leads the Azure Machine Learning team in charge of tooling for responsible AI. Microsoft Research, Aether, and Azure Machine Learning teamed up to integrate tools for responsible AI into the platform, including <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/interpret.ml\/\" target=\"_blank\" rel=\"noopener noreferrer\">InterpretML<span class=\"sr-only\"> (opens in new tab)<\/span><\/a> for understanding model behavior, <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/erroranalysis.ai\/\" target=\"_blank\" rel=\"noopener noreferrer\">Error Analysis<span class=\"sr-only\"> (opens in new tab)<\/span><\/a> for identifying data subsets for which failures are more likely, and <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/fairlearn.org\/\" target=\"_blank\" rel=\"noopener noreferrer\">Fairlearn<span class=\"sr-only\"> (opens in new tab)<\/span><\/a> for assessing and mitigating fairness-related issues. 
InterpretML and Fairlearn are independent community-driven projects that power several Responsible AI Toolbox functionalities.<\/p>\n\n\n\n<div class=\"annotations \" data-bi-aN=\"margin-callout\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 annotations__list--left\">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t\t<a href=\"https:\/\/blogs.microsoft.com\/ai-for-business\/building-ai-responsibly-from-research-to-practice\/\" target=\"_self\" aria-label=\"Building AI responsibly from research to practice\" data-bi-type=\"annotated-link\" data-bi-cN=\"Building AI responsibly from research to practice\" class=\"annotations__list-thumbnail\" >\n\t\t\t\t\t<img loading=\"lazy\" decoding=\"async\" width=\"172\" height=\"96\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/02\/AI-Innovation-and-RAI-hero-1920x1080-1-240x135.jpg\" class=\"mb-2\" alt=\"several people working together seated around a table\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/02\/AI-Innovation-and-RAI-hero-1920x1080-1-240x135.jpg 240w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/02\/AI-Innovation-and-RAI-hero-1920x1080-1-300x169.jpg 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/02\/AI-Innovation-and-RAI-hero-1920x1080-1-1024x576.jpg 1024w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/02\/AI-Innovation-and-RAI-hero-1920x1080-1-768x432.jpg 768w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/02\/AI-Innovation-and-RAI-hero-1920x1080-1-1536x864.jpg 1536w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/02\/AI-Innovation-and-RAI-hero-1920x1080-1-1066x600.jpg 1066w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/02\/AI-Innovation-and-RAI-hero-1920x1080-1-655x368.jpg 655w, 
https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/02\/AI-Innovation-and-RAI-hero-1920x1080-1-343x193.jpg 343w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/02\/AI-Innovation-and-RAI-hero-1920x1080-1-640x360.jpg 640w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/02\/AI-Innovation-and-RAI-hero-1920x1080-1-960x540.jpg 960w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/02\/AI-Innovation-and-RAI-hero-1920x1080-1-1280x720.jpg 1280w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/02\/AI-Innovation-and-RAI-hero-1920x1080-1.jpg 1920w\" sizes=\"auto, (max-width: 172px) 100vw, 172px\" \/>\t\t\t\t<\/a>\n\t\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">ARTICLE<\/span>\n\t\t\t<a href=\"https:\/\/blogs.microsoft.com\/ai-for-business\/building-ai-responsibly-from-research-to-practice\/\" data-bi-cN=\"Building AI responsibly from research to practice\" data-external-link=\"false\" data-bi-aN=\"margin-callout\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>Building AI responsibly from research to practice<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t\t\t<p class=\"annotations__caption text-neutral-400 mt-2\">Putting AI principles into action requires new kinds of engineering tools. <\/p>\n\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n\n\n\n<p>Before long, Azure Machine Learning approached Microsoft Research with another signal: customers wanted to use the tools together, in one interface. The research team responded with an approach that enabled interoperability, allowing the tools to exchange data and insights, facilitating a seamless ML debugging experience. 
Over the course of two to three months, the teams met weekly to conceptualize and design \u201ca single pane of glass\u201d from which practitioners could use the tools collectively. As Azure Machine Learning developed the project, Microsoft Research stayed involved, from providing design expertise to contributing to how the story and capabilities of what had become the <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/github.com\/microsoft\/responsible-ai-toolbox#introducing-responsible-ai-dashboard\" target=\"_blank\" rel=\"noopener noreferrer\">Responsible AI dashboard<span class=\"sr-only\"> (opens in new tab)<\/span><\/a> would be communicated to customers.<\/p>\n\n\n\n<p>After the release, the teams dived into the next open challenge: enabling practitioners to better mitigate failures. Enter the Responsible AI Mitigations Library and the Responsible AI Tracker, which were developed by Microsoft Research in collaboration with Aether. Microsoft Research was well-equipped with the resources and expertise to figure out the most effective visualizations for disaggregated model comparison (there was very little previous work available on it) and to navigate the proper abstractions for the complexities of applying different mitigations to different subsets of data with a flexible, easy-to-use interface. 
Throughout the process, the Azure team provided insight into how the new tools fit into the existing infrastructure.<\/p>\n\n\n\n<p>With the Azure team bringing practitioner needs and the platform to the table and research bringing the latest in model evaluation, responsible testing, and the like, it is the perfect fit, says Sameki.<\/p>\n\n\n\n<h2 id=\"an-open-source-approach-to-tooling-for-responsible-ai\">An open-source approach to tooling for responsible AI<\/h2>\n\n\n\n<p>While making these tools available through Azure Machine Learning supports customers in bringing their products and services to market responsibly, making these tools <em>open source<\/em> is important to cultivating an even larger landscape of responsibly developed AI. When release ready, these tools for responsible AI are made open source and then integrated into the Azure Machine Learning platform. The reasons for going with an open-source-first approach are numerous, say Nushi and Sameki:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>freely available tools for responsible AI are an educational resource for learning and teaching the practice of responsible AI;<\/li>\n\n\n\n<li>more contributors, both internal to Microsoft and external, add quality, longevity, and excitement to the work and topic; and<\/li>\n\n\n\n<li>the ability to integrate them into any platform or infrastructure encourages more widespread use.<\/li>\n<\/ul>\n\n\n\n<p>The decision also represents one of the <a href=\"https:\/\/www.microsoft.com\/en-us\/ai\/our-approach?activetab=pivot1:primaryr5\" target=\"_blank\" rel=\"noreferrer noopener\">Microsoft AI principles<\/a> in action\u2014transparency.<\/p>\n\n\n<div class=\"wp-block-msr-image-quote\">\n\t<div class=\"row\">\n\t\t<div class=\"col col-12 col-md-6 mb-2 mb-md-0 wp-block-msr-image-quote__image\">\n\t\t\t<img loading=\"lazy\" decoding=\"async\" width=\"360\" height=\"360\" 
src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/02\/Mehrnoosh-Sameki_360x360.jpg\" class=\"attachment-large size-large\" alt=\"photo of Mehrnoosh Sameki smiling for the camera\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/02\/Mehrnoosh-Sameki_360x360.jpg 360w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/02\/Mehrnoosh-Sameki_360x360-300x300.jpg 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/02\/Mehrnoosh-Sameki_360x360-150x150.jpg 150w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/02\/Mehrnoosh-Sameki_360x360-180x180.jpg 180w\" sizes=\"auto, (max-width: 360px) 100vw, 360px\" \/>\t\t<\/div>\n\t\t<div class=\"col col-12 col-md-6 mt-2 mt-md-0  wp-block-msr-image-quote__content\">\n\t\t\t<blockquote class=\"wp-block-quote text-gray-700 m-0 is-style-spectrum is-style-spectrum--green-orange\">\n\t\t\t\t<p>\u201cIn the space of responsible AI, being as open as possible is the way to go, and there are multiple reasons for that,\u201d says Sameki. \u201cThe main reason is for building trust with the users and with the consumers of these tools. In my opinion, no one would trust a machine learning evaluation technique or an unfairness mitigation algorithm that is unclear and closed source. Also, this field is very new. Innovating in the open nurtures better collaborations in the field.\u201d<\/p>\n\t\t\t\t\t\t\t\t\t<cite class=\"text-gray-600\">Mehrnoosh Sameki, Microsoft Principal PM Manager<\/cite>\n\t\t\t\t\t\t\t<\/blockquote>\n\t\t<\/div>\n\t<\/div>\n<\/div>\n\n\n\n<h2 id=\"looking-ahead\">Looking ahead<\/h2>\n\n\n\n<p>AI capabilities are only advancing. The larger research community, practitioners, the tech industry, government, and other institutions are working in different ways to steer these advancements in a direction in which AI is contributing value and its potential harms are minimized. 
Practices for responsible AI will need to continue to evolve with AI advancements to support these efforts.<\/p>\n\n\n\n<p>For Microsoft researchers like Nushi and product managers like Sameki, that means fostering cross-company, multidisciplinary collaborations in their continued development of tools that encourage targeted model improvement guided by the step-by-step process of identification, diagnosis, mitigation, and comparison and validation\u2014wherever those advances lead.<\/p>\n\n\n\n<p>\u201cAs we get better in this, I hope we move toward a more systematic process to understand what data is actually useful, even for the large models; what is harmful that really shouldn\u2019t be included in those; and what is the data that has a lot of ethical issues if you include it,\u201d says Nushi. \u201cBuilding AI responsibly is crosscutting, requiring perspectives and contributions from internal teams and external practitioners. Our growing collection of tools shows that effective collaboration has the potential to impact\u2014for the better\u2014how we create the new generation of AI systems.\u201d<\/p>\n","protected":false},"excerpt":{"rendered":"<p>As computing and AI advancements spanning decades are enabling incredible opportunities for people and society, they\u2019re also raising questions about responsible development and deployment. For example, the machine learning models powering AI systems may not perform the same for everyone or every condition, potentially leading to harms related to safety, reliability, and fairness. 
Single metrics [&hellip;]<\/p>\n","protected":false},"author":42183,"featured_media":918294,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"msr-url-field":"","msr-podcast-episode":"","msrModifiedDate":"","msrModifiedDateEnabled":false,"ep_exclude_from_search":false,"_classifai_error":"","msr-author-ordering":[],"msr_hide_image_in_river":0,"footnotes":""},"categories":[1],"tags":[],"research-area":[13556],"msr-region":[],"msr-event-type":[],"msr-locale":[268875],"msr-post-option":[],"msr-impact-theme":[],"msr-promo-type":[],"msr-podcast-series":[],"class_list":["post-917625","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-research-blog","msr-research-area-artificial-intelligence","msr-locale-en_us"],"msr_event_details":{"start":"","end":"","location":""},"podcast_url":"","podcast_episode":"","msr_research_lab":[],"msr_impact_theme":[],"related-publications":[],"related-downloads":[],"related-videos":[],"related-academic-programs":[],"related-groups":[],"related-projects":[],"related-events":[],"related-researchers":[],"msr_type":"Post","featured_image_thumbnail":"<img width=\"960\" height=\"540\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/02\/RAI_blog-2023Feb_hero_1400x788-960x540.jpg\" class=\"img-object-cover\" alt=\"Flowchart showing how responsible AI tools are used together for targeted debugging of machine learning models: the Responsible AI Dashboard for the identification of failures; followed by the Responsible AI Dashboard and Mitigations Library for the diagnosis of failures; then the Responsible AI Mitigations Library for mitigating failures; and lastly the Responsible AI Tracker for tracking, comparing, and validating mitigation techniques from which an arrow points back to the identification phase of the cycle to indicate the repetition of the process as models and data continue to evolve during the ML 
lifecycle.\" decoding=\"async\" loading=\"lazy\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/02\/RAI_blog-2023Feb_hero_1400x788-960x540.jpg 960w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/02\/RAI_blog-2023Feb_hero_1400x788-300x169.jpg 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/02\/RAI_blog-2023Feb_hero_1400x788-1024x576.jpg 1024w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/02\/RAI_blog-2023Feb_hero_1400x788-768x432.jpg 768w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/02\/RAI_blog-2023Feb_hero_1400x788-1066x600.jpg 1066w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/02\/RAI_blog-2023Feb_hero_1400x788-655x368.jpg 655w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/02\/RAI_blog-2023Feb_hero_1400x788-343x193.jpg 343w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/02\/RAI_blog-2023Feb_hero_1400x788-240x135.jpg 240w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/02\/RAI_blog-2023Feb_hero_1400x788-640x360.jpg 640w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/02\/RAI_blog-2023Feb_hero_1400x788-1280x720.jpg 1280w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/02\/RAI_blog-2023Feb_hero_1400x788.jpg 1400w\" sizes=\"auto, (max-width: 960px) 100vw, 960px\" \/>","byline":"","formattedDate":"February 27, 2023","formattedExcerpt":"As computing and AI advancements spanning decades are enabling incredible opportunities for people and society, they\u2019re also raising questions about responsible development and deployment. 
For example, the machine learning models powering AI systems may not perform the same for everyone or every condition, potentially leading&hellip;","locale":{"slug":"en_us","name":"English","native":"","english":"English"},"_links":{"self":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/posts\/917625","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/users\/42183"}],"replies":[{"embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/comments?post=917625"}],"version-history":[{"count":24,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/posts\/917625\/revisions"}],"predecessor-version":[{"id":920541,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/posts\/917625\/revisions\/920541"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/media\/918294"}],"wp:attachment":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/media?parent=917625"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/categories?post=917625"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/tags?post=917625"},{"taxonomy":"msr-research-area","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/research-area?post=917625"},{"taxonomy":"msr-region","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-region?post=917625"},{"taxonomy":"msr-event-type","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event
-type?post=917625"},{"taxonomy":"msr-locale","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-locale?post=917625"},{"taxonomy":"msr-post-option","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-post-option?post=917625"},{"taxonomy":"msr-impact-theme","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-impact-theme?post=917625"},{"taxonomy":"msr-promo-type","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-promo-type?post=917625"},{"taxonomy":"msr-podcast-series","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-podcast-series?post=917625"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}