
Ideating Responsible AI Mitigations

News: Our slides from the FAccT Tutorial on Responsible AI Toolbox are available here.

ML algorithms and systems are often prone to severe bias and highly consequential failure modes that are not well understood. This project advances the methods, tools, and infrastructure for debugging and mitigating these failure modes so that practitioners can address them before deploying ML systems in the real world. The project is part of the Responsible AI Toolbox, a larger collaborative effort between Microsoft Research, AETHER, and Azure Machine Learning for integrating and building development tools for responsible AI.

The goal of this project is twofold:

  1. Building tools that enable ML engineers to identify, diagnose, and mitigate problems quickly and systematically.
  2. Conducting research that supports these processes by better understanding and improving algorithmic robustness and failure explainability across different model architectures and data types.
[Figure: Flowchart of the targeted debugging cycle. The Responsible AI Dashboard identifies failures; the Dashboard and Mitigations Library diagnose them; the Mitigations Library mitigates them; and the Responsible AI Tracker tracks, compares, and validates mitigation techniques, feeding back into the identification phase as models and data continue to evolve during the ML lifecycle.]

Recent releases:

  • Responsible AI Tracker: a JupyterLab extension for managing, tracking, and comparing responsible AI mitigation experiments. Its goal is to accelerate improvement iterations by enabling ML practitioners to experiment with and compare mitigation results quickly.
  • Responsible AI Mitigations library: the ML backend for the targeted mitigation steps used in the Responsible AI Tracker or in any other RAI tool. Its functionality guides model improvement by targeting mitigations at errors that affect particular data cohorts, with the goal of reducing performance discrepancies across cohorts.
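The idea of surfacing performance discrepancies across data cohorts can be sketched with plain scikit-learn. This is a conceptual illustration only: it does not use the Responsible AI Mitigations API, and the dataset, cohort column, and performance gap are all synthetic assumptions made for the example.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Hypothetical tabular dataset with a categorical "region" cohort column.
rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "x1": rng.normal(size=n),
    "x2": rng.normal(size=n),
    "region": rng.choice(["north", "south"], size=n),
})
# Inject more label noise into one cohort to create a performance gap.
noise_scale = np.where(df["region"] == "south", 1.5, 0.2)
df["y"] = (df["x1"] + rng.normal(scale=noise_scale) > 0).astype(int)

# One-hot encode the cohort column and fit a baseline model.
X = pd.get_dummies(df[["x1", "x2", "region"]])
X_tr, X_te, y_tr, y_te, c_tr, c_te = train_test_split(
    X, df["y"], df["region"], test_size=0.3, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
pred = model.predict(X_te)

# Per-cohort accuracy: a large gap flags a cohort as a candidate
# for a targeted mitigation rather than a global model change.
for cohort in ["north", "south"]:
    mask = (c_te == cohort).to_numpy()
    print(cohort, round(accuracy_score(y_te[mask], pred[mask]), 3))
```

In this sketch the aggregate accuracy hides that one cohort performs much worse; disaggregating by cohort is what makes a mitigation "targeted" in the sense the library describes.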

Past releases: In collaboration with Azure Machine Learning, AETHER, and the Mixed Reality Group, we have built the following Responsible AI tools: