The need

Responsible AI is the practice of upholding AI principles when designing, building, and using AI systems. Operationalizing responsible AI means putting those principles into practice with concrete tools that help practitioners identify and mitigate responsible AI concerns.

The idea

To support practitioners, we developed a dashboard that brings together a collection of responsible AI capabilities that can be used throughout the machine learning lifecycle, enabling them to optimize their models for fairness, explainability, and other desired characteristics.

The solution

The Responsible AI dashboard is a single pane of glass for operationalizing responsible AI, available as open source and in Azure Machine Learning. It enables practitioners to assess and debug machine learning models and improve product quality. It also gives business decision makers insight into these models, so they can make more informed decisions.

Technical details for the Responsible AI dashboard

The Responsible AI dashboard brings together a variety of responsible AI capabilities that communicate with one another, facilitating deep-dive investigations without having to manually save and reload results across different dashboards.
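
As a rough sketch of how the pieces fit together, the open-source responsibleai and raiwidgets Python packages expose an RAIInsights object whose managers all feed the same dashboard. The file paths, label column, and treatment feature below are hypothetical placeholders, not part of the library.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from responsibleai import RAIInsights
from raiwidgets import ResponsibleAIDashboard

# Hypothetical train/test DataFrames that include the label column "income".
train = pd.read_csv("train.csv")
test = pd.read_csv("test.csv")
target = "income"

model = RandomForestClassifier().fit(train.drop(columns=[target]), train[target])

# One container holds the model and data; each manager contributes a component.
rai = RAIInsights(model, train, test, target, task_type="classification")
rai.explainer.add()                                     # model interpretability
rai.error_analysis.add()                                # error cohorts
rai.counterfactual.add(total_CFs=10, desired_class="opposite")
rai.causal.add(treatment_features=["hours_per_week"])   # hypothetical feature

rai.compute()                 # run all analyses over the shared dataset once
ResponsibleAIDashboard(rai)   # one dashboard; the components share state
```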

Model Statistics can be used to understand how a model performs across different metrics and subgroups, and to inform what the first debugging steps should be.
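
The dashboard computes these disaggregated statistics for you; as a minimal hand-rolled sketch of the same idea, assuming binary labels and pandas Series aligned on a common index:

```python
import pandas as pd
from sklearn.metrics import accuracy_score, recall_score

def metrics_by_subgroup(features, y_true, y_pred, group_col):
    """Disaggregate accuracy and recall by a cohort column (hypothetical helper)."""
    rows = []
    for group, idx in features.groupby(group_col).groups.items():
        rows.append({
            group_col: group,
            "n": len(idx),
            "accuracy": accuracy_score(y_true.loc[idx], y_pred.loc[idx]),
            "recall": recall_score(y_true.loc[idx], y_pred.loc[idx]),
        })
    return pd.DataFrame(rows)

# A large gap between subgroup rows suggests where debugging should start.
```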

Data Explorer helps visualize datasets based on predicted and actual outcomes, error groups, and specific features. This helps to identify issues of over- and underrepresentation and to see how data is generally clustered in the dataset.
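
A couple of pandas one-liners approximate what Data Explorer shows visually; `age_bucket` is a hypothetical feature and `model` is the classifier from the first sketch:

```python
import pandas as pd

df = pd.read_csv("test.csv")                 # hypothetical path, includes "income"
df["y_pred"] = model.predict(df.drop(columns=["income"]))

# Representation: what share of the data falls into each cohort?
print(df["age_bucket"].value_counts(normalize=True))

# Outcome mix per cohort, for predicted and actual labels respectively.
print(pd.crosstab(df["age_bucket"], df["y_pred"], normalize="index"))
print(pd.crosstab(df["age_bucket"], df["income"], normalize="index"))
```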

Error Analysis identifies cohorts of data with a higher error rate than the overall benchmark. This tool helps practitioners gain a deeper understanding of how errors are distributed across a dataset, aggregating performance metrics per cohort to surface where the model fails.
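
This style of analysis can be approximated by computing per-cohort error rates and fitting a shallow surrogate tree on the error indicator; the sketch below, reusing the hypothetical `df` from the previous snippet and assuming numeric feature columns, is a stand-in for the tool, not its implementation:

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Overall benchmark vs. per-cohort error rates.
df["error"] = (df["y_pred"] != df["income"]).astype(int)
benchmark = df["error"].mean()
rates = df.groupby("age_bucket")["error"].agg(["mean", "size"])
print(rates[rates["mean"] > benchmark])      # cohorts that fail more than average

# A shallow tree over the error indicator mimics the tool's cohort tree view.
feature_cols = [c for c in df.columns if c not in ("income", "y_pred", "error")]
surrogate = DecisionTreeClassifier(max_depth=3).fit(df[feature_cols], df["error"])
print(export_text(surrogate, feature_names=feature_cols))
```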

Model Interpretability, powered by InterpretML, helps users understand their model's global explanations (which features drive the model's overall behavior) and local explanations (the reasons behind individual predictions). Ultimately, this tool helps practitioners learn more about their model's predictions, uncover potential sources of unfairness, and determine how trustworthy an AI model is.
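
With InterpretML itself, a glassbox model yields both views directly; a minimal sketch, assuming hypothetical X_train/y_train/X_test/y_test splits:

```python
from interpret.glassbox import ExplainableBoostingClassifier
from interpret import show

# Hypothetical splits; the dashboard can also wrap blackbox models via
# surrogate (mimic) explainers, but a glassbox model is the simplest case.
ebm = ExplainableBoostingClassifier()
ebm.fit(X_train, y_train)

show(ebm.explain_global())                        # overall feature importances
show(ebm.explain_local(X_test[:5], y_test[:5]))   # per-prediction reasons
```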

Counterfactual Analysis, powered by DiCE, shows feature-perturbed versions of the same datapoint that would have received a different prediction outcome. This enables practitioners to debug specific input instances and determine the actions end users would need to take to obtain a desired outcome.
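
A minimal DiCE sketch, assuming a trained sklearn classifier `model` and a DataFrame `df` of features plus an "income" label column (all names hypothetical):

```python
import dice_ml

data = dice_ml.Data(dataframe=df,
                    continuous_features=["age", "hours_per_week"],  # hypothetical
                    outcome_name="income")
wrapped = dice_ml.Model(model=model, backend="sklearn")
explainer = dice_ml.Dice(data, wrapped)

# Perturb one datapoint until the predicted class flips, keeping changes small.
cfs = explainer.generate_counterfactuals(df.drop(columns=["income"]).head(1),
                                         total_CFs=3, desired_class="opposite")
cfs.visualize_as_dataframe(show_only_changes=True)
```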

Causal Inference, powered by EconML, focuses on answering "what if"-style questions to inform data-driven decision making. This tool can help identify which features have the most direct effect on a specific outcome and simulate feature responses to various interventions, allowing decision makers to apply new policies and effect real-world change.
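
A minimal EconML sketch with synthetic data, where the true effect of a binary treatment T on an outcome Y depends on a feature X (the variable names and data-generating process are hypothetical):

```python
import numpy as np
from econml.dml import LinearDML

rng = np.random.default_rng(0)
n = 2000
W = rng.normal(size=(n, 3))                   # confounders
X = rng.normal(size=(n, 1))                   # features that modify the effect
T = rng.binomial(1, 0.5, size=n)              # binary treatment (an intervention)
Y = 2.0 * T * (X[:, 0] > 0) + W @ np.array([0.5, -0.2, 0.1]) + rng.normal(size=n)

est = LinearDML(discrete_treatment=True)
est.fit(Y, T, X=X, W=W)

# Heterogeneous "what if" answers: the estimated effect of treating each unit,
# with confidence bounds on every estimate.
effects = est.effect(X)
lb, ub = est.effect_interval(X, alpha=0.05)
```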
