
AI Lab project: Responsible AI dashboard
Responsible AI Dashboard
How can we empower ML practitioners and business stakeholders to operationalize responsible AI in practice? We created a dashboard that brings together responsible AI tools for model debugging and decision-making.
Try the demo
The need
Responsible AI is the practice of upholding AI principles when designing, building, and using AI systems. To operationalize responsible AI, ML practitioners and business stakeholders need practical tools that put these principles into action and help them identify and mitigate responsible AI concerns.
The idea
To support these practitioners, we developed a dashboard that brings together a collection of responsible AI capabilities for use throughout the machine learning lifecycle, enabling practitioners to assess and improve their models for fairness, explainability, and other desired characteristics.
The solution
The Responsible AI dashboard is a single pane of glass for operationalizing responsible AI, available both as open source and in Azure Machine Learning. It enables practitioners to assess and debug machine learning models and improve product quality. It also gives business decision makers insight into these models, so they can make more informed decisions.
Deploying responsible machine learning models
Learn how the Responsible AI dashboard, which integrates several open-source toolkits, allows practitioners to move through the different stages of model debugging and decision-making in a single place.

Enabling practitioners to create responsible AI systems
The Responsible AI dashboard was developed as part of the Responsible AI Toolbox, a suite of tools that provides a customizable, responsible AI experience through unique and complementary functionalities. The capabilities are available as open source on GitHub and can also be used through Azure Machine Learning.
Technical details for Responsible AI Dashboard
The Responsible AI dashboard brings together a variety of responsible AI capabilities that communicate with each other, so practitioners can run deep-dive investigations without manually saving and reloading results across different dashboards.
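For practitioners who want to try this programmatically, the sketch below assembles the components described next using the open-source responsibleai and raiwidgets Python packages. It is a minimal sketch, not a definitive recipe: the toy scikit-learn model, dataset, and treatment feature are placeholders, and exact arguments may vary between package versions.

```python
# Minimal sketch: assembling the Responsible AI dashboard with the
# open-source packages (pip install raiwidgets responsibleai).
# The model, dataset, and treatment feature below are placeholders.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

from responsibleai import RAIInsights
from raiwidgets import ResponsibleAIDashboard

# Toy classification task; any fitted scikit-learn-style model works.
df = load_breast_cancer(as_frame=True).frame  # features plus a "target" column
train, test = train_test_split(df, test_size=0.2, random_state=0)

model = RandomForestClassifier(random_state=0)
model.fit(train.drop(columns=["target"]), train["target"])

# Collect all insights in one object so the components can share results.
rai_insights = RAIInsights(model, train, test,
                           target_column="target", task_type="classification")
rai_insights.explainer.add()        # model interpretability
rai_insights.error_analysis.add()   # error analysis
rai_insights.counterfactual.add(total_CFs=5, desired_class="opposite")
rai_insights.causal.add(treatment_features=["mean radius"])
rai_insights.compute()

# One dashboard hosting model statistics, data explorer, error analysis,
# interpretability, counterfactuals, and causal analysis together.
ResponsibleAIDashboard(rai_insights)
```

In a notebook, the final line typically renders the combined dashboard inline; the individual components it hosts are described below.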
Model Statistics can be used to understand how a model performs across different metrics and subgroups, informing which debugging steps to take first.
Data Explorer helps visualize datasets based on predicted and actual outcomes, error groups, and specific features. This helps to identify issues of over- and underrepresentation and to see how the data is clustered in the dataset.
Error Analysis identifies cohorts of data with a higher error rate than the overall benchmark. This tool helps practitioners gain a deeper understanding of how errors are distributed across a dataset and aggregates performance metrics to discover model errors.
Model Interpretability, powered by InterpretML, helps users understand their model's global explanations (what drives its overall behavior) and local explanations (the reasons behind individual predictions). Ultimately, this tool helps practitioners learn more about their model's predictions, uncover potential sources of unfairness, and determine how trustworthy an AI model is.
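As an illustration of the global/local distinction, here is a minimal standalone sketch using the InterpretML library with one of its glassbox models; the dataset and model choice are illustrative and not the dashboard's internal mechanism.

```python
# Minimal standalone InterpretML sketch of global vs. local explanations
# (illustrative glassbox model and dataset, not the dashboard internals).
from interpret import show
from interpret.glassbox import ExplainableBoostingClassifier
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

ebm = ExplainableBoostingClassifier().fit(X_train, y_train)

show(ebm.explain_global())                       # what drives overall behavior
show(ebm.explain_local(X_test[:5], y_test[:5]))  # reasons for individual predictions
```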
Counterfactual Analysis, powered by DiCE, shows feature-perturbed versions of the same datapoint that would have received a different prediction outcome. This enables practitioners to debug specific input instances and determine the actions end users need to perform to produce a specific outcome.
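For reference, counterfactuals of this kind can also be generated standalone with the DiCE library; in the sketch below the dataset, model, and parameters are placeholders, and the dashboard itself configures DiCE for you.

```python
# Minimal DiCE sketch: counterfactuals for one datapoint
# (placeholder dataset, model, and parameters).
import dice_ml
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

df = load_breast_cancer(as_frame=True).frame
model = RandomForestClassifier(random_state=0).fit(
    df.drop(columns=["target"]), df["target"])

data = dice_ml.Data(dataframe=df,
                    continuous_features=list(df.columns.drop("target")),
                    outcome_name="target")
wrapped = dice_ml.Model(model=model, backend="sklearn")
explainer = dice_ml.Dice(data, wrapped, method="random")

# How would this datapoint have to change to flip the predicted class?
query = df.drop(columns=["target"]).iloc[[0]]
cfs = explainer.generate_counterfactuals(query, total_CFs=3,
                                         desired_class="opposite")
cfs.visualize_as_dataframe(show_only_changes=True)
```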
Causal Inference, powered by EconML, focuses on answering what-if-style questions to apply data-driven decision-making. This tool can assist with identifying which features have the most direct effect on a specific outcome and with simulating feature responses to various interventions. It allows decision makers to apply new policies and effect real-world change.
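Underneath, effect estimates of this kind come from EconML. The sketch below estimates a constant treatment effect on synthetic data where the true effect is 2.0; the variable names and data are illustrative, and the dashboard configures EconML automatically.

```python
# Minimal EconML sketch: estimating a treatment's effect on an outcome
# from observational data (synthetic data; true effect is 2.0).
import numpy as np
from econml.dml import LinearDML
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 2000
W = rng.normal(size=(n, 5))                  # observed confounders / controls
T = W[:, 0] + rng.normal(size=n)             # treatment (e.g., a feature we can change)
Y = 2.0 * T + W[:, 1] + rng.normal(size=n)   # outcome influenced by the treatment

est = LinearDML(model_y=GradientBoostingRegressor(),
                model_t=GradientBoostingRegressor(),
                random_state=0)
est.fit(Y, T, X=None, W=W)

# Estimated effect of a one-unit change in the treatment (~2.0 here).
print(est.const_marginal_effect())
```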
Resources:
- Explore in-depth documentation for the Responsible AI dashboard
- Check out the Responsible AI Toolbox GitHub repository
- Read more about operationalizing responsible AI in this Tech Community blog
- Watch the AI Show video about operationalizing responsible AI
Learn more about Responsible AI:
- Read about Microsoft responsible AI principles
- Learn how Microsoft operationalizes responsible AI across teams
- Find more tools and resources for responsible AI
