Guidelines for responsible AI

Put responsible AI into practice with these guidelines designed to help you anticipate and address potential issues throughout the software development lifecycle.

Tools for responsible AI

Research, open-source projects, and Azure Machine Learning are all designed to help developers and data scientists understand, protect, and control AI systems.


Error Analysis

Error Analysis is a toolkit that identifies cohorts with higher error rates and diagnoses the root causes behind them, helping you choose better-informed mitigation strategies.
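The core quantity such a pass surfaces is the error rate broken down by cohort. A minimal sketch of the idea in plain NumPy (not the toolkit's API), using hypothetical toy labels, predictions, and a cohort feature:

```python
import numpy as np

# Hypothetical toy data: binary labels, model predictions, and a cohort
# assignment (e.g. a binned input attribute) for each example.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 0, 1, 0, 0, 1, 0])
cohort = np.array(["A", "A", "B", "B", "B", "B", "A", "A"])

def cohort_error_rates(y_true, y_pred, cohort):
    """Error rate per cohort: the disparity an error-analysis pass exposes."""
    rates = {}
    for c in np.unique(cohort):
        mask = cohort == c
        rates[c] = float(np.mean(y_true[mask] != y_pred[mask]))
    return rates

rates = cohort_error_rates(y_true, y_pred, cohort)
# Cohort "B" errs twice as often as "A" here -- a signal to investigate
# what its examples have in common before choosing a mitigation.
```

In the toolkit, the same breakdown is computed automatically over many candidate cohorts and visualized interactively; this sketch only shows the underlying metric.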


InterpretML

InterpretML is a package for training interpretable machine learning models and explaining black-box systems.
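The interpretable ("glassbox") models InterpretML favors are additive: each feature's contribution to a prediction is directly readable, with no post-hoc approximation. A minimal sketch of that property with a hypothetical linear scorer in NumPy (not InterpretML's API):

```python
import numpy as np

# Hypothetical glassbox model: a linear scorer whose weights and feature
# names are invented for illustration.
weights = np.array([0.8, -0.5, 0.2])
feature_names = ["income", "debt_ratio", "tenure"]

def explain(x):
    """Return each feature's additive contribution and the total score.

    The per-feature contributions ARE the explanation -- the defining
    property of glassbox models, in contrast to black-box explainers
    that must approximate the model from the outside.
    """
    contributions = weights * x
    return dict(zip(feature_names, contributions.tolist())), float(contributions.sum())

contribs, score = explain(np.array([1.0, 2.0, 3.0]))
# Here "debt_ratio" pulls the score down by 1.0 while the other two
# features push it up -- readable directly from the model's structure.
```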


Fairlearn

Fairlearn empowers developers of AI systems to assess their systems' fairness and mitigate negative impacts on groups of people, such as those defined by race, gender, age, or disability status.
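One common fairness assessment compares a metric across sensitive groups, for example the gap in selection rates (a demographic-parity check). A minimal NumPy sketch of the idea (not Fairlearn's API), on hypothetical predictions and group labels:

```python
import numpy as np

# Hypothetical predictions and a sensitive-group label per example.
y_pred = np.array([1, 1, 1, 0, 1, 0, 0, 0])
group = np.array(["f", "m", "f", "m", "f", "m", "f", "m"])

def selection_rates(y_pred, group):
    """Fraction of positive predictions within each group."""
    return {g: float(np.mean(y_pred[group == g])) for g in np.unique(group)}

def parity_difference(y_pred, group):
    """Gap between the highest and lowest group selection rates;
    zero means all groups are selected at the same rate."""
    r = selection_rates(y_pred, group).values()
    return max(r) - min(r)

gap = parity_difference(y_pred, group)
# A large gap flags a disparity to assess and, if unjustified, mitigate.
```

Fairlearn computes this kind of disaggregated metric (and many others) and also provides mitigation algorithms; this sketch only shows the assessment step.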

Research supporting responsible AI

Explore new contributions to the advancement of responsible AI practices, techniques, and technologies.

Conversations on responsible AI

Explore podcasts, webinars, and other resources from experts across Microsoft.