Guidelines for responsible AI

These guidelines are designed to help you anticipate and address potential issues throughout the software development life cycle, putting responsible AI into practice.

Tools for responsible AI

Research, open source projects and Azure Machine Learning are all designed to help developers and data scientists understand, protect and control AI systems.

InterpretML is an open source Python package for training interpretable machine learning models and explaining black-box systems.
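To illustrate what "interpretable" means here, the sketch below shows the idea behind additive glassbox models of the kind InterpretML trains: each feature contributes an independent term, so every prediction decomposes exactly into per-feature contributions. This is plain Python for illustration only; the weights and feature names are made up, and this is not InterpretML's actual API.

```python
# Concept sketch of an additive "glassbox" model: the score is a sum of
# per-feature terms, so each prediction can be decomposed exactly.
# Weights, bias and features below are hypothetical, not a trained model.

def additive_predict(x, weights, bias):
    """Return the overall score and the per-feature contribution breakdown."""
    contributions = {name: weights[name] * x[name] for name in weights}
    score = bias + sum(contributions.values())
    return score, contributions

weights = {"age": 0.04, "income": 0.00002, "tenure": 0.1}  # hypothetical
bias = -2.0
x = {"age": 35, "income": 50000, "tenure": 4}

score, contrib = additive_predict(x, weights, bias)
print(round(score, 2))  # 0.8 -- and contrib shows exactly why
```

Because the model is additive, the explanation is the model itself; nothing has to be approximated after the fact, which is the property that distinguishes glassbox models from post-hoc explanations of black-box systems.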


The Fairlearn open source toolkit empowers developers of AI systems to assess their systems' fairness and mitigate negative impacts on groups of people, such as those defined by race, gender, age or disability status.
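The core of such an assessment is computing a metric separately for each sensitive-feature group and comparing the results. The sketch below does this in plain Python to show the concept; the labels, predictions and group names are fabricated, and this is an illustration of the idea rather than Fairlearn's own API.

```python
# Concept sketch of a per-group fairness assessment: compute accuracy
# separately for each sensitive-feature group, then measure the gap.
# All data below is made up for illustration.
from collections import defaultdict

def accuracy_by_group(y_true, y_pred, sensitive):
    """Accuracy computed separately for each sensitive-feature group."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for t, p, g in zip(y_true, y_pred, sensitive):
        totals[g] += 1
        hits[g] += int(t == p)
    return {g: hits[g] / totals[g] for g in totals}

y_true    = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred    = [1, 0, 0, 1, 0, 1, 1, 1]
sensitive = ["a", "a", "a", "a", "b", "b", "b", "b"]

by_group = accuracy_by_group(y_true, y_pred, sensitive)
gap = max(by_group.values()) - min(by_group.values())
print(by_group)  # {'a': 0.75, 'b': 0.5}
print(gap)       # 0.25 -- the disparity a mitigation step would target
```

A nonzero gap like this is the kind of disparity the toolkit is meant to surface, so that developers can then apply a mitigation strategy and re-measure.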

Conversations on responsible AI

Explore podcasts, webinars and other resources from experts across Microsoft.