Guidelines for responsible AI

Put responsible AI into practice with these guidelines designed to help you anticipate and address potential issues throughout the software development lifecycle.

Tools for responsible AI

Research, open-source projects, and Azure Machine Learning are all designed to help developers and data scientists understand, protect, and control AI systems.


InterpretML

InterpretML is a package for training interpretable machine learning models and explaining black-box systems.
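A minimal sketch of the interpretable-model workflow is shown below, using InterpretML's Explainable Boosting Machine. The scikit-learn breast cancer dataset and the variable names are illustrative choices, not part of the original text.

```python
# Sketch: train an interpretable (glassbox) model with InterpretML and
# inspect a global explanation. Dataset choice is an illustrative assumption.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from interpret.glassbox import ExplainableBoostingClassifier
from interpret import show

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# An Explainable Boosting Machine is a glassbox model whose per-feature
# contributions can be inspected directly after training.
ebm = ExplainableBoostingClassifier()
ebm.fit(X_train, y_train)

# Global explanation: how each feature influences predictions overall.
# show() opens InterpretML's interactive visualization.
show(ebm.explain_global())
```

The same package also offers black-box explainers (for example LIME- and SHAP-style explanations) for models that are not interpretable by design.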

Fairlearn

Fairlearn empowers developers of AI systems to assess their systems' fairness and mitigate negative impacts on groups of people, such as those defined by race, gender, age, or disability status.
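Below is a minimal sketch of a Fairlearn fairness assessment. The toy data, the trained model, and the "under_40"/"over_40" sensitive-feature labels are illustrative assumptions; the assessment itself uses Fairlearn's MetricFrame to compare metrics across groups.

```python
# Sketch: compare model performance across sensitive groups with Fairlearn.
# Data, model, and group labels are illustrative assumptions.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from fairlearn.metrics import MetricFrame, selection_rate

# Toy dataset with a binary sensitive feature (an age-group flag).
X = pd.DataFrame({"feature_1": [0.2, 0.4, 0.1, 0.9, 0.7, 0.3, 0.8, 0.5],
                  "feature_2": [1, 0, 1, 0, 1, 1, 0, 0]})
y_true = [0, 1, 0, 1, 1, 0, 1, 0]
sensitive = ["under_40", "over_40", "under_40", "over_40",
             "under_40", "over_40", "under_40", "over_40"]

model = LogisticRegression().fit(X, y_true)
y_pred = model.predict(X)

# MetricFrame evaluates each metric overall and per sensitive group,
# surfacing disparities such as unequal selection rates.
mf = MetricFrame(metrics={"accuracy": accuracy_score,
                          "selection_rate": selection_rate},
                 y_true=y_true, y_pred=y_pred,
                 sensitive_features=sensitive)
print(mf.by_group)
```

If the per-group results reveal a disparity, Fairlearn also provides mitigation algorithms (such as reduction-based approaches) that retrain or post-process the model under fairness constraints.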


Conversations on responsible AI

Explore podcasts, webinars, and other resources from experts across Microsoft.