Responsible AI resources
Explore resources designed to help you use AI responsibly at every stage of innovation, from concept to development, deployment, and beyond.
Guidelines
Read guidelines that help broaden your understanding of AI topics and of your systems’ behavior.
Management tools
Management tools are resources that you can use to build, improve, and implement your AI systems.
Technology tools
Technology tools help you build AI systems with fairness, privacy, security, and other responsible AI properties.

HAX Workbook
The HAX Workbook supports early planning and collaboration between UX, AI, PM, and engineering disciplines and helps drive alignment on product requirements across teams.

AI fairness checklist
This checklist helps teams prioritize fairness when developing AI. By operationalizing fairness concepts, it provides structure for improving ad-hoc processes and for empowering advocates.

Fairlearn
Fairlearn empowers AI developers to assess their systems' fairness and mitigate any negative impacts for groups of people, such as those defined in terms of race, gender, age, or disability status.

InterpretML
InterpretML is an open-source package used for training interpretable glassbox machine learning models and explaining blackbox systems.

Error Analysis
Error Analysis is a toolkit that enables you to identify cohorts with higher error rates and diagnose the root causes behind them to better inform your mitigation strategies.

Counterfit
Microsoft created the open-source tool Counterfit to help organizations assess AI security risks, enabling developers to ensure that their algorithms are robust, reliable, and trustworthy.

Human AI Interaction Guidelines
The Human-AI Interaction Guidelines synthesize 20 years of research into 18 recommended guidelines for designing AI systems across the user interaction and solution lifecycle.

HAX Design Patterns
The HAX design patterns provide common ways of implementing the HAX Guidelines. The patterns are UI-independent and can be implemented in a variety of systems and interfaces.

HAX Playbook
The HAX Playbook is an interactive tool for generating interaction scenarios to test when designing user-facing AI systems, before building out a fully functional system.

AI Security Guidance
In collaboration with Harvard University, we share guidance materials for modeling, detecting, and mitigating security risks and ethics issues, helping you protect your AI services.

Inclusive Design Guidelines
These guidelines can help you build AI systems that enable and draw on the full range of human diversity.

Conversational AI guidelines
Learn how to design bots that put people first and build trust in your services, using guidelines for responsible conversational AI.

SmartNoise
SmartNoise applies differential privacy (DP), which adds a carefully tuned amount of statistical noise to sensitive data, helping to protect data used in AI systems by preventing re-identification.
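To illustrate the principle only, here is the Laplace mechanism that underlies DP, sketched with NumPy rather than the SmartNoise API itself; the records and epsilon value are hypothetical:

```python
# Laplace mechanism: noise scale = sensitivity / epsilon.
# For a counting query the sensitivity is 1 (one person changes the count by 1).
import numpy as np

def dp_count(values, epsilon, rng):
    """Return the count of `values` with Laplace noise calibrated to epsilon."""
    true_count = len(values)
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

rng = np.random.default_rng(0)
records = list(range(1000))                 # hypothetical sensitive records
noisy = dp_count(records, epsilon=0.5, rng=rng)
print(round(noisy))                         # close to 1000, but not exact
```

Smaller epsilon means more noise and stronger privacy; the noisy answer stays useful in aggregate while masking any individual's presence.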

Presidio
Presidio is an open-source library for data protection and anonymization for text and images.

Dataset Documentation
Consider these questions to help you prioritize transparency when creating documentation for your ML datasets.

Confidential computing for ML
Azure confidential computing secures data with trusted execution environments or encryption, protecting sensitive data across the machine learning life cycle.

SEAL Homomorphic Encryption
Microsoft SEAL is an open-source homomorphic encryption library that allows computations to be performed on encrypted data while preventing private data from being exposed to cloud operators.

Responsible AI Toolbox
The Responsible AI Toolbox is an open-source framework that helps engineers build reliable products. The toolbox integrates ideas from several open-source tools in the areas of error analysis, interpretability, fairness, counterfactual analysis, and causal decision-making.

Human AI eXperience (HAX) Toolkit
HAX Toolkit is a set of tools that enables you to build effective and responsible human-AI interaction. It includes the Guidelines for Human-AI Interaction, the HAX Workbook, Design Patterns, and the HAX Playbook. Every resource is grounded in observed needs and validated through rigorous research and pilots with practitioner teams.

Responsible AI with Dr. Eric Horvitz
Dr. Eric Horvitz, Microsoft Chief Scientific Officer, talks with Sam Charrington on the TWIML podcast, diving into how responsible AI is a critical part of innovation across organizations.

Responsible AI with Brad Smith
Brad Smith, president of Microsoft, talks about his personal career journey, Microsoft's six core responsible AI (RAI) principles, the top ten tech issues for the next decade, and more.

Potential and Pitfalls of AI
Microsoft Chief Scientific Officer Dr. Eric Horvitz talks about his journey in Microsoft Research, the potential and pitfalls he sees in AI, how AI can help countries like India, and much more.

Machine Learning and Fairness
In the ML and Fairness webinar with Microsoft researchers Dr. Jenn Wortman Vaughan and Dr. Hanna Wallach, learn how to detect and mitigate fairness issues in your ML development.

Cryptography for the Post-Quantum World
Dr. Brian LaMacchia gives us an inside look at the world of cryptography and the number theory behind it.

Advancing accessibility
Dr. Meredith Ringel Morris explores ethical challenges such as inclusivity, bias, privacy, error, expectation setting, simulated data, and social acceptability that must be considered in the development of AI.

CHI squared
Dr. Ken Hinckley and Dr. Meredith Ringel Morris identify potential areas of concern regarding how several AI technology categories may impact particular disability constituencies if care is not taken in their design, development, and testing.

Responsible AI with Dr. Saleema Amershi
Dr. Amershi talks about life at the intersection of AI and HCI and does a little AI myth-busting. She also gives us an overview of what – and who – it takes to build responsible AI systems and how a personal desire to make her own life easier may make your life easier too.

Transparency and Intelligibility
Explore how to best incorporate transparency into the machine learning life cycle in this webinar led by Dr. Jenn Wortman Vaughan, a Senior Principal Researcher at Microsoft. You will learn about traceability, communication, and intelligibility—as well as the importance of taking a human-centered approach.

Life at the Intersection of AI & Society
Dr. Ece Kamar, a senior researcher in the Adaptive Systems and Interaction Group at Microsoft Research, is working to help us understand AI’s far-reaching implications. She talks about the complementarity between humans and machines, debunks some common misperceptions about AI, and more.

Responsible AI in Practice with Sarah Bird
Dr. Sarah Bird, Principal Program Manager at Microsoft, meets with Sam Charrington from TWiML to discuss a handful of new tools from Microsoft focused on responsible machine learning.

Human-Centered Design with Mira Lane
Mira Lane, Partner Director for Ethics and Society at Microsoft meets with Sam Charrington from TWiML to discuss how she defines human-centered design, its connections to culture and responsible innovation, and how these ideas can be implemented across large engineering organizations.

Social Impacts of Artificial Intelligence
Dr. Fernando Diaz, a Principal Research Manager at Microsoft Research Montreal, shares his insights on the kinds of questions we need to be asking about artificial intelligence and its impact on society.

Tales from the Crypt(ography) Lab
Dr. Kristin Lauter talks about how homomorphic encryption – part of the field of Private AI – allows us to operate on, while still protecting, our most sensitive data.

Keeping an Eye on AI
Dr. Kate Crawford talks about both the promises and problems of AI, why bigger isn’t necessarily better, and how we can adopt AI design principles that empower people to shape their technical tools.