Responsible AI resources
Explore resources designed to help you responsibly use AI at every stage of innovation – from concept to development, deployment and beyond.

Human-AI interaction guidelines
Use guidelines for designing AI systems across the user interaction and solution life cycle.

Conversational AI guidelines
Learn how to design bots that put people first and build trust in your services, using guidelines for responsible conversational AI.

Inclusive design guidelines
These guidelines can help you build AI systems that enable and draw on the full range of human diversity.

AI fairness checklist
This checklist can help you prioritise fairness when developing AI systems.

Datasheets for datasets template
Consider these questions to help prioritise transparency by creating datasheets for the datasets involved in your AI systems.

Understand
AI systems can behave unexpectedly for a variety of reasons. These software tools can help you understand the behaviour of your AI systems, so that you can better tailor them to your needs.

InterpretML
InterpretML is an open source Python package for training interpretable machine learning models and explaining blackbox systems.
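As a rough sketch of how the package slots into a familiar scikit-learn workflow, the example below trains an Explainable Boosting Machine (one of InterpretML's glassbox models) and opens its global explanation. The sample dataset and default settings are illustrative assumptions only.

```python
# Minimal InterpretML sketch: train a glassbox model and view its explanation.
# The breast-cancer sample dataset and default settings are illustrative only.
from interpret import show
from interpret.glassbox import ExplainableBoostingClassifier
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Explainable Boosting Machines are interpretable by design.
ebm = ExplainableBoostingClassifier()
ebm.fit(X_train, y_train)

# Inspect how each feature contributes to predictions overall.
show(ebm.explain_global())
```

For models that are not interpretable by design, the package's interpret.blackbox module offers post-hoc explainers (for example, LIME- and SHAP-based ones) for the blackbox case described above.
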
Fairlearn
The Fairlearn open source toolkit empowers developers of AI systems to assess their systems’ fairness and mitigate any negative impacts for groups of people, such as those defined in terms of race, gender, age or disability status.
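As an illustration only, the sketch below uses Fairlearn's MetricFrame to disaggregate accuracy and selection rate across groups defined by a sensitive feature; the tiny hand-made arrays stand in for your own labels, predictions and demographic data.

```python
# Minimal Fairlearn sketch: compare model performance across groups.
# The labels, predictions and sensitive feature below are placeholder data.
import pandas as pd
from sklearn.metrics import accuracy_score
from fairlearn.metrics import MetricFrame, selection_rate

y_true = pd.Series([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = pd.Series([1, 0, 0, 1, 0, 1, 1, 0])
group = pd.Series(["A", "A", "B", "B", "A", "B", "A", "B"])  # e.g. a demographic attribute

# MetricFrame computes each metric overall and for each group.
mf = MetricFrame(
    metrics={"accuracy": accuracy_score, "selection_rate": selection_rate},
    y_true=y_true,
    y_pred=y_pred,
    sensitive_features=group,
)
print(mf.by_group)      # metric values for each group
print(mf.difference())  # largest between-group gap for each metric
```

If the assessment reveals gaps, the toolkit's mitigation algorithms (such as its reductions and post-processing approaches) can retrain or adjust a model under a chosen fairness constraint.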

Research supporting responsible AI
Microsoft researchers are working with the broader academic community on the advancement of responsible AI practices and technologies. Our research collection page provides an overview of some key areas where our researchers are working towards more responsible and trustworthy AI systems.

Responsible AI with Brad Smith
Brad Smith, president of Microsoft, talks about his personal career journey, Microsoft's six core responsible AI principles, the top ten tech issues for the next decade, and more.

Potential and Pitfalls of AI
Microsoft Chief Scientific Officer Dr Eric Horvitz talks about his journey in Microsoft Research, the potential and pitfalls he sees in AI, how AI can help countries like India, and much more.

Machine Learning and Fairness
In the Machine Learning and Fairness webinar, Microsoft Senior Principal Researchers Dr Jenn Wortman Vaughan and Dr Hanna Wallach show how to make detecting and mitigating fairness issues a priority in your ML development. In the accompanying podcast, Hanna digs deeper into these topics.

Keeping an Eye on AI
Dr Kate Crawford talks about both the promises and the problems of AI, why bigger isn’t necessarily better and how we can adopt AI design principles that empower people to shape their technical tools.

Cryptography for the Post-Quantum World
Dr Brian LaMacchia gives us an inside look at the world of cryptography and the number theory behind it.

Advancing accessibility
Dr Meredith Ringel Morris explores ethical challenges in the development of AI, including how inclusivity, bias, privacy, error, expectation setting, simulated data and social acceptability must be considered.

CHI squared
Dr Ken Hinckley and Dr Meredith Ringel Morris identify potential areas of concern regarding how several AI technology categories may impact particular disability constituencies if care is not taken in their design, development and testing.

Responsible AI with Dr Saleema Amershi
Dr Amershi talks about life at the intersection of AI and HCI and does a little AI myth-busting. She also gives us an overview of what – and who – it takes to build responsible AI systems and reveals how a personal desire to make her own life easier may make your life easier too.

Transparency and Intelligibility
Explore how to best incorporate transparency into the machine learning life cycle in this webinar led by Dr Jenn Wortman Vaughan, a Senior Principal Researcher at Microsoft. You will learn about traceability, communication and intelligibility – and the importance of taking a human-centred approach.

Life at the Intersection of AI & Society
Dr Ece Kamar, a senior researcher in the Adaptive Systems and Interaction Group at Microsoft Research, is working to help us understand AI’s far-reaching implications. She talks about the complementarity between humans and machines, debunks some common misperceptions about AI, and more.

Responsible AI in Practice with Sarah Bird
Dr Sarah Bird, Principal Program Manager at Microsoft, meets with Sam Charrington from TWiML to discuss a handful of new tools from Microsoft focused on responsible machine learning.

Human-Centred Design with Mira Lane
Mira Lane, Partner Director for Ethics and Society at Microsoft, meets with Sam Charrington from TWiML and discusses how she defines human-centred design, its connections to culture and responsible innovation, and how these ideas can be implemented across large engineering organisations.

Social Impacts of Artificial Intelligence
Dr Fernando Diaz, a Principal Research Manager at Microsoft Research Montreal, shares his insights on the kinds of questions we need to be asking about artificial intelligence and its impact on society.

Tales from the Crypt(ography) Lab
Dr Kristin Lauter talks about how homomorphic encryption – part of the field of Private AI – allows us to operate on our most sensitive data while still protecting it.