Responsible AI resources
Explore resources designed to help you use AI responsibly at every stage of innovation, from concept to development, deployment, and beyond.
Human-AI interaction guidelines
Use guidelines for designing AI systems across the user interaction and solution lifecycle.
Conversational AI guidelines
Learn how to design bots that put people first and build trust in your services, using guidelines for responsible conversational AI.
Inclusive design guidelines
These guidelines can help you build AI systems that enable and draw on the full range of human diversity.
AI systems can behave unexpectedly for a variety of reasons. These software tools can help you understand the behavior of your AI systems so that you can better tailor them to your needs.
Fairlearn empowers developers of AI systems to assess their systems' fairness and mitigate any negative impacts for groups of people, such as those defined in terms of race, gender, age, or disability status.
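The fairness assessment Fairlearn performs starts from a simple idea: compute a performance metric separately for each sensitive group and compare the results. The sketch below illustrates that idea in plain Python with hypothetical data; the function name `accuracy_by_group` and the toy labels are illustrative only, and in practice Fairlearn provides richer tooling for this disaggregated analysis along with mitigation algorithms.

```python
from statistics import mean

# Hypothetical toy data: true labels, model predictions, and a
# sensitive feature (e.g., a demographic group) per example.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 1, 1, 1, 0]
group  = ["A", "A", "A", "A", "B", "B", "B", "B"]

def accuracy_by_group(y_true, y_pred, group):
    """Accuracy disaggregated by sensitive group (the core idea
    behind Fairlearn-style fairness assessment)."""
    return {
        g: mean(int(t == p)
                for t, p, gg in zip(y_true, y_pred, group) if gg == g)
        for g in sorted(set(group))
    }

scores = accuracy_by_group(y_true, y_pred, group)
# Gap between the best- and worst-served groups; a large gap
# signals a potential fairness issue worth investigating.
disparity = max(scores.values()) - min(scores.values())
print(scores)
print(disparity)
```

A single aggregate accuracy number would hide the fact that group B is served worse than group A here; disaggregating by group is what surfaces the disparity.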
4 lessons on designing responsible, ethical tech
Microsoft is highlighted in a new case study series from the World Economic Forum and the Markkula Center for Applied Ethics at Santa Clara University, focused on how companies incorporate ethical thinking into the development of technology.
Insights from Microsoft Research
Microsoft researchers are working with the broader academic community on the advancement of responsible AI practices and technologies. Our research collection page provides an overview of some key areas where our researchers are working towards more responsible and trustworthy AI systems.
Responsible AI with Dr. Eric Horvitz
Dr. Eric Horvitz, Microsoft Chief Scientific Officer, talks with Sam Charrington on the TWIML podcast, diving into how responsible AI is a critical part of innovation across organizations.
Responsible AI with Brad Smith
Brad Smith, president of Microsoft, talks about his personal career journey, Microsoft's six core responsible AI principles, the top ten tech issues for the next decade, and more.
Potential and Pitfalls of AI
Microsoft Chief Scientific Officer Dr. Eric Horvitz talks about his journey in Microsoft Research, the potential and pitfalls he sees in AI, how AI can help countries like India, and much more.
Cryptography for the Post-Quantum World
Dr. Brian LaMacchia gives us an inside look at the world of cryptography and the number theory behind it.
Dr. Meredith Ringel Morris explores ethical challenges, such as inclusivity, bias, privacy, error, expectation setting, simulated data, and social acceptability, that must be considered in the development of AI.
Dr. Ken Hinckley and Dr. Meredith Ringel Morris identify potential areas of concern regarding how several AI technology categories may impact particular disability constituencies if care is not taken in their design, development, and testing.
Responsible AI with Dr. Saleema Amershi
Dr. Amershi talks about life at the intersection of AI and HCI and does a little AI myth-busting. She also gives us an overview of what – and who – it takes to build responsible AI systems and how a personal desire to make her own life easier may make your life easier too.
Transparency and Intelligibility
Explore how to best incorporate transparency into the machine learning life cycle in this webinar led by Dr. Jenn Wortman Vaughan, a Senior Principal Researcher at Microsoft. You will learn about traceability, communication, and intelligibility—as well as the importance of taking a human-centered approach.
Life at the Intersection of AI & Society
Dr. Ece Kamar, a senior researcher in the Adaptive Systems and Interaction Group at Microsoft Research, is working to help us understand AI’s far-reaching implications. She talks about the complementarity between humans and machines, debunks some common misperceptions about AI, and more.
Responsible AI in Practice with Sarah Bird
Dr. Sarah Bird, Principal Program Manager at Microsoft, meets with Sam Charrington from TWiML to discuss a handful of new tools from Microsoft focused on responsible machine learning.
Human-Centered Design with Mira Lane
Mira Lane, Partner Director for Ethics and Society at Microsoft meets with Sam Charrington from TWiML and discusses how she defines human-centered design, its connections to culture and responsible innovation, and how these ideas can be implemented across large engineering organizations.
Social Impacts of Artificial Intelligence
Dr. Fernando Diaz, a Principal Research Manager at Microsoft Research Montreal, shares his insights on the kinds of questions we need to be asking about artificial intelligence and its impact on society.
Tales from the Crypt(ography) Lab
Dr. Kristin Lauter talks about how homomorphic encryption – part of the field of Private AI – allows us to operate on, while still protecting, our most sensitive data.
Keeping an Eye on AI
Dr. Kate Crawford talks about both the promises and problems of AI, why bigger isn’t necessarily better, and how we can adopt AI design principles that empower people to shape their technical tools.