Putting principles into practice at Microsoft
We are committed to making sure AI systems are developed responsibly and in ways that warrant people’s trust.
Microsoft responsible AI principles
Operationalizing responsible AI
We are operationalizing responsible AI across Microsoft through a central effort led by the Aether Committee, the Office of Responsible AI (ORA), and Responsible AI Strategy in Engineering (RAISE). Together, Aether, ORA, and RAISE work closely with our teams to uphold Microsoft's responsible AI principles in their day-to-day work.

Governance
Setting the company-wide rules for enacting responsible AI, as well as defining roles and responsibilities for teams involved in this effort.

Overview
Aether was established at Microsoft in 2017. Our senior leadership relies on Aether to make recommendations on responsible AI issues, technologies, processes, and best practices. Its working groups undertake research and development, and provide advice on emerging questions, challenges, and opportunities.

Building responsible AI tooling & systems
RAISE defines and executes the tooling and system strategy for responsible AI across engineering teams. It is developing One Engineering System (1ES)—a set of tools and systems built on Azure ML that will help customers adopt responsible AI practices and help internal engineering groups implement Microsoft's company-wide rules for responsible AI.

Responsible AI at Microsoft
We’ve developed six core principles that guide our approach to responsible AI.

Establish a responsible AI strategy
Learn how to develop your own responsible AI strategy and principles based on the values of your organization.

Design, build, and manage your AI solution
We are developing resources to help organizations put responsible AI principles into practice.