
2024 Responsible AI Transparency Report

As a company at the forefront of AI research and technology, we are committed to sharing our responsible AI practices as they evolve.

Developing safe, secure, and trustworthy AI

In this inaugural Responsible AI Transparency Report, we provide insight into how we build applications that use generative AI; how we make decisions about, and oversee the deployment of, those applications; how we support our customers as they build their own generative AI applications; and how we learn, evolve, and grow as a responsible AI community.

How we build generative AI systems responsibly

AI is poised to shape the future, and generative AI has accelerated that transformation. Learn more about our recent innovations that build on the NIST AI Risk Management Framework to govern, map, measure, and manage risks associated with generative AI.

Govern

Putting responsible AI into practice begins with our Responsible AI Standard, which details how to integrate responsible AI into engineering teams, the AI development lifecycle, and tooling.

Map

Mapping risks is a first step toward measuring and managing risks associated with AI systems. Mapping informs decisions about planning, safeguards, and the appropriate use of a generative AI system.

Measure

We’ve implemented procedures to measure AI risk and related impacts to inform how we manage these considerations when developing and using generative AI systems.

Manage

We manage identified risks at both the platform and application levels. We also work to safeguard against previously unknown risks through monitoring, feedback, and incident response systems.

How we make decisions

For each stage of the map, measure, and manage process for generative AI releases, we’ve built best practices, guidelines, and tools that reflect our learnings from the last year of releasing generative AI systems.

Deployment safety for generative AI applications

Safely deploying Copilot Studio

Copilot Studio harnesses generative AI to enable customers without programming or AI skills to build copilots. As with all generative AI systems, the Copilot Studio engineering team mapped, measured, and managed risks according to our governance framework to ensure safety prior to deployment.

Safely deploying GitHub Copilot

GitHub Copilot is an AI-powered tool designed to increase developer productivity. In developing the features for GitHub Copilot, the team worked with their Responsible AI Champions to map, measure, and manage risks associated with using generative AI in the context of coding.

Sensitive Uses program

Read about how one of our products, Copilot for Security, mapped, measured, and managed risks with guidance from the Sensitive Uses team.

How we support our customers

In addition to building our own AI systems responsibly, we empower our customers with responsible AI tools and features. We share our AI Customer Commitments, offer tools to support responsible development, and provide transparency.

AI Customer Commitments

In June 2023, we announced our AI Customer Commitments, outlining steps to support our customers on their responsible AI journey.

Tools to support responsible development

We’ve released 30 responsible AI tools that include more than 100 features to support customers’ responsible AI development. These tools help map and measure AI risks and manage identified risks through novel mitigations, real-time detection and filtering, and ongoing monitoring.
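As an illustration of what real-time detection and filtering can look like in practice, here is a minimal sketch using the Azure AI Content Safety Python SDK (azure-ai-contentsafety), one publicly available Microsoft tool in this space. The environment-variable names, the is_safe helper, and the severity threshold are illustrative assumptions, not recommended settings.

    import os

    from azure.ai.contentsafety import ContentSafetyClient
    from azure.ai.contentsafety.models import AnalyzeTextOptions
    from azure.core.credentials import AzureKeyCredential

    # Placeholder configuration: point these at your own Content Safety resource.
    client = ContentSafetyClient(
        endpoint=os.environ["CONTENT_SAFETY_ENDPOINT"],
        credential=AzureKeyCredential(os.environ["CONTENT_SAFETY_KEY"]),
    )

    def is_safe(text: str, max_severity: int = 2) -> bool:
        # Analyze the text across the service's harm categories and flag
        # anything whose severity exceeds the illustrative threshold.
        response = client.analyze_text(AnalyzeTextOptions(text=text))
        return all(
            (result.severity or 0) <= max_severity
            for result in response.categories_analysis
        )

    # Example: screen a user prompt before passing it to a model.
    print(is_safe("Example user prompt to screen."))

A gate like this is typically applied to both user inputs and model outputs, with thresholds tuned to each application's risk profile.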

Transparency to support responsible development and use

We provide documentation to our customers about our AI applications’ capabilities, limitations, intended uses, and more.

How we learn, evolve, and grow

From our growing internal community to the global responsible AI ecosystem, we continue to push forward what’s possible in developing AI systems responsibly. Here, we share our approach to learning, evolving, and growing by bringing outside perspectives in and sharing learnings.

Governance of responsible AI

At Microsoft, no single team or organization can be solely responsible for embedding and enforcing responsible AI practices.

External partnerships

We partner with governments, civil society organizations, academics, and others to advance responsible AI.

Supporting AI research

Academic research and development can help realize the potential of AI. We’ve committed support to various programs and regularly publish research to advance the state of the art in responsible AI.

Tuning in to global perspectives

In 2023, we worked with more than 50 internal and external groups to better understand how AI innovation may impact regulators and individuals in developing countries.

Explore Responsible AI at Microsoft

Earn trust

We’re committed to advancing cybersecurity and digital safety, leading the responsible use of AI, and protecting privacy.

Responsible AI

We are committed to the advancement of AI driven by ethical principles.