Digital technologies allow people across the globe to share information, news, and opinions that together span a broad range of human expression. Unfortunately, some people use online platforms and services to exploit the darkest sides of humanity, diminishing both safety and the free exchange of ideas.

At Microsoft, we believe digital safety is a shared responsibility requiring a whole-of-society approach. This means that the private sector, academic researchers, civil society, and governmental and intergovernmental actors all work together to address challenges that are too complex – and too important – for any one group to tackle alone. 

For our part, we prohibit certain content and conduct on our services, and we enforce rules that we’ve set to help keep our customers safe. We use a combination of automated detection and human content moderation to remove violating content and suspend accounts. Additional information is available on Microsoft’s Digital Safety site.

The Microsoft Services Agreement includes a Code of Conduct, which outlines what’s allowed and what’s prohibited when using a Microsoft account. Some services offer additional guidance, such as the Community Standards for Xbox, to show how the Code of Conduct applies on their services. Reporting violations of the Code of Conduct is critical to helping keep our online communities safe for everyone. More information on how to report violating content and conduct is included below.

Addressing terrorist and violent extremist content

Practices

At Microsoft, we recognize that we have an important role to play in helping to prevent terrorists and violent extremists from exploiting digital platforms. We also have a responsibility to manage our services in a way that respects fundamental values such as safety, privacy, and freedom of expression. Microsoft strives to take a balanced approach to addressing terrorist or violent extremist content (TVEC) on our hosted consumer services. As specified in Microsoft’s Code of Conduct and on our Digital Safety site, we do not allow content that praises or supports terrorists or violent extremists, helps them to recruit, or encourages or enables their activities. We look to the United Nations Security Council’s Consolidated List to identify terrorists or terrorist groups. Violent extremists include people who embrace an ideology of violence or violent hatred towards another group.

We also collaborate with multistakeholder partners – including the Global Internet Forum to Counter Terrorism (GIFCT) – to help inform our policies and practices, and we have signed on to the Christchurch Call to Action as part of our commitment to work collectively to eliminate terrorist and violent extremist content online.

Microsoft is a founding member of the GIFCT and sits on its Operating Board. Through the GIFCT, Microsoft participates in a range of activities, including engagement in its multistakeholder working groups and in the GIFCT’s Incident Response processes. In the event the GIFCT and its Operating Board activate a Content Incident or the Content Incident Protocol, Microsoft ingests the related hashes from the GIFCT’s hash-sharing database. This allows Microsoft to quickly become aware of, assess, and address potentially violating content circulating on its consumer services as a result of an offline terrorist or violent extremist event. The GIFCT also publishes its own annual transparency report, which includes information on the hash-sharing database.

Processes and systems

Detection and enforcement related to TVEC

We review reports from users and third parties on potential TVEC, take action on content, and, if necessary, take action on accounts associated with violations of our Code of Conduct. Users can appeal these account actions by visiting this webpage and using this web form. In addition, we leverage a variety of tools, including hash-matching technology and other forms of proactive detection, to detect TVEC for subsequent review.
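For illustration, hash-matching detection of this kind can be sketched as follows. This is a minimal example, not Microsoft’s actual implementation: the hash set, queue, and function names are hypothetical, and a plain MD5 digest stands in for production matching, which also uses perceptual technologies such as PhotoDNA. Consistent with the process described above, a match only queues the item for subsequent review.

```python
import hashlib

# Hypothetical set of hex digests of known TVEC files, e.g., ingested from
# a hash-sharing database. The value below is simply the MD5 of b"hello".
KNOWN_TVEC_HASHES = {
    "5d41402abc4b2a76b9719d911017c592",
}

review_queue: list[str] = []  # matches are queued for review, not auto-removed


def scan_upload(upload_id: str, data: bytes) -> bool:
    """Flag an upload for subsequent human review if it matches a known hash."""
    digest = hashlib.md5(data).hexdigest()
    if digest in KNOWN_TVEC_HASHES:
        review_queue.append(upload_id)
        return True
    return False


# Example: b"hello" hashes to the digest listed above, so it is flagged.
assert scan_upload("upload-001", b"hello")
assert not scan_upload("upload-002", b"benign content")
```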

Outcomes – January through June 2023

During the period, for our hosted consumer services – such as OneDrive, Outlook, Skype, and Xbox – Microsoft actioned 542 pieces of content and 50 accounts associated with TVEC. Automated technologies detected 97.4 percent of the actioned content; the remainder was reported to Microsoft by users or third parties. None of the accounts actioned for TVEC were reinstated upon appeal.

Note: Data in this report represents January through June 2023 and includes Microsoft hosted consumer services such as OneDrive, Outlook, Skype, and Xbox. This report does not include data for LinkedIn or GitHub, which issue their own transparency reports.


FAQ

General questions about this report


This report addresses Microsoft consumer services, including (but not limited to) OneDrive, Outlook, Skype, Xbox, and Bing. Xbox also publishes its own transparency report outlining our approach to safety in gaming. This report does not include data for LinkedIn or GitHub, which issue their own transparency reports.

When we refer to “hosted consumer services,” we are talking about Microsoft services where Microsoft hosts content generated or uploaded by credentialed users (i.e., those logged into a Microsoft account). Examples of these services include OneDrive, Skype, Outlook and Xbox.

For this report, “content actioned” refers to when we remove a piece of user-generated content from our services and/or block user access to it. For Bing, “content actioned” may also mean filtering or de-listing a URL from the search engine index.

For this report, “account actioned” refers to when we suspend or block access to an account, or restrict access to content within the account.

“Proactive detection” refers to Microsoft-initiated flagging of content on our services, whether through automated or manual review.

Microsoft uses hash-based scanning technologies (e.g., PhotoDNA and MD5) as well as AI-based technologies, such as text-based classifiers, image classifiers, and a grooming detection technique; a brief sketch of the difference between exact and perceptual hashing follows these definitions.

“Accounts reinstated” refers to actioned accounts that were fully restored upon appeal, including content and account access.
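To illustrate the distinction between the scanning technologies listed above: an MD5 hash matches only byte-identical copies, so changing a single byte of a file produces an unrelated digest. Perceptual technologies such as PhotoDNA instead produce similar signatures for visually similar images, compared using a distance threshold. A minimal sketch (PhotoDNA itself is proprietary, so only the exact-match side is shown):

```python
import hashlib

original = b"example image bytes"
altered = b"example image bytez"  # a single byte changed

# Exact hashing: any modification yields an unrelated digest, so simple
# re-encoding, resizing, or cropping defeats an MD5 match.
print(hashlib.md5(original).hexdigest())
print(hashlib.md5(altered).hexdigest())

# A perceptual hash (e.g., PhotoDNA, not shown here) would instead yield
# nearby signatures for visually similar images, compared against a
# distance threshold rather than tested for strict equality.
```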


Microsoft both contributes hashes to and consumes some hashes from the GIFCT industry hash-sharing database. We have been contributing hashes since the database became operational in April 2016 and started ingesting hashes in the summer of 2017.

Microsoft leverages hashes to detect duplicates of known terrorist and violent extremist content on our hosted consumer services. Microsoft determines whether to action matching content according to our own Microsoft Services Agreement, Code of Conduct, and/or community guidelines.

For more information on the GIFCT hash-sharing database, including the total number of hashes and a breakdown by type, please refer to the annual GIFCT transparency report.

Our Bing search engine strives to be an unbiased information and action tool, presenting links to all relevant information available on the Internet. Therefore, we remove links to terrorist-related content from Bing only when that takedown is required of search providers under local law. Government requests for content removal are reported as part of our Government Requests for Content Removal Report.