Digital Safety Content Report
The Digital Safety Content Report covers actions that Microsoft has taken in relation to child sexual exploitation and abuse imagery (CSEAI), terrorist and violent extremist content (TVEC), as well as non-consensual intimate imagery (NCII).
Digital technologies allow people across the globe to share information, news, and opinions – which, together, span the full range of human expression. Unfortunately, some people use online platforms and services to exploit the darkest sides of humanity, which diminishes both safety and the free exchange of ideas.
At Microsoft, we believe digital safety is a shared responsibility requiring a whole-of-society approach. This means that the private sector, academic researchers, civil society, and governmental and intergovernmental actors all work together to address challenges that are too complex – and too important – for any one group to tackle alone.
For our part, we prohibit certain content and conduct, and we enforce rules that we’ve set to help keep our customers safe. We use a combination of automated detection and human content moderation to remove violating content and suspend accounts.
The Microsoft Services Agreement includes a Code of Conduct, which outlines what’s allowed and what’s prohibited when using a Microsoft account. Some services offer additional guidance to show how the Code of Conduct applies to their content, such as the Community Standards for Xbox. Reporting violations of the Code of Conduct is critical to helping keep our online communities safe for everyone. More information on how to report violating content and conduct is included below.
Microsoft takes a balanced approach to addressing TVEC, and we collaborate with multistakeholder partners – including the Global Internet Forum to Counter Terrorism (GIFCT) – to help inform our policies and practices. Terrorist and violent extremist content is prohibited on Microsoft platforms and services, and we have signed onto the Christchurch Call to Action as part of our commitment to addressing the abuse of technology to spread such content. We have also joined with others in industry in committing to nine steps for individual and collective action to prevent and respond to TVEC.
Microsoft is a founding member of the GIFCT and helped shape its transition into an independent organization. The GIFCT’s multistakeholder Independent Advisory Committee advises the Operating Board – the organization’s primary governing body – on priorities and assesses its performance. In 2020, Microsoft served as the Chair of the GIFCT Operating Board.
Via the GIFCT, Microsoft supports practical and academic research into terrorists’ and violent extremists’ abuse of technology. In addition, we served as the chief architect of the GIFCT’s Content Incident Protocol (CIP), designed to thwart the spread of violating content across GIFCT member platforms during a terrorist or violent extremist event. Microsoft also developed the operational structure for the GIFCT’s six substantive working groups and currently serves as the facilitator of the GIFCT’s Crisis Response Working Group.
Detection and enforcement related to TVEC
The Microsoft Services Agreement Code of Conduct prohibits the “posting [of] terrorist or violent extremist content.” We encourage the reporting of content posted by – or in support of – a terrorist organization that depicts graphic violence, encourages violent action, endorses a terrorist organization or its acts, or encourages people to join such groups. We review these reports; take action on content; and, if necessary, suspend accounts associated with violations of our Code of Conduct. In addition, we leverage a variety of tools, including hash-matching technology and other forms of proactive detection, to detect terrorist and violent extremist content. The GIFCT also publishes its own annual transparency report, including information on the hash-sharing database.
During the reporting period, for our hosted consumer services and products – such as OneDrive, Outlook, Skype and Xbox – Microsoft actioned 2,458 pieces of content and suspended 2,346 accounts associated with TVEC. Microsoft detected 99.1% of the content that was actioned, while the remainder was reported to Microsoft by users or third parties. Of the accounts suspended for TVEC, 11.2% were reinstated upon appeal, typically retaining a block on the violating content.
Note: Data in this report represents July–December 2020 and is inclusive of Microsoft hosted consumer products and services, including OneDrive, Outlook, Skype and Xbox. This report does not include data representing LinkedIn or GitHub, which have their own transparency reports.
When we refer to “hosted consumer products,” we are talking about Microsoft products and services where Microsoft hosts content generated or uploaded by credentialed users (i.e., those logged into a Microsoft account). Examples of these products and services include OneDrive, Skype, Outlook and Xbox.
For this report, “content actioned” refers to when we remove a piece of user-generated content from our products and services and/or block user access to a piece of user-generated content.
For purposes of Bing, “content actioned” may also mean filtering or de-listing a URL from the search engine index.
When Microsoft suspends an account, we remove, either permanently or temporarily, the user’s ability to access the service with that account.
“Proactive detection” refers to Microsoft-initiated flagging of content on our products or services, whether through automated or manual review.
Microsoft uses hash-based scanning technologies (e.g., PhotoDNA or MD5) as well as AI-based technologies, such as text-based classifiers, image classifiers, and grooming detection techniques.
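To illustrate the general idea behind hash matching (not Microsoft’s actual implementation – PhotoDNA is a proprietary perceptual-hashing technology, so this sketch uses a plain MD5 exact match, and the hash set and function names here are hypothetical):

```python
import hashlib

# Hypothetical set of MD5 digests of known violating files. In practice,
# such a set would be populated from a vetted source like the GIFCT
# hash-sharing database, not hard-coded.
KNOWN_HASHES = {
    "5d41402abc4b2a76b9719d911017c592",  # placeholder entry (MD5 of b"hello")
}

def md5_digest(data: bytes) -> str:
    """Return the hexadecimal MD5 digest of a byte string."""
    return hashlib.md5(data).hexdigest()

def is_known_content(data: bytes) -> bool:
    """True if the content's digest matches a known violating hash."""
    return md5_digest(data) in KNOWN_HASHES

print(is_known_content(b"hello"))  # True - digest matches the placeholder
print(is_known_content(b"other"))  # False - no match in the set
```

Exact-match hashing like MD5 only catches byte-identical duplicates; perceptual hashes such as PhotoDNA are designed to also match re-encoded or lightly altered copies of an image.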
The Digital Safety Content Report focuses on three areas of digital safety content: child sexual exploitation and abuse imagery, terrorist and violent extremist content, and non-consensual intimate imagery. This report also provides information on some of our methods to address these areas.
The Content Removal Requests Report focuses on copyright removal requests, “right to be forgotten” requests, and government requests for content removal.
Microsoft both contributes hashes to and consumes some hashes from the GIFCT industry hash-sharing database. We have been contributing hashes since the database became operational in April 2016 and started ingesting hashes in the summer of 2017.
Microsoft leverages hashes to detect duplicates of known terrorist and violent extremist content on our hosted consumer services. Microsoft determines whether to action matching content according to our own Microsoft Services Agreement, Code of Conduct, and/or community guidelines.
For more information on the GIFCT hash-sharing database, including information on total number of hashes and breakdown by type, please refer to the annual GIFCT transparency report.
Our Bing search engine strives to be an unbiased information and action tool, presenting links to all relevant information available on the Internet. Therefore, we will remove links to terrorist-related content from Bing only when that takedown is required of search providers under local law. Government requests for content removal are reported as part of our Content Removal Requests Report.
In addition to in-product reporting tools, users can report potential terrorist or violent extremist content on Microsoft products and services via this link.