Digital Safety Content Report
The Digital Safety Content Report covers actions that Microsoft has taken in relation to child sexual exploitation and abuse imagery (CSEAI), terrorist and violent extremist content (TVEC), as well as non-consensual intimate imagery (NCII).
Digital technologies allow people across the globe to share information, news, and opinions – which, together, span the full range of human expression. Unfortunately, some people use online platforms and services to exploit the darkest sides of humanity, which diminishes both safety and the free exchange of ideas.
At Microsoft, we believe digital safety is a shared responsibility requiring a whole-of-society approach. This means that the private sector, academic researchers, civil society, and governmental and intergovernmental actors all work together to address challenges that are too complex – and too important – for any one group to tackle alone.
For our part, we prohibit certain content and conduct, and we enforce rules that we’ve set to help keep our customers safe. We use a combination of automated detection and human content moderation to remove violating content and suspend accounts.
The Microsoft Services Agreement includes a Code of Conduct, which outlines what’s allowed and what’s prohibited when using a Microsoft account. Some services offer additional guidance to show how the Code of Conduct applies to their content, such as the Community Standards for Xbox. Reporting violations of the Code of Conduct is critical to helping keep our online communities safe for everyone. More information on how to report violating content and conduct is included below.
Microsoft is a member of the WePROTECT Global Alliance, the multistakeholder organization fighting child sexual exploitation and abuse online, and represents the technology industry on the WePROTECT board. Microsoft also supports the Voluntary Principles to Counter Online Child Sexual Exploitation and Abuse and works closely with WePROTECT to promote them.
Microsoft is a founding member of the Technology Coalition, the tech industry’s non-profit association to combat online child sexual exploitation and abuse. As part of this Coalition, we’ve supported Project Protect, an industry initiative launched in 2020 that includes a multi-million-dollar investment in research and innovation to prevent online child sexual exploitation and abuse.
We also support and/or hold leadership and advisory roles with numerous other child safety organizations, including ConnectSafely.org, the Family Online Safety Institute, INHOPE, the Internet Watch Foundation, the Marie Collins Foundation, Power of Zero, and Thorn.
Detection and removal of child sexual exploitation and abuse imagery (CSEAI)
We use a variety of tools to detect CSEAI, including hash-matching technology (e.g., PhotoDNA) and other forms of proactive detection. Microsoft also offers in-product reporting in products such as OneDrive, Skype, Xbox, and Bing, through which users can report suspected CSEAI or other violating content.
Microsoft removes content that contains apparent CSEAI. As a U.S.-based company, Microsoft reports all apparent CSEAI to the National Center for Missing and Exploited Children (NCMEC) via the CyberTipline, as required by U.S. law. During the period of January to June 2020, Microsoft submitted 32,622 reports to NCMEC. We suspend the account(s) associated with the content we have reported to NCMEC for CSEAI or child sexual grooming violations.
For our hosted consumer services and products – such as OneDrive, Outlook, Skype and Xbox – Microsoft actioned 84,581 pieces of content and suspended 19,922 consumer accounts associated with CSEAI during this period. Microsoft detected 99.8% of the content that was actioned, while the remainder was reported to Microsoft by users or third parties. Of the accounts suspended for CSEAI, 0.01% were reinstated upon appeal.
For the Bing search engine, Microsoft works to prevent CSEAI from entering the search index by leveraging block lists of sites containing CSEAI identified by credible agencies and through PhotoDNA scanning. During this reporting period, Microsoft actioned 718,908 pieces of content, with 99.8% detected through PhotoDNA scanning and other proactive measures.
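The blocklist approach described above can be sketched in simplified form. This is an illustrative example, not Microsoft's implementation: the domain names and the `allowed_in_index` function are hypothetical, and a production system would combine such checks with PhotoDNA scanning and other signals.

```python
from urllib.parse import urlparse

# Hypothetical blocklist of domains identified by credible agencies
# as hosting CSEAI. Real blocklists are far larger and updated continually.
BLOCKED_DOMAINS = {"blocked.example", "bad.example"}

def allowed_in_index(url: str) -> bool:
    """Return False if the URL's host is on (or under) a blocked domain,
    so the URL is kept out of the search index."""
    host = urlparse(url).hostname or ""
    blocked = host in BLOCKED_DOMAINS or any(
        host.endswith("." + d) for d in BLOCKED_DOMAINS
    )
    return not blocked

print(allowed_in_index("https://news.example/article"))    # True: not blocked
print(allowed_in_index("https://sub.blocked.example/page"))  # False: subdomain of a blocked site
```

Matching on the host (including subdomains) rather than the full URL is one common design choice; it prevents a blocked site from evading the list by varying paths.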
Note: Data in this report covers the period January – June 2020 and includes Microsoft consumer products and services such as OneDrive, Outlook, Skype, Xbox, and Bing. This report does not include data for LinkedIn or GitHub, which publish their own transparency reports.
When we refer to “hosted consumer products,” we are talking about Microsoft products and services where Microsoft hosts content generated or uploaded by credentialed users (i.e., those logged into a Microsoft account). Examples of these products and services include OneDrive, Skype, Outlook and Xbox.
For this report, “content actioned” refers to when we remove a piece of user-generated content from our products and services and/or block user access to a piece of user-generated content.
For purposes of Bing, “content actioned” may also mean filtering or de-listing a URL from the search engine index.
When Microsoft suspends an account, we remove the user’s ability to access the account, either permanently or temporarily.
“Proactive detection” refers to Microsoft-initiated flagging of content on our products or services, whether through automated or manual review.
Microsoft uses scanning technologies (e.g., PhotoDNA or MD5 hash matching) and AI-based technologies, such as text classifiers, image classifiers, and grooming-detection techniques.
The Digital Safety Content Report focuses on three areas of digital safety content: child sexual exploitation and abuse imagery, terrorist and violent extremist content, and non-consensual intimate imagery. This report also provides information on some of our methods to address these areas.
The Content Removal Requests Report focuses on copyright removal requests, “right to be forgotten” requests, and government requests for content removal.
In 2009, Microsoft partnered with Dartmouth College to develop PhotoDNA, a technology that aids in finding and removing known images of child sexual exploitation and abuse.
PhotoDNA creates a unique digital signature (known as a “hash”) of an image which is then compared against signatures (hashes) of other photos to find copies of the same image. When matched with a database containing hashes of previously identified illegal child sexual abuse images, PhotoDNA helps detect, disrupt, and report the distribution of child sexual exploitation material. PhotoDNA is not facial recognition software and cannot be used to identify a person or an object in an image. A PhotoDNA hash is not reversible, meaning it cannot be used to recreate an image.
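The signature-and-compare workflow described above can be illustrated with a minimal sketch. To be clear about the assumptions: this example uses SHA-256, an exact cryptographic hash that only matches byte-identical files, whereas PhotoDNA uses a robust perceptual hash that can match an image even after resizing or re-encoding. The hash database and function names here are hypothetical.

```python
import hashlib

# Hypothetical database of hashes (signatures) of previously
# identified illegal images.
KNOWN_HASHES = {
    # SHA-256 of the bytes b"test", standing in for a known image's hash.
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def signature(data: bytes) -> str:
    """Compute a digital signature (hash) for an image's bytes.
    The hash is one-way: the image cannot be recreated from it."""
    return hashlib.sha256(data).hexdigest()

def matches_known_image(data: bytes) -> bool:
    """Compare the image's signature against the known-hash database."""
    return signature(data) in KNOWN_HASHES

print(matches_known_image(b"test"))   # True: signature is in the database
print(matches_known_image(b"other"))  # False: no matching signature
```

The one-way property noted in the text holds here as well: the stored hashes reveal nothing about image content and cannot be inverted, which is why hash databases can be shared for matching without distributing the underlying material.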
Microsoft has made PhotoDNA freely available to qualified organizations, including technology companies, law enforcement agencies, developers, and non-profit organizations.
More information can be found on the PhotoDNA site.
As explained by the National Center for Missing & Exploited Children (NCMEC), the CyberTipline “is the nation’s centralized reporting system” through which “the public and electronic service providers can make reports of suspected online enticement of children for sexual acts, extra-familial child sexual molestation, child pornography, child sex tourism, child sex trafficking, unsolicited obscene materials sent to a child, misleading domain names, and misleading words or digital images on the internet.”
As a U.S.-based company, Microsoft reports all apparent CSEAI to NCMEC, as required by U.S. law. According to NCMEC, staff review each tip and work to determine a potential location for the reported incident so that the report can be made available to the appropriate law enforcement agency anywhere in the world. A CyberTip report to NCMEC can include one or multiple items.