{"id":680358,"date":"2020-04-13T09:00:00","date_gmt":"2020-04-13T16:00:00","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?p=680358"},"modified":"2023-02-10T15:19:16","modified_gmt":"2023-02-10T23:19:16","slug":"research-collection-research-supporting-responsible-ai","status":"publish","type":"post","link":"https:\/\/www.microsoft.com\/en-us\/research\/blog\/research-collection-research-supporting-responsible-ai\/","title":{"rendered":"Research Collection: Research Supporting Responsible AI"},"content":{"rendered":"\n<p class=\"has-small-font-size\"><strong class=\"\">Editor\u2019s Note<\/strong><em>: In the diverse and multifaceted world of research, individual contributions can add up to significant results over time. In this new series of posts, we\u2019re connecting the dots to provide an overview of how researchers at Microsoft and their collaborators are working towards significant customer and societal outcomes that are broader than any single discipline. Here, we\u2019ve curated a selection of the work Microsoft researchers are doing to advance responsible AI. Researchers&nbsp;<\/em><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/samershi\/\">Saleema Amershi<\/a><em>,&nbsp;<\/em><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/eckamar\/\">Ece Kamar<\/a><em>,&nbsp;<\/em><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/klauter\/\">Kristin Lauter<\/a><em>,&nbsp;<\/em><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jenn\/\">Jenn Wortman Vaughan<\/a><em>, and&nbsp;<\/em><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/wallach\/\">Hanna Wallach<\/a><em>&nbsp;contributed to this post.<\/em><\/p>\n\n\n\n<p>Microsoft is committed to the advancement and use of AI grounded in principles that put people first and benefit society. 
We are putting these principles into practice throughout the company by embracing diverse perspectives, fostering continuous learning, and proactively responding as AI technology evolves.<\/p>\n\n\n\n<p>Researchers at Microsoft are making significant contributions to the advancement of responsible AI practices, techniques, and technologies \u2013 spanning human-AI interaction and collaboration, fairness, intelligibility and transparency, privacy, reliability and safety, and other areas of research.<\/p>\n\n\n\n<p>Multiple research efforts on responsible AI at Microsoft have been supported and coordinated by the company\u2019s Aether Committee and its set of expert working groups. Aether is a cross-company board that plays a key role in the company\u2019s work to operationalize responsible AI at scale, formulating recommendations on issues and processes and hosting deep dives on technical challenges and tools around responsible AI.<\/p>\n\n\n\n<p>Aether working groups focus on important opportunity areas, including&nbsp;<em>human-AI interaction and collaboration, bias and fairness, intelligibility and transparency, reliability and safety, engineering practices, and sensitive uses of AI<\/em>. Microsoft researchers actively lead and participate in the work of Aether, conducting research across disciplines and engaging with organizations and experts inside and outside the company.<\/p>\n\n\n\n<p>We embrace open collaboration across disciplines to strengthen and accelerate responsible AI, spanning fields from software engineering and development to the social sciences, user research, law, and policy. 
To further this collaboration, we open-source many tools and datasets that others can use to contribute and build upon.<\/p>\n\n\n\n<p>This work builds on Microsoft\u2019s long history of innovation to make computing more accessible and dependable for people around the world \u2013 including the creation of the Microsoft Security Development Lifecycle, the Trustworthy Computing initiative, and pioneering work in accessibility and localization.<\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-style-default is-layout-flow wp-block-quote-is-layout-flow\">\n<p><em class=\"\">\u201cResponsible AI is really all about the how: how do we design, develop and deploy these systems that are fair, reliable, safe and trustworthy. And to do this, we need to think of Responsible AI as a set of socio-technical problems. We need to go beyond just improving the data and models. We also have to think about the people who are ultimately going to be interacting with these systems.\u201d<\/em><\/p>\n<cite>\u2014<em>Dr. Saleema Amershi, Principal Researcher at Microsoft Research and Co-chair of the Aether Human-AI Interaction &amp; Collaboration Working Group<\/em><\/cite><\/blockquote>\n\n\n\n<p>This page provides an overview of some key areas where Microsoft researchers are contributing to more responsible, secure and trustworthy AI systems. For more perspective on responsible AI and other technology and policy issues, check out our&nbsp;<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/blog\/an-interview-with-microsoft-president-brad-smith\/\">podcast<\/a>&nbsp;with Microsoft President and Chief Legal Officer Brad Smith. 
For background on the Aether Committee, listen to this&nbsp;<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/lab\/microsoft-research-india\/articles\/podcast-potential-and-pitfalls-of-ai-with-dr-eric-horvitz\/\">podcast<\/a>&nbsp;with Microsoft\u2019s Chief Scientist and Aether chair Eric Horvitz.<\/p>\n\n\n\n<p>This is by no means an exhaustive list of efforts; read our&nbsp;<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/blog\/\">blog<\/a>, listen to our&nbsp;<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/blog\/category\/podcast\/\">podcast<\/a>, and subscribe to our&nbsp;<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/note.microsoft.com\/ww-registration-microsoft-research-newsletter-s.html?wt.mc_id=F-\" target=\"_blank\" rel=\"noopener noreferrer\">newsletter<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>&nbsp;to stay up to date on all things research at Microsoft.<\/p>\n\n\n\n<p>Learn more about&nbsp;<a rel=\"noreferrer noopener\" href=\"https:\/\/www.microsoft.com\/en-us\/ai\/responsible-ai?activetab=pivot1%3aprimaryr6\" target=\"_blank\">Microsoft\u2019s commitment to responsible AI<\/a>. 
For more guidelines and tools to help responsibly use AI at every stage of innovation, visit the&nbsp;<a rel=\"noreferrer noopener\" href=\"https:\/\/www.microsoft.com\/en-us\/ai\/responsible-ai-resources?activetab=pivot1%3aprimaryr4\" target=\"_blank\">Responsible AI resource center<\/a>.<\/p>\n\n\n<aside id=accordion-a9dfb835-5547-4527-ade1-25fd37e697a5 class=\"msr-table-of-contents-block accordion mb-5 pb-0\" data-bi-aN=\"table-of-contents\">\n\t<button class=\"btn btn-collapse bg-gray-100 mb-0 display-flex justify-content-between\" type=\"button\" data-mount=\"collapse\" data-target=\"#accordion-collapse-a9dfb835-5547-4527-ade1-25fd37e697a5\" aria-expanded=\"true\" aria-controls=\"accordion-collapse-a9dfb835-5547-4527-ade1-25fd37e697a5\">\n\t\t<span class=\"msr-table-of-contents-block__label subtitle\">In this article<\/span>\n\t\t<span class=\"msr-table-of-contents-block__current mr-4 text-gray-600 font-weight-normal\" aria-hidden=\"true\"><\/span>\n\t<\/button>\n\t<div id=\"accordion-collapse-a9dfb835-5547-4527-ade1-25fd37e697a5\" class=\"msr-table-of-contents-block__collapse-wrapper collapse show\" data-parent=\"#accordion-a9dfb835-5547-4527-ade1-25fd37e697a5\">\n\t\t<div class=\"accordion-body bg-gray-100 border-top pt-4\">\n\t\t\t<ol class=\"msr-table-of-contents-block__list\">\n\t\t\t\t\t\t\t\t\t<li class=\"msr-table-of-contents-block__list-item\">\n\t\t\t\t\t\t<a href=\"#fairness\" class=\"msr-table-of-contents-block__list-item-link\">Fairness<\/a>\n\t\t\t\t\t<\/li>\n\t\t\t\t\t\t\t\t\t<li class=\"msr-table-of-contents-block__list-item\">\n\t\t\t\t\t\t<a href=\"#transparency-and-intelligibility\" class=\"msr-table-of-contents-block__list-item-link\">Transparency and Intelligibility<\/a>\n\t\t\t\t\t<\/li>\n\t\t\t\t\t\t\t\t\t<li class=\"msr-table-of-contents-block__list-item\">\n\t\t\t\t\t\t<a href=\"#reliability-and-safety\" class=\"msr-table-of-contents-block__list-item-link\">Reliability and Safety<\/a>\n\t\t\t\t\t<\/li>\n\t\t\t\t\t\t\t\t\t<li 
class=\"msr-table-of-contents-block__list-item\">\n\t\t\t\t\t\t<a href=\"#human-ai-interaction-and-collaboration\" class=\"msr-table-of-contents-block__list-item-link\">Human-AI Interaction and Collaboration<\/a>\n\t\t\t\t\t<\/li>\n\t\t\t\t\t\t\t\t\t<li class=\"msr-table-of-contents-block__list-item\">\n\t\t\t\t\t\t<a href=\"#private-ai\" class=\"msr-table-of-contents-block__list-item-link\">Private AI<\/a>\n\t\t\t\t\t<\/li>\n\t\t\t\t\t\t\t\t\t<li class=\"msr-table-of-contents-block__list-item\">\n\t\t\t\t\t\t<a href=\"#partnerships-and-support-for-student-work\" class=\"msr-table-of-contents-block__list-item-link\">Partnerships and Support for Student Work<\/a>\n\t\t\t\t\t<\/li>\n\t\t\t\t\t\t\t\t\t<li class=\"msr-table-of-contents-block__list-item\">\n\t\t\t\t\t\t<a href=\"#cited-publications\" class=\"msr-table-of-contents-block__list-item-link\">Cited publications<\/a>\n\t\t\t\t\t<\/li>\n\t\t\t\t\t\t\t<\/ol>\n\t\t<\/div>\n\t<\/div>\n\t<span class=\"msr-table-of-contents-block__progress-bar\"><\/span>\n<\/aside>\n\n\n\n<h3 id=\"fairness\">Fairness<\/h3>\n\n\n\n<p>The fairness of AI systems is crucially important now that AI plays an increasing role in our daily lives. 
That\u2019s why Microsoft researchers are advancing the frontiers of research on this topic, focusing on many different aspects of fairness, including:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Definitions<\/strong>: Different types of&nbsp;<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/www.youtube.com\/watch?v=fMym_BKWQzk\" target=\"_blank\" rel=\"noopener noreferrer\">fairness-related harms<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>&nbsp;that occur in the context of AI, including&nbsp;<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/toward-fairness-in-ai-for-people-with-disabilities-a-research-roadmap\/\">harms that are specific to people with disabilities<\/a>&nbsp;and harms arising from&nbsp;<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/social-data-biases-methodological-pitfalls-and-ethical-boundaries\/\">data quality issues, methodological pitfalls, and ethical limitations<\/a>.<\/li>\n\n\n\n<li><strong>Development practices<\/strong>: Ways to make fairness a priority throughout the AI development and deployment lifecycle by&nbsp;<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/improving-fairness-in-machine-learning-systems-what-do-industry-practitioners-need\/\">identifying industry practitioners\u2019 needs<\/a>&nbsp;for support in developing fairer AI systems and understanding&nbsp;<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/co-designing-checklists-to-understand-organizational-challenges-and-opportunities-around-fairness-in-ai\/\">organizational challenges and opportunities<\/a>&nbsp;around fairness in AI.<\/li>\n\n\n\n<li><strong>Applications<\/strong>: Fairness-related harms in natural language processing and&nbsp;<a 
href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/facts-ir-fairness-accountability-confidentiality-transparency-and-safety-in-information-retrieval\/\">information retrieval<\/a>, such as&nbsp;<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/quantifying-reducing-stereotypes-word-embeddings\/\">gender stereotypes reflected in word embeddings<\/a>,&nbsp;<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/www.meetup.com\/SEA-Search-Engines-Amsterdam\/events\/tpklmrybccbpc\/\" target=\"_blank\" rel=\"noopener noreferrer\">problematic predictive text<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, and&nbsp;<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/algorithmic-greenlining-an-approach-to-increase-diversity\/\">homogenous search results<\/a>, as well as ways to&nbsp;<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/whats-in-a-name-reducing-bias-in-bios-without-access-to-protected-attributes\/\">leverage some of these harms to achieve fairer outcomes<\/a>&nbsp;in other tasks.<\/li>\n\n\n\n<li><strong>The law<\/strong>: The relationship between AI and the law, including tensions between antidiscrimination laws and the use of AI systems in employment from both a&nbsp;<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/stretching-human-laws-to-apply-to-machines-the-dangers-of-a-colorblind-computer\/\">disparate treatment<\/a>&nbsp;and a&nbsp;<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/papers.ssrn.com\/sol3\/papers.cfm?abstract_id=2477899\" target=\"_blank\" rel=\"noopener noreferrer\">disparate impact<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>&nbsp;perspective.<\/li>\n<\/ul>\n\n\n\n<div class=\"annotations \" data-bi-aN=\"margin-callout\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 
annotations__list--right\">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">Tool<\/span>\n\t\t\t<a href=\"https:\/\/github.com\/fairlearn\/fairlearn\" data-bi-cN=\"Fairlearn\" data-external-link=\"false\" data-bi-aN=\"margin-callout\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>Fairlearn<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n\n\n\n<p>For those who wish to prioritize fairness in their own AI systems, Microsoft researchers, in collaboration with Azure ML, have released&nbsp;<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" href=\"https:\/\/github.com\/fairlearn\/fairlearn\" target=\"_blank\">Fairlearn<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, an open-source Python package that enables developers of AI systems to assess their systems\u2019 fairness and mitigate any negative impacts for groups of people, such as those defined in terms of race, gender, age, or disability status. Fairlearn, which focuses specifically on harms of allocation or quality of service, draws on two papers by Microsoft researchers on incorporating quantitative fairness metrics into&nbsp;<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/a-reductions-approach-to-fair-classification\/\">classification settings<\/a>&nbsp;and&nbsp;<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/fair-regression-quantitative-definitions-and-reduction-based-algorithms\/\">regression settings<\/a>, respectively. Of course, even with precise, targeted software tools like Fairlearn, it\u2019s still easy for teams to overlook fairness considerations, especially when they are up against tight deadlines. 
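At its core, the assessment step that Fairlearn supports amounts to disaggregating a metric by sensitive group and comparing the results. The sketch below is plain Python, not Fairlearn's actual API, and the loan-approval predictions and group labels are invented for illustration; it shows one such metric, the demographic parity difference:

```python
# Illustrative sketch of a group fairness metric (not Fairlearn's API).
# The predictions and sensitive-group labels below are invented.
from collections import defaultdict

def selection_rates(predictions, groups):
    """Fraction of positive (1) predictions within each sensitive group."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_difference(predictions, groups):
    """Gap between highest and lowest group selection rates (0 means parity)."""
    rates = selection_rates(predictions, groups).values()
    return max(rates) - min(rates)

# Hypothetical loan-approval predictions (1 = approved) for groups A and B.
preds = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(selection_rates(preds, groups))                # {'A': 0.75, 'B': 0.25}
print(demographic_parity_difference(preds, groups))  # 0.5
```

In Fairlearn itself, metrics like this are computed over real model outputs and paired with mitigation algorithms; the point of the sketch is only the shape of the computation: per-group rates first, then their gap.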
This is especially true because fairness in AI&nbsp; sits at the intersection of technology and society and can\u2019t be addressed with purely technical approaches. Microsoft researchers have therefore&nbsp;<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/co-designing-checklists-to-understand-organizational-challenges-and-opportunities-around-fairness-in-ai\/\">co-designed a fairness checklist<\/a>&nbsp;to help teams reflect on their decisions at every stage of the AI lifecycle, in turn helping them anticipate fairness issues well before deployment.<\/p>\n\n\n\n<h4 id=\"explore-more\" class=\"alignwide has-text-align-wide\">Explore more<\/h4>\n\n\n\n<div class=\"wp-block-columns alignwide is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<div class=\"annotations \" data-bi-aN=\"citation\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 \">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">Blog<\/span>\n\t\t\t<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/blog\/machine-learning-for-fair-decisions\/\" data-bi-cN=\"Machine Learning for fair decisions\" data-external-link=\"false\" data-bi-aN=\"citation\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>Machine Learning for fair decisions<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n\n\n\n<div class=\"annotations \" data-bi-aN=\"citation\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 \">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 
small\">Blog<\/span>\n\t\t\t<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/blog\/whats-in-a-name-using-bias-to-fight-bias-in-occupational-classification\/\" data-bi-cN=\"What\u2019s in a name? Using Bias to Fight Bias in Occupational Classification\" data-external-link=\"false\" data-bi-aN=\"citation\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>What\u2019s in a name? Using Bias to Fight Bias in Occupational Classification<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n\n\n\n<div class=\"annotations \" data-bi-aN=\"citation\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 \">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">Podcast<\/span>\n\t\t\t<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/blog\/advancing-accessibility-dr-meredith-ringel-morris\/\" data-bi-cN=\"Advancing accessibility with Dr. Meredith Ringel Morris\" data-external-link=\"false\" data-bi-aN=\"citation\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>Advancing accessibility with Dr. 
Meredith Ringel Morris<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n\n\n\n<div class=\"annotations \" data-bi-aN=\"citation\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 \">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">Podcast<\/span>\n\t\t\t<a href=\"https:\/\/twimlai.com\/twiml-talk-232-fairness-in-machine-learning-with-hanna-wallach\/\" data-bi-cN=\"Fairness in Machine Learning with Hanna Wallach, Co-chair of the Aether Bias and Fairness Working Group\" data-external-link=\"false\" data-bi-aN=\"citation\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>Fairness in Machine Learning with Hanna Wallach, Co-chair of the Aether Bias and Fairness Working Group<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<div class=\"annotations \" data-bi-aN=\"citation\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 \">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">Publication<\/span>\n\t\t\t<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/co-designing-checklists-to-understand-organizational-challenges-and-opportunities-around-fairness-in-ai\/\" data-bi-cN=\"Co-Designing Checklists to Understand Organizational Challenges and Opportunities around Fairness in AI\" data-external-link=\"false\" data-bi-aN=\"citation\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold 
text-decoration-none\"><span>Co-Designing Checklists to Understand Organizational Challenges and Opportunities around Fairness in AI<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t\t\t<p class=\"annotations__caption text-neutral-400 mt-2\">CHI 2020 Best Paper Award<\/p>\n\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n\n\n\n<div class=\"annotations \" data-bi-aN=\"citation\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 \">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">Tutorial<\/span>\n\t\t\t<a href=\"https:\/\/www.youtube.com\/watch?v=UicKZv93SOY\" data-bi-cN=\"Challenges of Incorporating Algorithmic \u2018Fairness\u2019 into Practice\" data-external-link=\"false\" data-bi-aN=\"citation\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>Challenges of Incorporating Algorithmic \u2018Fairness\u2019 into Practice<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t\t\t<p class=\"annotations__caption text-neutral-400 mt-2\">(cross-industry collaboration with researchers from Spotify and CMU)<\/p>\n\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n\n\n\n<div class=\"annotations \" data-bi-aN=\"citation\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 \">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">Tutorial<\/span>\n\t\t\t<a href=\"https:\/\/www.youtube.com\/watch?v=1G5djbIK7u8\" data-bi-cN=\"Fairness-aware Machine Learning: Practical Challenges and Lessons Learned\" data-external-link=\"false\" data-bi-aN=\"citation\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold 
text-decoration-none\"><span>Fairness-aware Machine Learning: Practical Challenges and Lessons Learned<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t\t\t<p class=\"annotations__caption text-neutral-400 mt-2\">(cross-industry collaboration with researchers from Google and LinkedIn)<\/p>\n\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n\n\n\n<div class=\"annotations \" data-bi-aN=\"citation\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 \">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">Webinar<\/span>\n\t\t\t<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/blog\/machine-learning-and-fairness-webinar\/\" data-bi-cN=\"Machine Learning and Fairness with Dr. Jennifer Wortman Vaughan and Dr. Hanna Wallach\" data-external-link=\"false\" data-bi-aN=\"citation\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>Machine Learning and Fairness with Dr. Jennifer Wortman Vaughan and Dr. Hanna Wallach<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n<\/div>\n<\/div>\n\n\n\n<hr class=\"wp-block-separator has-text-color has-teal-color has-css-opacity has-teal-background-color has-background is-style-dots\"\/>\n\n\n\n<h3 id=\"transparency-and-intelligibility\">Transparency and Intelligibility<\/h3>\n\n\n\n<p>Intelligibility can uncover potential sources of unfairness, help users decide how much trust to place in a system, and generally lead to more usable products. It also can improve the robustness of machine learning systems by making it easier for data scientists and developers to identify and fix bugs. 
Because intelligibility is a fundamentally human concept, it\u2019s crucial to take a&nbsp;<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"http:\/\/www.jennwv.com\/papers\/intel-chapter.pdf\" target=\"_blank\" rel=\"noopener noreferrer\">human-centered approach<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>&nbsp;to designing and evaluating methods for achieving intelligibility. That\u2019s why Microsoft researchers are&nbsp;<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/manipulating-and-measuring-model-interpretability\/\">questioning common assumptions<\/a>&nbsp;about what makes a model \u201cinterpretable,\u201d studying&nbsp;<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"http:\/\/www.jennwv.com\/papers\/interp-ds.pdf\" target=\"_blank\" rel=\"noopener noreferrer\">data scientists\u2019 understanding and use<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>&nbsp;of existing intelligibility tools and how to make these tools&nbsp;<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/gamut-a-design-probe-to-understand-howdata-scientists-understand-machine-learning-models\/\">more usable<\/a>, and exploring the intelligibility of&nbsp;<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/understanding-the-effect-of-accuracy-on-trust-in-machine-learning-models\/\">common metrics like accuracy<\/a>.<\/p>\n\n\n\n<div class=\"annotations \" data-bi-aN=\"margin-callout\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 annotations__list--right\">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">Tool<\/span>\n\t\t\t<a href=\"https:\/\/github.com\/interpret-ml\" data-bi-cN=\"InterpretML\" data-external-link=\"false\" data-bi-aN=\"margin-callout\" 
data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>InterpretML<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n\n\n\n<p>For those eager to incorporate intelligibility into their own pipeline, Microsoft researchers have released&nbsp;<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/github.com\/interpretml\/interpret\" target=\"_blank\" rel=\"noopener noreferrer\">InterpretML<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, an open-source Python package that exposes common model intelligibility techniques to practitioners and researchers. InterpretML includes implementations of both \u201cglassbox\u201d models (like Explainable Boosting Machines, which&nbsp;<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/accurate-intelligible-models-pairwise-interactions\/\">build on Generalized Additive Models<\/a>) and techniques for generating explanations of blackbox models (like the popular LIME and SHAP, both developed by current Microsoft researchers).<\/p>\n\n\n\n<p>Beyond model intelligibility, a thorough understanding of the characteristics and origins of the data used to train a machine learning model can be fundamental to building more responsible AI. The&nbsp;<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/datasheets-for-datasets\/\">Datasheets for Datasets<\/a>&nbsp;project proposes that every dataset be accompanied by a datasheet that documents relevant information about its creation, key characteristics, and limitations. 
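To see why the glassbox models above are considered intelligible, it helps to look at the additive structure itself. In the toy sketch below, the shape functions and intercept are invented for illustration (this is not InterpretML's API, and nothing is learned from data the way an Explainable Boosting Machine would learn it); the point is that every prediction decomposes into per-feature contributions that can be read off directly:

```python
# Toy "glassbox" additive (GAM-style) model. The shape functions below
# are invented; InterpretML's EBM learns such functions from data.
def f_age(age):
    """Invented contribution of age to the score."""
    return (age - 40) / 20

def f_income(income):
    """Invented contribution of income (in $1000s), capped at 120."""
    return min(income, 120) / 100

INTERCEPT = 0.25
SHAPE_FUNCTIONS = {"age": f_age, "income": f_income}

def predict_with_explanation(features):
    """Return the score and each feature's additive contribution.

    Because the model is a sum of one-dimensional terms, the
    explanation is exact, not a post-hoc approximation.
    """
    contributions = {name: fn(features[name]) for name, fn in SHAPE_FUNCTIONS.items()}
    return INTERCEPT + sum(contributions.values()), contributions

score, parts = predict_with_explanation({"age": 50, "income": 80})
print(parts)  # {'age': 0.5, 'income': 0.8}
```

An EBM's learned shape functions play the role of `f_age` and `f_income` here; because the model is a sum of such terms, the same exact decomposition is available for every prediction, which is what distinguishes glassbox models from post-hoc blackbox explanation methods like LIME or SHAP.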
Datasheets can help dataset creators uncover possible sources of bias in their data or unintentional assumptions they\u2019ve made, help dataset consumers figure out whether a dataset is right for their needs, and help end users gain trust.&nbsp; In collaboration with the&nbsp;<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" href=\"https:\/\/www.partnershiponai.org\/\" target=\"_blank\">Partnership on AI<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, Microsoft researchers are developing best practices for documenting all components of machine learning systems to build more responsible AI.<\/p>\n\n\n\n<h4 id=\"explore-more\" class=\"alignwide has-text-align-wide\">Explore more<\/h4>\n\n\n\n<div class=\"wp-block-columns alignwide is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<div class=\"annotations \" data-bi-aN=\"citation\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 \">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">Book chapter<\/span>\n\t\t\t<a href=\"http:\/\/www.jennwv.com\/papers\/intel-chapter.pdf\" data-bi-cN=\"A Human-Centered Agenda for Intelligible Machine Learning\" data-external-link=\"false\" data-bi-aN=\"citation\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>A Human-Centered Agenda for Intelligible Machine Learning<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n\n\n\n<div class=\"annotations \" data-bi-aN=\"citation\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 \">\n\t\t<div 
class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">Partner project<\/span>\n\t\t\t<a href=\"https:\/\/www.partnershiponai.org\/about-ml\/\" data-bi-cN=\"ABOUT ML\" data-external-link=\"false\" data-bi-aN=\"citation\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>ABOUT ML<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t\t\t<p class=\"annotations__caption text-neutral-400 mt-2\">with Partnership on AI<\/p>\n\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n\n\n\n<div class=\"annotations \" data-bi-aN=\"citation\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 \">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">Podcast<\/span>\n\t\t\t<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/blog\/making-intelligence-intelligible-dr-rich-caruana\/\" data-bi-cN=\"Making Intelligence Intelligible with Dr. Rich Caruana, Co-chair of the Aether Intelligibility and Transparency Working Group\" data-external-link=\"false\" data-bi-aN=\"citation\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>Making Intelligence Intelligible with Dr. 
Rich Caruana, Co-chair of the Aether Intelligibility and Transparency Working Group<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n\n\n\n<div class=\"annotations \" data-bi-aN=\"citation\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 \">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">Podcast<\/span>\n\t\t\t<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/blog\/visualizing-data-big-ideas-dr-steven-drucker\/?ocid=msr_podcast_sdrucker_profile\" data-bi-cN=\"Visualizing Data and Other Big Ideas with Dr. Steven Drucker\" data-external-link=\"false\" data-bi-aN=\"citation\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>Visualizing Data and Other Big Ideas with Dr. Steven Drucker<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n\n\n\n<div class=\"annotations \" data-bi-aN=\"citation\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 \">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">Project<\/span>\n\t\t\t<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/project\/dice\/\" data-bi-cN=\"DiCE\" data-external-link=\"false\" data-bi-aN=\"citation\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>DiCE<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<div 
class=\"annotations \" data-bi-aN=\"citation\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 \">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">Publication<\/span>\n\t\t\t<a href=\"https:\/\/nam06.safelinks.protection.outlook.com\/?url=https%3A%2F%2Farxiv.org%2Fabs%2F1803.09010&data=02%7C01%7Cmacorwin%40microsoft.com%7C491b6fd128a341fddf8508d7d1ba5793%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C637208470569780658&sdata=HhflY51%2F1c6%2F4iv4z1agihCXFbFpwhwkrARC%2Fa5CBFA%3D&reserved=0\" data-bi-cN=\"Datasheets for Datasets\" data-external-link=\"false\" data-bi-aN=\"citation\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>Datasheets for Datasets<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n\n\n\n<div class=\"annotations \" data-bi-aN=\"citation\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 \">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">Publication<\/span>\n\t\t\t<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/gamut-a-design-probe-to-understand-howdata-scientists-understand-machine-learning-models\/\" data-bi-cN=\"Gamut: A Design Probe to Understand How Data Scientists Understand Machine Learning Models\" data-external-link=\"false\" data-bi-aN=\"citation\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>Gamut: A Design Probe to Understand How Data Scientists Understand Machine Learning Models<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" 
aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n\n\n\n<div class=\"annotations \" data-bi-aN=\"citation\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 \">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">Publication<\/span>\n\t\t\t<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/interpreting-interpretability-understanding-data-scientists-use-of-interpretability-tools-for-machine-learning\/\" data-bi-cN=\"Interpreting Interpretability: Understanding Data Scientists\u2019 Use of Interpretability Tools for Machine Learning\" data-external-link=\"false\" data-bi-aN=\"citation\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>Interpreting Interpretability: Understanding Data Scientists\u2019 Use of Interpretability Tools for Machine Learning<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t\t\t<p class=\"annotations__caption text-neutral-400 mt-2\">CHI 2020 Honorable Mention Award<\/p>\n\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n\n\n\n<div class=\"annotations \" data-bi-aN=\"citation\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 \">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">Webinar<\/span>\n\t\t\t<a href=\"https:\/\/note.microsoft.com\/MSR-Webinar-Transparency-Throughout-Machine-Learning-Lifecycle-Registration-On-Demand.html\" data-bi-cN=\"Transparency and Intelligibility Throughout the Machine Learning Lifecycle with Dr. 
Jennifer Wortman Vaughan\" data-external-link=\"false\" data-bi-aN=\"citation\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>Transparency and Intelligibility Throughout the Machine Learning Lifecycle with Dr. Jennifer Wortman Vaughan<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n<\/div>\n<\/div>\n\n\n\n<hr class=\"wp-block-separator has-text-color has-teal-color has-css-opacity has-teal-background-color has-background is-style-dots\"\/>\n\n\n\n<h3 id=\"reliability-and-safety\">Reliability and Safety<\/h3>\n\n\n\n<p>Reliability is a principle that applies to every AI system that functions in the world and is required for creating trustworthy systems. A reliable system functions consistently and as intended, not only in the lab conditions in which it is trained, but also in the open world and when it is under attack by adversaries. When systems function in the physical world or when their shortcomings can pose risks to human lives, problems in system reliability translate to risks in safety.<\/p>\n\n\n\n<p>To understand the way reliability and safety problems occur in AI systems, our researchers have been investigating how&nbsp;<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/identifying-unknown-unknowns-open-world-representations-policies-guided-exploration\/\">blind spots in data sets<\/a>,&nbsp;<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/discovering-blind-spots-in-reinforcement-learning\/\">mismatches between training environments and execution environments<\/a>, distributional shifts, and problems in model specifications can lead to shortcomings in AI systems. 
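Shortcomings of this kind often appear as cohorts of evaluation data with disproportionately high error even when aggregate accuracy looks healthy. The slicing idea can be sketched in a few lines of pure Python (the data and the "lighting" feature below are hypothetical):

```python
# A minimal sketch of cohort-based error analysis: slice evaluation results by a
# feature and compare error rates, so regions of disproportionately high error
# stand out even when aggregate accuracy looks acceptable. The data and the
# "lighting" feature are hypothetical.

from collections import defaultdict

def error_by_cohort(results, cohort_key):
    """Error rate per cohort, given (features, prediction_correct) records."""
    totals, errors = defaultdict(int), defaultdict(int)
    for features, correct in results:
        cohort = features[cohort_key]
        totals[cohort] += 1
        errors[cohort] += 0 if correct else 1
    return {c: errors[c] / totals[c] for c in totals}

eval_results = [
    ({"lighting": "day"}, True), ({"lighting": "day"}, True),
    ({"lighting": "day"}, True), ({"lighting": "day"}, False),
    ({"lighting": "night"}, False), ({"lighting": "night"}, False),
    ({"lighting": "night"}, True), ({"lighting": "night"}, False),
]
rates = error_by_cohort(eval_results, "lighting")
# Aggregate error is 50%, but the night cohort fails three times as often.
```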
Given these varied sources of failure, the key to ensuring system reliability is rigorous evaluation during system development and deployment so that unexpected performance failures can be minimized and system developers can be guided toward continuous improvement. That is why Microsoft researchers have been developing new techniques for&nbsp;<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/towards-accountable-ai-hybrid-human-machine-analyses-for-characterizing-system-failure\/\">model debugging and error analysis<\/a>&nbsp;that can reveal patterns correlated with regions of disproportionately high error in evaluation data. Current efforts in this space include turning research ideas into tools for developers to use.<\/p>\n\n\n\n<p>We recognize that when AI systems are used in applications that are critical for our society, often to support human work, aggregate accuracy is not sufficient to quantify machine performance. Researchers have shown that model updates can lead to issues with&nbsp;<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/blog\/creating-better-ai-partners-a-case-for-backward-compatibility\/\">backward compatibility<\/a>&nbsp;(i.e., new errors occurring as a result of an update), even when overall model accuracy improves, which highlights that model performance should be seen as a multi-faceted concept with human-centered considerations.<\/p>\n\n\n\n<h4 id=\"explore-more\" class=\"alignwide has-text-align-wide\">Explore more<\/h4>\n\n\n\n<div class=\"wp-block-columns alignwide is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<div class=\"annotations \" data-bi-aN=\"citation\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 \">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 
small\">Podcast<\/span>\n\t\t\t<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/blog\/programming-languages-quietly-run-world-dr-ben-zorn\/\" data-bi-cN=\"How Programming Languages Quietly Run the World with Dr. Ben Zorn\" data-external-link=\"false\" data-bi-aN=\"citation\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>How Programming Languages Quietly Run the World with Dr. Ben Zorn<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n\n\n\n<div class=\"annotations \" data-bi-aN=\"citation\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 \">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">Podcast<\/span>\n\t\t\t<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/blog\/life-at-intersection-of-ai-society-ece-kamar\/\" data-bi-cN=\"Life at the Intersection of AI and Society with Dr. Ece Kamar, Co-chair of the Aether Reliability and Safety Group\" data-external-link=\"false\" data-bi-aN=\"citation\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>Life at the Intersection of AI and Society with Dr. 
Ece Kamar, Co-chair of the Aether Reliability and Safety Group<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n\n\n\n<div class=\"annotations \" data-bi-aN=\"citation\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 \">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">Publication<\/span>\n\t\t\t<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/blog\/creating-better-ai-partners-a-case-for-backward-compatibility\/\" data-bi-cN=\"A Case for Backward Compatibility for Human-AI Teams\" data-external-link=\"false\" data-bi-aN=\"citation\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>A Case for Backward Compatibility for Human-AI Teams<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<div class=\"annotations \" data-bi-aN=\"citation\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 \">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">Publication<\/span>\n\t\t\t<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/discovering-blind-spots-in-reinforcement-learning\/\" data-bi-cN=\"Discovering Blind Spots in Reinforcement Learning\" data-external-link=\"false\" data-bi-aN=\"citation\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>Discovering Blind Spots in Reinforcement Learning<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" 
aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n\n\n\n<div class=\"annotations \" data-bi-aN=\"citation\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 \">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">Publication<\/span>\n\t\t\t<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/identifying-unknown-unknowns-open-world-representations-policies-guided-exploration\/\" data-bi-cN=\"Identifying Unknown Unknowns in the Open World: Representations and Policies for Guided Exploration\" data-external-link=\"false\" data-bi-aN=\"citation\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>Identifying Unknown Unknowns in the Open World: Representations and Policies for Guided Exploration<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n\n\n\n<div class=\"annotations \" data-bi-aN=\"citation\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 \">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">Publication<\/span>\n\t\t\t<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/towards-accountable-ai-hybrid-human-machine-analyses-for-characterizing-system-failure\/\" data-bi-cN=\"Towards Accountable AI: Hybrid Human-Machine Analyses for Characterizing System Failure\" data-external-link=\"false\" data-bi-aN=\"citation\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>Towards Accountable AI: Hybrid Human-Machine Analyses for Characterizing System Failure<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" 
aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n<\/div>\n<\/div>\n\n\n\n<hr class=\"wp-block-separator has-text-color has-teal-color has-css-opacity has-teal-background-color has-background is-style-dots\"\/>\n\n\n\n<h3 id=\"human-ai-interaction-and-collaboration\">Human-AI Interaction and Collaboration<\/h3>\n\n\n\n<p>Advances in AI have the potential to enhance human capabilities and improve our lives. At the same time, the complexities and probabilistic nature of AI-based technologies present unique challenges for safe, fair, and responsible human-AI interaction. That\u2019s why Microsoft researchers are taking a human-centered approach to ensure that&nbsp;<em>what we build<\/em>&nbsp;benefits people and society, and that&nbsp;<em>how we build it&nbsp;<\/em>begins and ends with people in mind.<\/p>\n\n\n\n<p>A human-centered approach to AI starts with identifying a human or societal need and then tailor-making AI technologies to support that need. Taking this approach, Microsoft researchers are creating new AI-based technologies to promote human and societal well-being, including technologies to&nbsp;<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/a-dynamic-ai-system-for-extending-the-capabilities-of-blind-people\/\">augment human capabilities<\/a>, support&nbsp;<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/project\/technology-for-mental-health-and-well-being-interventions\/\">mental health<\/a>&nbsp;and&nbsp;<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/optimizing-for-happiness-and-productivity-modeling-opportune-moments-for-transitions-and-breaks-at-work\/\">focus and attention<\/a>, and to&nbsp;<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2017\/11\/fourney_racz_ranade_et_al_cikm2017.pdf\">understand the circulation patterns of fake news<\/a>.<\/p>\n\n\n\n<p>A human-centered approach to technology development also emphasizes the need for people to 
effectively understand and control those technologies to achieve their goals. This is inherently difficult for AI technologies that behave in probabilistic ways, may change over time, and are based on possibly multiple complex and entangled models. Microsoft researchers are therefore developing&nbsp;<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/aka.ms\/aiguidelines\">guidance<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>&nbsp;and exploring new ways to support intuitive, fluid and responsible human interaction with AI including how to help people&nbsp;<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/beyond-accuracy-the-role-of-mental-models-in-human-ai-team-performance\/\">decide when to trust an AI or when to question it<\/a>, to set appropriate expectations about an AI system\u2019s&nbsp;<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/2858036.2858288?CFTOKEN=68095274&CFID=837081887\">capabilities<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>&nbsp;and&nbsp;<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/will-you-accept-an-imperfect-ai-exploring-designs-for-adjusting-end-user-expectations-of-ai-systems\/\">performance<\/a>, to support the safe&nbsp;<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/interrupted-by-my-car-implications-of-interruption-and-interleaving-research-for-automated-vehicles\/\">hand-off between people and AI-based systems<\/a>, and to enable people and AI systems to&nbsp;<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/project\/situated-interaction\/\">interact and collaborate in physical space<\/a>.<\/p>\n\n\n\n<p>Finally, a human-centered approach to responsible AI requires understanding the unique challenges 
practitioners face when building AI systems and then working to address those challenges. Microsoft researchers are therefore studying&nbsp;<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/data-scientists-software-teams-state-art-challenges\/\">data scientists<\/a>,&nbsp;<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/software-engineering-for-machine-learning-a-case-study\/\">machine learning software engineers<\/a>, and&nbsp;<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2019\/02\/Sketching-NLP.pdf\">interdisciplinary AI-UX teams<\/a>, and creating new tools and platforms to support&nbsp;<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/group\/vida\/\">data analysis<\/a>,&nbsp;<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/towards-accountable-ai-hybrid-human-machine-analyses-for-characterizing-system-failure\/\">characterizing and debugging AI failures<\/a>, and developing human-centered AI technologies such as&nbsp;<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/a-multimodal-emotion-sensing-platform-for-building-emotion-aware-applications\/\">emotion-aware<\/a>&nbsp;and&nbsp;<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/project\/platform-situated-intelligence\/\">physically situated<\/a>&nbsp;AI systems.<\/p>\n\n\n\n<h4 id=\"explore-more\" class=\"alignwide has-text-align-wide\">Explore more<\/h4>\n\n\n\n<div class=\"wp-block-columns alignwide is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<div class=\"annotations \" data-bi-aN=\"citation\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 \">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase 
font-weight-semibold text-neutral-300 small\">Tool<\/span>\n\t\t\t<a href=\"https:\/\/github.com\/microsoft\/HAXPlaybook\" data-bi-cN=\"HAX Playbook\" data-external-link=\"false\" data-bi-aN=\"citation\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>HAX Playbook<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n\n\n\n<div class=\"annotations \" data-bi-aN=\"citation\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 \">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">Podcast<\/span>\n\t\t\t<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/blog\/adaptive-systems-machine-learning-and-collaborative-ai-with-dr-besmira-nushi\/\" data-bi-cN=\"Adaptive systems, machine learning and collaborative AI with Dr. Besmira Nushi\" data-external-link=\"false\" data-bi-aN=\"citation\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>Adaptive systems, machine learning and collaborative AI with Dr. 
Besmira Nushi<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n\n\n\n<div class=\"annotations \" data-bi-aN=\"citation\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 \">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">Podcast<\/span>\n\t\t\t<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/blog\/data-science-and-ml-for-human-well-being-with-jina-suh\/\" data-bi-cN=\"Data science and machine learning for human well-being with Jina Suh\" data-external-link=\"false\" data-bi-aN=\"citation\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>Data science and machine learning for human well-being with Jina Suh<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n\n\n\n<div class=\"annotations \" data-bi-aN=\"citation\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 \">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">Podcast<\/span>\n\t\t\t<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/blog\/responsible-ai-with-dr-saleema-amershi\/\" data-bi-cN=\"Responsible AI with Dr. Saleema Amershi, Co-chair of the Aether Human-AI Interaction and Collaboration Working Group\" data-external-link=\"false\" data-bi-aN=\"citation\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>Responsible AI with Dr. 
Saleema Amershi, Co-chair of the Aether Human-AI Interaction and Collaboration Working Group<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<div class=\"annotations \" data-bi-aN=\"citation\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 \">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">Publication\/Guidance<\/span>\n\t\t\t<a href=\"https:\/\/aka.ms\/aiguidelines\" data-bi-cN=\"Guidelines for Human-AI Interaction\" data-external-link=\"false\" data-bi-aN=\"citation\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>Guidelines for Human-AI Interaction<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t\t\t<p class=\"annotations__caption text-neutral-400 mt-2\">CHI 2019 Honorable Mention Award<\/p>\n\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n\n\n\n<div class=\"annotations \" data-bi-aN=\"citation\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 \">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">Publication<\/span>\n\t\t\t<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/managing-messes-in-computational-notebooks\/\" data-bi-cN=\"Managing Messes in Computational Notebooks\" data-external-link=\"false\" data-bi-aN=\"citation\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>Managing Messes in Computational Notebooks<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" 
aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t\t\t<p class=\"annotations__caption text-neutral-400 mt-2\">CHI 2019 Best Paper Award<\/p>\n\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n\n\n\n<div class=\"annotations \" data-bi-aN=\"citation\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 \">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">Publication<\/span>\n\t\t\t<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/planning-for-natural-language-failures-with-the-ai-playbook\/\" data-bi-cN=\"Planning for Natural Language Failures with the AI Playbook\" data-external-link=\"false\" data-bi-aN=\"citation\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>Planning for Natural Language Failures with the AI Playbook<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n\n\n\n<div class=\"annotations \" data-bi-aN=\"citation\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 \">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">Publication<\/span>\n\t\t\t<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/software-engineering-for-machine-learning-a-case-study\/\" data-bi-cN=\"Software Engineering for Machine Learning: A Case Study\" data-external-link=\"false\" data-bi-aN=\"citation\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>Software Engineering for Machine Learning: A Case Study<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t\t\t<p class=\"annotations__caption text-neutral-400 mt-2\">ICSE 
2019 Best Paper Award<\/p>\n\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n<\/div>\n<\/div>\n\n\n\n<hr class=\"wp-block-separator has-text-color has-teal-color has-css-opacity has-teal-background-color has-background is-style-dots\"\/>\n\n\n\n<h3 id=\"private-ai\">Private AI<\/h3>\n\n\n\n<p>Private AI is a Microsoft Research project to enable Privacy Preserving Machine Learning (PPML). The CryptoNets&nbsp;<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/cryptonets-applying-neural-networks-to-encrypted-data-with-high-throughput-and-accuracy\/\">paper<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>&nbsp;from ICML 2016 demonstrated that deep learning on encrypted data is feasible using a technology called Homomorphic Encryption. 
Practical solutions and approaches for Homomorphic Encryption were pioneered by the Cryptography Group at Microsoft Research in this 2011&nbsp;<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2011\/05\/ccs2011_submission_412.pdf\">paper<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>,&nbsp;which showed a wide range of applications for providing security and privacy in the cloud for healthcare, genomics, and finance.<\/p>\n\n\n\n<p>Homomorphic Encryption allows computation to be performed directly on encrypted data, without requiring access to a secret decryption key. The results of the computation are encrypted and can be revealed only by the owner of the key. 
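The principle can be conveyed with a toy additively homomorphic scheme, shown below as a Paillier-style construction with deliberately tiny, insecure parameters. This is purely pedagogical and is not the lattice-based scheme implemented by Microsoft SEAL:

```python
# A toy additively homomorphic scheme (Paillier-style): anyone can combine
# ciphertexts, but only the secret-key holder can decrypt the result.
# Pedagogical only -- the primes are insecurely small and r is fixed here.

p, q = 293, 433                 # toy primes; real keys use primes of 1024+ bits
n = p * q
n2 = n * n
g = n + 1                       # standard choice of generator
lam = (p - 1) * (q - 1)         # Euler's phi(n), serving as the private key
mu = pow(lam, -1, n)            # modular inverse of lam (valid since g = n + 1)

def encrypt(m, r=17):           # r must be random and coprime to n in practice
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    x = pow(c, lam, n2)
    return (((x - 1) // n) * mu) % n   # the "L function" L(x) = (x - 1) / n

a, b = 21, 21
ca, cb = encrypt(a, 17), encrypt(b, 19)
total = decrypt((ca * cb) % n2)  # multiplying ciphertexts adds the plaintexts
```

Decrypting `(ca * cb) % n2` yields `a + b` without either input ever being decrypted individually; fully homomorphic schemes extend this to multiplication as well, which is what makes evaluating deep networks on encrypted data feasible.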
Among other things, this technique can help to preserve individual privacy and control over personal data.<\/p>\n\n\n\n<div class=\"annotations \" data-bi-aN=\"margin-callout\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 annotations__list--right\">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">Tool<\/span>\n\t\t\t<a href=\"https:\/\/github.com\/Microsoft\/SEAL\" data-bi-cN=\"Microsoft SEAL\" data-external-link=\"false\" data-bi-aN=\"margin-callout\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>Microsoft SEAL<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n\n\n\n<p>Microsoft&nbsp;<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/nam06.safelinks.protection.outlook.com\/?url=https%3A%2F%2Fwww.microsoft.com%2Fen-us%2Fresearch%2Fproject%2Fhomomorphic-encryption%2F&data=02%7C01%7Cmacorwin%40microsoft.com%7C064418b306204fb01ea908d7d1b89452%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C637208463040418358&sdata=H%2BOd9VnKWrnE6CToo5Eh6Zaxue1ioNNMKI7nPIyh66c%3D&reserved=0\">researchers&nbsp;<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>have been working to make Homomorphic Encryption simpler and more widely available, particularly through the open-source&nbsp;<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" 
href=\"https:\/\/nam06.safelinks.protection.outlook.com\/?url=https%3A%2F%2Fwww.microsoft.com%2Fen-us%2Fresearch%2Fproject%2Fmicrosoft-seal%2F&data=02%7C01%7Cmacorwin%40microsoft.com%7C064418b306204fb01ea908d7d1b89452%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C637208463040418358&sdata=qRpApDIOa5mri9xoXEqGoMkX%2FiIG%2BK8N99jX%2FcZSq5U%3D&reserved=0\">SEAL&nbsp;<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>library. To learn more, listen to this&nbsp;<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/nam06.safelinks.protection.outlook.com\/?url=https%3A%2F%2Fwww.microsoft.com%2Fen-us%2Fresearch%2Fblog%2Ftales-from-the-cryptography-lab-with-dr-kristin-lauter%2F&data=02%7C01%7Cmacorwin%40microsoft.com%7C064418b306204fb01ea908d7d1b89452%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C637208463040428355&sdata=gdGawSMmCEphfG6HrFRbx3TbeAUUZknbWWX9kDdr5HY%3D&reserved=0\">podcast<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, take this&nbsp;<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/nam06.safelinks.protection.outlook.com\/?url=https%3A%2F%2Fvimeo.com%2F400070576&data=02%7C01%7Cmacorwin%40microsoft.com%7C064418b306204fb01ea908d7d1b89452%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C637208463040428355&sdata=FBMWM8sVZO%2Bor6T6SmAYPfOl4u8Y6sRxaM4naVycY4M%3D&reserved=0\">webinar<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>&nbsp;on Private AI from the National Academies, or this&nbsp;<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" 
href=\"https:\/\/nam06.safelinks.protection.outlook.com\/?url=https%3A%2F%2Fnote.microsoft.com%2FMSR-Microsoft-SEAL-Webinar-Registration-Live.html&data=02%7C01%7Cmacorwin%40microsoft.com%7C064418b306204fb01ea908d7d1b89452%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C637208463040438346&sdata=Dc51lus1u9loIhRCGWMyCgfisOwlvoiYlsILwOmeAME%3D&reserved=0\">webinar<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>&nbsp;from Microsoft Research on SEAL.<\/p>\n\n\n\n<h4 id=\"explore-more\" class=\"alignwide has-text-align-wide\">Explore more<\/h4>\n\n\n\n<div class=\"wp-block-columns alignwide is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<div class=\"annotations \" data-bi-aN=\"citation\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 \">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">Podcast<\/span>\n\t\t\t<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/blog\/tales-from-the-cryptography-lab-with-dr-kristin-lauter\/\" data-bi-cN=\"Tales from the Cryptography Lab with Dr. Kristin Lauter\" data-external-link=\"false\" data-bi-aN=\"citation\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>Tales from the Cryptography Lab with Dr. 
Kristin Lauter<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n\n\n\n<div class=\"annotations \" data-bi-aN=\"citation\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 \">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">Project<\/span>\n\t\t\t<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/project\/homomorphic-encryption\/\" data-bi-cN=\"Homomorphic Encryption\" data-external-link=\"false\" data-bi-aN=\"citation\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>Homomorphic Encryption<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n\n\n\n<div class=\"annotations \" data-bi-aN=\"citation\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 \">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">Publication<\/span>\n\t\t\t<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/can-homomorphic-encryption-be-practical\/\" data-bi-cN=\"Can Homomorphic Encryption be Practical?\" data-external-link=\"false\" data-bi-aN=\"citation\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>Can Homomorphic Encryption be Practical?<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n\n\n\n<div class=\"annotations \" data-bi-aN=\"citation\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 \">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span 
class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">Publication<\/span>\n\t\t\t<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/cryptonets-applying-neural-networks-to-encrypted-data-with-high-throughput-and-accuracy\/\" data-bi-cN=\"CryptoNets: Applying Neural Networks to Encrypted Data with High Throughput and Accuracy\" data-external-link=\"false\" data-bi-aN=\"citation\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>CryptoNets: Applying Neural Networks to Encrypted Data with High Throughput and Accuracy<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<div class=\"annotations \" data-bi-aN=\"citation\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 \">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">Talk<\/span>\n\t\t\t<a href=\"https:\/\/www.youtube.com\/watch?v=1NwpCW-R3l4\" data-bi-cN=\"Private AI: Machine Learning on Encrypted Data\" data-external-link=\"false\" data-bi-aN=\"citation\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>Private AI: Machine Learning on Encrypted Data<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n\n\n\n<div class=\"annotations \" data-bi-aN=\"citation\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 \">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">Webinar<\/span>\n\t\t\t<a 
href=\"https:\/\/note.microsoft.com\/MSR-Microsoft-SEAL-Webinar-Registration-Live.html\" data-bi-cN=\"Homomorphic Encryption with Microsoft SEAL\" data-external-link=\"false\" data-bi-aN=\"citation\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>Homomorphic Encryption with Microsoft SEAL<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n\n\n\n<div class=\"annotations \" data-bi-aN=\"citation\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 \">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">Webinar<\/span>\n\t\t\t<a href=\"https:\/\/vimeo.com\/400070576\" data-bi-cN=\"Keeping the Internet Safe\" data-external-link=\"false\" data-bi-aN=\"citation\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>Keeping the Internet Safe<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t\t\t<p class=\"annotations__caption text-neutral-400 mt-2\">National Academy of Sciences Webinar<\/p>\n\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n<\/div>\n<\/div>\n\n\n\n<hr class=\"wp-block-separator has-text-color has-teal-color has-css-opacity has-teal-background-color has-background is-style-dots\"\/>\n\n\n\n<h3 id=\"partnerships-and-support-for-student-work\">Partnerships and Support for Student Work<\/h3>\n\n\n\n<h4 id=\"working-with-academic-and-commercial-partners\">Working with Academic and Commercial Partners<\/h4>\n\n\n\n<p>Microsoft Research supports and works closely with&nbsp;<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/datasociety.net\/\" target=\"_blank\" rel=\"noopener noreferrer\">Data & 
Society<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, which is committed to identifying thorny issues at the intersection of technology and society, providing and encouraging research that can ground informed, evidence-based public debates, and building a network of researchers and practitioners who can anticipate issues and offer insight and direction. Microsoft Research also supports the&nbsp;<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/ainowinstitute.org\/\" target=\"_blank\" rel=\"noopener noreferrer\">AI Now Institute<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>&nbsp;at New York University, an interdisciplinary research center dedicated to understanding the social implications of artificial intelligence.<\/p>\n\n\n\n<p>Microsoft is also a member of the&nbsp;<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/www.partnershiponai.org\/\" target=\"_blank\" rel=\"noopener noreferrer\">Partnership on AI<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>&nbsp;(PAI), a multi-stakeholder organization that brings together academics, researchers, civil society organizations, companies building and utilizing AI technology, and other groups working to better understand AI\u2019s impacts. 
Microsoft researchers are contributing to a number of PAI projects, including the ABOUT ML work referenced above.<\/p>\n\n\n\n<h4 id=\"supporting-student-work-on-responsible-ai\">Supporting Student Work on Responsible AI<\/h4>\n\n\n\n<div class=\"annotations \" data-bi-aN=\"margin-callout\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 annotations__list--right\">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">Blog<\/span>\n\t\t\t<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/blog\/2020-ada-lovelace-and-phd-fellowships-help-recipients-achieve-broad-research-and-educational-goals\/\" data-bi-cN=\"2020 Ada Lovelace and PhD Fellowships help recipients achieve broad research and educational goals\" data-external-link=\"false\" data-bi-aN=\"margin-callout\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>2020 Ada Lovelace and PhD Fellowships help recipients achieve broad research and educational goals<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n\n\n\n<p>The&nbsp;<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/academic-program\/ada-lovelace-fellowship\/\">Ada Lovelace Fellowship<\/a>&nbsp;and&nbsp;<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/academic-program\/phd-fellowship\/\">PhD Fellowship<\/a>&nbsp;continue a Microsoft Research tradition of providing promising doctoral students in North America with funding to support their studies and research. 
Many of these fellowships\u2019 2020 recipients are doing work to advance the responsible and beneficial use of technology, including enhancing fairness in natural language processing, reducing bias, promoting social equality, and improving the mental, emotional, and social health of people with dementia.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-text-color has-teal-color has-css-opacity has-teal-background-color has-background is-style-dots\"\/>\n\n\n\n<h3 id=\"cited-publications\" class=\"alignwide has-text-align-wide\">Cited publications<\/h3>\n\n\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<div class=\"annotations \" data-bi-aN=\"citation\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 \">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">Publication<\/span>\n\t\t\t<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/a-reductions-approach-to-fair-classification\/\" data-bi-cN=\"A Reductions Approach to Fair Classification\" data-external-link=\"false\" data-bi-aN=\"citation\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>A Reductions Approach to Fair Classification<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n\n\n\n<div class=\"annotations \" data-bi-aN=\"citation\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 \">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">Publication<\/span>\n\t\t\t<a 
href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/co-designing-checklists-to-understand-organizational-challenges-and-opportunities-around-fairness-in-ai\/\" data-bi-cN=\"Co-Designing Checklists to Understand Organizational Challenges and Opportunities around Fairness in AI\" data-external-link=\"false\" data-bi-aN=\"citation\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>Co-Designing Checklists to Understand Organizational Challenges and Opportunities around Fairness in AI<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t\t\t<p class=\"annotations__caption text-neutral-400 mt-2\">CHI 2020 Best Paper Award<\/p>\n\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n\n\n\n<div class=\"annotations \" data-bi-aN=\"citation\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 \">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">Publication<\/span>\n\t\t\t<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/fair-regression-quantitative-definitions-and-reduction-based-algorithms\/\" data-bi-cN=\"Fair Regression: Quantitative Definitions and Reduction-based Algorithms\" data-external-link=\"false\" data-bi-aN=\"citation\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>Fair Regression: Quantitative Definitions and Reduction-based Algorithms<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n\n\n\n<div class=\"annotations \" data-bi-aN=\"citation\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 \">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase 
font-weight-semibold text-neutral-300 small\">Publication<\/span>\n\t\t\t<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/improving-fairness-in-machine-learning-systems-what-do-industry-practitioners-need\/\" data-bi-cN=\"Improving fairness in machine learning systems: What do industry practitioners need?\" data-external-link=\"false\" data-bi-aN=\"citation\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>Improving fairness in machine learning systems: What do industry practitioners need?<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n\n\n\n<p><\/p>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<div class=\"annotations \" data-bi-aN=\"citation\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 \">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">Publication<\/span>\n\t\t\t<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/quantifying-reducing-stereotypes-word-embeddings\/\" data-bi-cN=\"Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings\" data-external-link=\"false\" data-bi-aN=\"citation\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>Man is to Computer Programmer as Woman is to Homemaker? 
Debiasing Word Embeddings<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n\n\n\n<div class=\"annotations \" data-bi-aN=\"citation\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 \">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">Publication<\/span>\n\t\t\t<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/stretching-human-laws-to-apply-to-machines-the-dangers-of-a-colorblind-computer\/\" data-bi-cN=\"Stretching Human Laws to Apply to Machines: The Dangers of a \u2018Colorblind\u2019 Computer\" data-external-link=\"false\" data-bi-aN=\"citation\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>Stretching Human Laws to Apply to Machines: The Dangers of a \u2018Colorblind\u2019 Computer<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n\n\n\n<div class=\"annotations \" data-bi-aN=\"citation\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 \">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">Publication<\/span>\n\t\t\t<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/toward-fairness-in-ai-for-people-with-disabilities-a-research-roadmap\/\" data-bi-cN=\"Toward Fairness in AI for People with Disabilities: A Research Roadmap\" data-external-link=\"false\" data-bi-aN=\"citation\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>Toward Fairness in AI for People with Disabilities: A Research Roadmap<\/span>&nbsp;<span class=\"glyph-in-link glyph-append 
glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n\n\n\n<div class=\"annotations \" data-bi-aN=\"citation\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 \">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">Publication<\/span>\n\t\t\t<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/whats-in-a-name-reducing-bias-in-bios-without-access-to-protected-attributes\/\" data-bi-cN=\"What\u2019s in a Name? Reducing Bias in Bios without Access to Protected Attributes\" data-external-link=\"false\" data-bi-aN=\"citation\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>What\u2019s in a Name? Reducing Bias in Bios without Access to Protected Attributes<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t\t\t<p class=\"annotations__caption text-neutral-400 mt-2\">NAACL 2019 Best Thematic Paper Award<\/p>\n\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n<\/div>\n<\/div>\n\n\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<div class=\"annotations \" data-bi-aN=\"citation\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 \">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">Publication<\/span>\n\t\t\t<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/accurate-intelligible-models-pairwise-interactions\/\" data-bi-cN=\"Accurate Intelligible Models with Pairwise Interactions\" data-external-link=\"false\" data-bi-aN=\"citation\" 
data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>Accurate Intelligible Models with Pairwise Interactions<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n\n\n\n<div class=\"annotations \" data-bi-aN=\"citation\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 \">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">Publication<\/span>\n\t\t\t<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/datasheets-for-datasets\/\" data-bi-cN=\"Datasheets for Datasets\" data-external-link=\"false\" data-bi-aN=\"citation\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>Datasheets for Datasets<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n\n\n\n<div class=\"annotations \" data-bi-aN=\"citation\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 \">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">Publication<\/span>\n\t\t\t<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/explaining-machine-learning-classifiers-through-diverse-counterfactual-examples\/\" data-bi-cN=\"Explaining Machine Learning Classifiers through Diverse Counterfactual Examples\" data-external-link=\"false\" data-bi-aN=\"citation\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>Explaining Machine Learning Classifiers through Diverse Counterfactual Examples<\/span>&nbsp;<span class=\"glyph-in-link glyph-append 
glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n\n\n\n<div class=\"annotations \" data-bi-aN=\"citation\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 \">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">Publication<\/span>\n\t\t\t<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/gamut-a-design-probe-to-understand-howdata-scientists-understand-machine-learning-models\/\" data-bi-cN=\"Gamut: A Design Probe to Understand How Data Scientists Understand Machine Learning Models\" data-external-link=\"false\" data-bi-aN=\"citation\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>Gamut: A Design Probe to Understand How Data Scientists Understand Machine Learning Models<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n\n\n\n<p><\/p>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<div class=\"annotations \" data-bi-aN=\"citation\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 \">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">Publication<\/span>\n\t\t\t<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/interpreting-interpretability-understanding-data-scientists-use-of-interpretability-tools-for-machine-learning\/\" data-bi-cN=\"Interpreting Interpretability: Understanding Data Scientists\u2019 Use of Interpretability Tools for Machine Learning\" data-external-link=\"false\" data-bi-aN=\"citation\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold 
text-decoration-none\"><span>Interpreting Interpretability: Understanding Data Scientists\u2019 Use of Interpretability Tools for Machine Learning<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t\t\t<p class=\"annotations__caption text-neutral-400 mt-2\">CHI 2020 Honorable Mention Award<\/p>\n\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n\n\n\n<div class=\"annotations \" data-bi-aN=\"citation\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 \">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">Publication<\/span>\n\t\t\t<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/interpretml-a-unified-framework-for-machine-learning-interpretability\/\" data-bi-cN=\"InterpretML: A Unified Framework for Machine Learning Interpretability\" data-external-link=\"false\" data-bi-aN=\"citation\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>InterpretML: A Unified Framework for Machine Learning Interpretability<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n\n\n\n<div class=\"annotations \" data-bi-aN=\"citation\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 \">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">Publication<\/span>\n\t\t\t<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/manipulating-and-measuring-model-interpretability\/\" data-bi-cN=\"Manipulating and Measuring Model Interpretability\" data-external-link=\"false\" data-bi-aN=\"citation\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold 
text-decoration-none\"><span>Manipulating and Measuring Model Interpretability<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n\n\n\n<div class=\"annotations \" data-bi-aN=\"citation\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 \">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">Publication<\/span>\n\t\t\t<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/understanding-the-effect-of-accuracy-on-trust-in-machine-learning-models\/\" data-bi-cN=\"Understanding the Effect of Accuracy on Trust in Machine Learning Models\" data-external-link=\"false\" data-bi-aN=\"citation\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>Understanding the Effect of Accuracy on Trust in Machine Learning Models<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t\t\t<p class=\"annotations__caption text-neutral-400 mt-2\">CHI 2019 Honorable Mention Award<\/p>\n\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n<\/div>\n<\/div>\n\n\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<div class=\"annotations \" data-bi-aN=\"citation\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 \">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">Publication<\/span>\n\t\t\t<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/blog\/creating-better-ai-partners-a-case-for-backward-compatibility\/\" data-bi-cN=\"A Case for Backward Compatibility for Human-AI Teams\" 
data-external-link=\"false\" data-bi-aN=\"citation\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>A Case for Backward Compatibility for Human-AI Teams<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n\n\n\n<div class=\"annotations \" data-bi-aN=\"citation\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 \">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">Publication<\/span>\n\t\t\t<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/discovering-blind-spots-in-reinforcement-learning\/\" data-bi-cN=\"Discovering Blind Spots in Reinforcement Learning\" data-external-link=\"false\" data-bi-aN=\"citation\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>Discovering Blind Spots in Reinforcement Learning<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n\n\n\n<p><\/p>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<div class=\"annotations \" data-bi-aN=\"citation\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 \">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">Publication<\/span>\n\t\t\t<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/identifying-unknown-unknowns-open-world-representations-policies-guided-exploration\/\" data-bi-cN=\"Identifying Unknown Unknowns in the Open World: Representations and Policies for Guided Exploration\" data-external-link=\"false\" data-bi-aN=\"citation\" 
data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>Identifying Unknown Unknowns in the Open World: Representations and Policies for Guided Exploration<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n\n\n\n<div class=\"annotations \" data-bi-aN=\"citation\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 \">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">Publication<\/span>\n\t\t\t<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/towards-accountable-ai-hybrid-human-machine-analyses-for-characterizing-system-failure\/\" data-bi-cN=\"Towards Accountable AI: Hybrid Human-Machine Analyses for Characterizing System Failure\" data-external-link=\"false\" data-bi-aN=\"citation\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>Towards Accountable AI: Hybrid Human-Machine Analyses for Characterizing System Failure<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n<\/div>\n<\/div>\n\n\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<div class=\"annotations \" data-bi-aN=\"citation\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 \">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">Publication<\/span>\n\t\t\t<a href=\"https:\/\/aka.ms\/aiguidelines\" data-bi-cN=\"Guidelines for Human-AI Interaction\" 
data-external-link=\"false\" data-bi-aN=\"citation\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>Guidelines for Human-AI Interaction<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t\t\t<p class=\"annotations__caption text-neutral-400 mt-2\">CHI 2019 Honorable Mention Award<\/p>\n\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n\n\n\n<div class=\"annotations \" data-bi-aN=\"citation\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 \">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">Publication<\/span>\n\t\t\t<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/managing-messes-in-computational-notebooks\/\" data-bi-cN=\"Managing Messes in Computational Notebooks\" data-external-link=\"false\" data-bi-aN=\"citation\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>Managing Messes in Computational Notebooks<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t\t\t<p class=\"annotations__caption text-neutral-400 mt-2\">CHI 2019 Best Paper Award<\/p>\n\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n\n\n\n<p><\/p>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<div class=\"annotations \" data-bi-aN=\"citation\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 \">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">Publication<\/span>\n\t\t\t<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2019\/02\/Sketching-NLP.pdf\" data-bi-cN=\"Sketching NLP: A Case Study of Exploring the 
Right Things to Design with Language Intelligence\" data-external-link=\"false\" data-bi-aN=\"citation\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>Sketching NLP: A Case Study of Exploring the Right Things to Design with Language Intelligence<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t\t\t<p class=\"annotations__caption text-neutral-400 mt-2\">CHI 2019 Honorable Mention Award<\/p>\n\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n\n\n\n<div class=\"annotations \" data-bi-aN=\"citation\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 \">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">Publication<\/span>\n\t\t\t<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/software-engineering-for-machine-learning-a-case-study\/\" data-bi-cN=\"Software Engineering for Machine Learning: A Case Study\" data-external-link=\"false\" data-bi-aN=\"citation\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>Software Engineering for Machine Learning: A Case Study<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t\t\t<p class=\"annotations__caption text-neutral-400 mt-2\">ICSE 2019 Best Paper Award<\/p>\n\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n<\/div>\n<\/div>\n\n\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<div class=\"annotations \" data-bi-aN=\"citation\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 \">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span 
class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">Publication<\/span>\n\t\t\t<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/can-homomorphic-encryption-be-practical\/\" data-bi-cN=\"Can Homomorphic Encryption be Practical?\" data-external-link=\"false\" data-bi-aN=\"citation\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>Can Homomorphic Encryption be Practical?<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<div class=\"annotations \" data-bi-aN=\"citation\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 \">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">Publication<\/span>\n\t\t\t<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/cryptonets-applying-neural-networks-to-encrypted-data-with-high-throughput-and-accuracy\/\" data-bi-cN=\"CryptoNets: Applying Neural Networks to Encrypted Data with High Throughput and Accuracy\" data-external-link=\"false\" data-bi-aN=\"citation\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>CryptoNets: Applying Neural Networks to Encrypted Data with High Throughput and Accuracy<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n<\/div>\n<\/div>\n\n\n","protected":false},"excerpt":{"rendered":"<p>Microsoft is committed to the advancement and use of AI grounded in principles that put people first and benefit society. 
We are putting these principles into practice throughout the company by embracing diverse perspectives, fostering continuous learning, and proactively responding as AI technology evolves.<\/p>\n","protected":false},"author":38127,"featured_media":650286,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"msr-url-field":"","msr-podcast-episode":"","msrModifiedDate":"","msrModifiedDateEnabled":false,"ep_exclude_from_search":false,"_classifai_error":"","msr-author-ordering":[],"msr_hide_image_in_river":0,"footnotes":""},"categories":[194467,241770,243747,194455,243753,244017,243750],"tags":[243975],"research-area":[13556,13554,13558],"msr-region":[],"msr-event-type":[],"msr-locale":[268875],"msr-post-option":[],"msr-impact-theme":[],"msr-promo-type":[244002],"msr-podcast-series":[],"class_list":["post-680358","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-artifical-intelligence","category-artificial-intelligence","category-human-computing-interaction","category-machine-learning","category-privacy-and-cryptography","category-research-collection","category-security-2","tag-responsible-ai","msr-research-area-artificial-intelligence","msr-research-area-human-computer-interaction","msr-research-area-security-privacy-cryptography","msr-locale-en_us","msr-promo-type-academic-program"],"msr_event_details":{"start":"","end":"","location":""},"podcast_url":"","podcast_episode":"","msr_research_lab":[],"msr_impact_theme":[],"related-publications":[],"related-downloads":[],"related-videos":[],"related-academic-programs":[214465],"related-groups":[144633,144672,283244,372368,396845,550641,578422,781564],"related-projects":[813025],"related-events":[],"related-researchers":[],"msr_type":"Post","featured_image_thumbnail":"<img width=\"655\" height=\"280\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/04\/Collection_ResponsibleAI.jpg\" 
class=\"img-object-cover\" alt=\"Group of men and women walking down a lab hallway\" decoding=\"async\" loading=\"lazy\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/04\/Collection_ResponsibleAI.jpg 655w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/04\/Collection_ResponsibleAI-300x128.jpg 300w\" sizes=\"auto, (max-width: 655px) 100vw, 655px\" \/>","byline":"","formattedDate":"April 13, 2020","formattedExcerpt":"Microsoft is committed to the advancement and use of AI grounded in principles that put people first and benefit society. We are putting these principles into practice throughout the company by embracing diverse perspectives, fostering continuous learning, and proactively responding as AI technology evolves.","locale":{"slug":"en_us","name":"English","native":"","english":"English"},"_links":{"self":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/posts\/680358","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/users\/38127"}],"replies":[{"embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/comments?post=680358"}],"version-history":[{"count":18,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/posts\/680358\/revisions"}],"predecessor-version":[{"id":918603,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/posts\/680358\/revisions\/918603"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/media\/650286"}],"wp:attachment":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/media?parent=680358"}],"wp:term":[{"taxonomy":"category","embeddable":tru
e,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/categories?post=680358"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/tags?post=680358"},{"taxonomy":"msr-research-area","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/research-area?post=680358"},{"taxonomy":"msr-region","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-region?post=680358"},{"taxonomy":"msr-event-type","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event-type?post=680358"},{"taxonomy":"msr-locale","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-locale?post=680358"},{"taxonomy":"msr-post-option","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-post-option?post=680358"},{"taxonomy":"msr-impact-theme","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-impact-theme?post=680358"},{"taxonomy":"msr-promo-type","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-promo-type?post=680358"},{"taxonomy":"msr-podcast-series","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-podcast-series?post=680358"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}