FATE: Fairness, Accountability, Transparency, and Ethics in AI

Overview

We study the complex social implications of AI, machine learning, data science, large-scale experimentation, and increasing automation. Our aim is to develop computational techniques that are both innovative and ethical, while drawing on the deeper context surrounding these issues from sociology, history, and science and technology studies.

The types of questions we are exploring are:

  • How can AI assist users and offer enhanced insights without exposing them to discrimination in health, housing, law enforcement, and employment?
  • How can we balance the need for efficiency and exploration with fairness and sensitivity to users?
  • As our world moves toward relying on intelligent agents, how can we create a system that individuals and communities can trust?

We are committed to working closely with AI research institutions, including those highlighted below, to address the need for transparency, accountability, and fairness in AI and machine learning systems. We also publish our research across a variety of disciplines, including machine learning, information retrieval, systems, sociology, political science, and science and technology studies.

Data & Society
Data & Society is an independent, nonprofit 501(c)(3) research institute committed to identifying thorny issues at the intersection of technology and society, providing and encouraging research that can ground informed, evidence-based public debates, and building a network of researchers and practitioners who can anticipate issues and offer insight and direction.

AI Now Institute
The AI Now Institute at New York University is an interdisciplinary research center dedicated to understanding the social implications of artificial intelligence.

Partnership on AI
Partnership on AI studies and formulates best practices on AI technologies, advances the public’s understanding of AI, and serves as an open platform for discussion and engagement about AI and its influences on people and society.

Social Media Collective
The Social Media Collective (SMC) is a network of social science and humanistic researchers, part of the Microsoft Research labs in New England and New York. It includes full-time researchers, postdocs, interns, and visitors.

Blogs & webinars

Minimizing gaps in information access on social networks while maximizing the spread

A lot of important information, including job openings and other kinds of advertising, is transmitted by word of mouth in online settings. For example, social networks like LinkedIn are increasingly used to spread information about job opportunities, which can greatly affect people’s career development…

Microsoft Research Blog | May 2019

Machine Learning and Fairness Webinar

In this webinar led by Microsoft researchers Jenn Wortman Vaughan and Hanna Wallach, 15-year veterans of the machine learning field, you’ll learn how to make detecting and mitigating biases a first-order priority in your development and deployment of ML systems…

Microsoft Research Webinar Series | April 2019

Keeping an Eye on AI with Dr. Kate Crawford

Episode 14, February 28, 2018 – Today, Dr. Crawford talks about both the promises and the problems of AI; why, when it comes to data, bigger isn’t necessarily better; and how, even in an era of increasingly complex technological advances, we can adopt AI design principles that empower people to shape their technical tools in the ways they’d like to use them most.

Microsoft Research Podcast | February 2018

Incubations

Datasheets for datasets

Goal: To increase transparency and accountability within the machine learning community by mitigating unwanted biases in machine learning systems, facilitating greater reproducibility of machine learning results, and helping researchers and practitioners select more appropriate datasets for their chosen tasks.

Approach: We are documenting the motivation, composition, collection process, recommended uses, and so on for datasets by piloting datasheets with teams across Microsoft, and by partnering with the Partnership on AI (PAI) to produce an industry best-practices paper with contributions from all partners. This effort is called “Annotation and Benchmarking on Understanding and Transparency of Machine Learning Lifecycles” (ABOUT ML). When organizations implement ABOUT ML best practices, the output will include documentation for use by internal ML decision-makers, external documentation similar to nutrition labels on food or drug labels on pharmaceuticals, and a record of the process the organization followed to improve the responsible development and fielding of AI technologies.

Related: Datasheets for Datasets
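
To make the documentation fields above concrete, here is a minimal sketch of what a machine-readable datasheet record might look like. The field names follow the sections listed above (motivation, composition, collection process, recommended uses), but the class itself is illustrative and not an artifact of this project.

```python
# Hypothetical machine-readable datasheet skeleton; illustrative only.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Datasheet:
    name: str
    motivation: str           # why the dataset was created, and by whom
    composition: str          # what the instances are and what they contain
    collection_process: str   # how the data was gathered and sampled
    recommended_uses: List[str] = field(default_factory=list)
    known_limitations: List[str] = field(default_factory=list)

# Toy example with entirely made-up values.
sheet = Datasheet(
    name="toy-resume-corpus",
    motivation="Study ranking bias in hiring pipelines.",
    composition="10k anonymized resume snippets with job-field labels.",
    collection_process="Opt-in donations collected during a pilot study.",
    recommended_uses=["fairness audits", "benchmarking rankers"],
    known_limitations=["English-only", "self-selected donors"],
)
print(sheet.name, sheet.recommended_uses)
```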



Fairness in ML Systems

Goal: To base the design and development of algorithmic tools for assessing and mitigating potential unfairness in machine learning (ML) systems on real-world needs.

Approach: Fairlearn is an open-source toolkit for measuring and mitigating biases in systems that make decisions about allocating resources, opportunities, or information.

Through 35 semi-structured interviews and an anonymous survey of 267 ML practitioners, we conducted the first systematic investigation of commercial product teams’ challenges and needs for support in developing fairer ML systems. We identify areas of alignment and disconnect between the challenges faced by industry practitioners and solutions proposed in the fair ML research literature. Based on these findings, we highlight directions for future ML and HCI research that will better address industry practitioners’ needs.
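
As a minimal sketch of what assessing such a system can look like in practice, the snippet below uses Fairlearn’s MetricFrame to disaggregate standard metrics by a sensitive feature. The toy dataset and column names are hypothetical, and the API shown reflects current Fairlearn releases rather than anything specific to the study above.

```python
# Minimal Fairlearn assessment sketch; the data and columns are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from fairlearn.metrics import MetricFrame, selection_rate

# Hypothetical tabular data with a sensitive attribute column "group".
df = pd.DataFrame({
    "feature_1": [0.2, 0.9, 0.4, 0.7, 0.1, 0.8],
    "feature_2": [1, 0, 1, 1, 0, 0],
    "group":     ["a", "b", "a", "b", "a", "b"],
    "label":     [0, 1, 0, 1, 0, 1],
})
X, y, A = df[["feature_1", "feature_2"]], df["label"], df["group"]

clf = LogisticRegression().fit(X, y)

# Disaggregate accuracy and selection rate by sensitive group.
frame = MetricFrame(
    metrics={"accuracy": accuracy_score, "selection_rate": selection_rate},
    y_true=y,
    y_pred=clf.predict(X),
    sensitive_features=A,
)
print(frame.by_group)      # per-group metric values
print(frame.difference())  # largest between-group gap for each metric
```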

Related:


Problematic Autocompletes

Goal: To understand what factors lead to problematic auto-suggestions (through user studies and experiments, audits, and descriptive studies) and define offline metrics/processes to ensure suggestions are not biased or otherwise problematic.

Approach: We have developed a rich taxonomy of problematic suggestions spanning multiple dimensions (content, targets, structures, harms, and implications), which is influencing various product teams to take steps to mitigate problematic autocompletes.
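
As a rough illustration of what such an offline metric could look like, the sketch below computes the fraction of query prefixes whose top-k suggestions trip a flag function. The suggester, the flag function, and the metric itself are hypothetical stand-ins, far simpler than the taxonomy this project developed.

```python
# Hypothetical offline autocomplete audit; the flag function stands in for a
# much richer taxonomy-based classifier.
from typing import Callable, Iterable, List

def problematic_rate(
    prefixes: Iterable[str],
    suggest: Callable[[str], List[str]],    # system under audit
    is_problematic: Callable[[str], bool],  # stand-in for a real classifier
    k: int = 5,
) -> float:
    """Fraction of prefixes whose top-k suggestions include a flagged item."""
    prefixes = list(prefixes)
    flagged = sum(any(is_problematic(s) for s in suggest(p)[:k]) for p in prefixes)
    return flagged / len(prefixes)

# Toy usage with a canned suggester and a substring-based flag.
canned = {"why are": ["why are cats cute", "why are they like this"]}
rate = problematic_rate(
    prefixes=canned.keys(),
    suggest=lambda p: canned.get(p, []),
    is_problematic=lambda s: "like this" in s,  # hypothetical flag
)
print(f"{rate:.0%} of prefixes surfaced a flagged suggestion")
```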


Algorithmic Greenlining

Goal: To introduce our algorithmic framework to domain experts, who can determine appropriate metrics for their specific applications, while also providing provable guarantees for estimating the diversity of any criterion from historical data. This makes it possible to use diversity estimates in place of true diversity scores.

Approach: We call our approach algorithmic greenlining, as it is the antithesis of redlining. Greenlining is the conscious effort to promote minority interests and representation by using unbiased criteria in day-to-day settings. In this work, we provide individuals with computational tools to help them reduce the intentional or unintentional bias encoded in their criteria.
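
As a rough illustration only (not the algorithm from the paper), the sketch below scores a selection criterion’s diversity on historical data as the entropy of group shares among the candidates the criterion selects; relaxing an overly strict criterion raises the score.

```python
# Hypothetical diversity estimate for a selection criterion; illustrative only.
import math
from collections import Counter
from typing import Callable, Iterable, Tuple

def diversity_score(
    candidates: Iterable[Tuple[dict, str]],  # (attributes, group) pairs
    criterion: Callable[[dict], bool],       # selection rule under study
) -> float:
    """Shannon entropy of group proportions among selected candidates."""
    selected = [group for attrs, group in candidates if criterion(attrs)]
    if not selected:
        return 0.0
    counts, total = Counter(selected), len(selected)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Compare two hypothetical hiring criteria on made-up historical data.
history = [
    ({"years": 6, "degree": "cs"}, "group_a"),
    ({"years": 2, "degree": "cs"}, "group_b"),
    ({"years": 7, "degree": "ee"}, "group_b"),
    ({"years": 3, "degree": "cs"}, "group_a"),
]
strict = lambda a: a["years"] >= 5 and a["degree"] == "cs"  # selects one group
relaxed = lambda a: a["years"] >= 2                         # selects both groups
print(diversity_score(history, strict), diversity_score(history, relaxed))
```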

Related: Algorithmic greenlining: An approach to increase diversity