FATE: Fairness, Accountability, Transparency, and Ethics in AI

FATE and friends (group photo)

We study the complex social implications of AI, machine learning, data science, large-scale experimentation, and increasing automation. Our aim is to facilitate computational techniques that are both innovative and ethical, while drawing on the deeper context surrounding these issues from sociology, history, and science and technology studies.

Among the questions we are exploring:

  • How can AI assist users and offer enhanced insights, while avoiding exposing them to discrimination in health, housing, law enforcement, and employment?
  • How can we balance the need for efficiency and exploration with fairness and sensitivity to users?
  • As our world moves toward relying on intelligent agents, how can we create a system that individuals and communities can trust?

We are committed to working closely with AI research institutions—including those highlighted below—to address the need for transparency, accountability, and fairness in AI and machine learning systems. We also publish our research in a variety of disciplines, including machine learning, information retrieval, systems, sociology, political science, and science and technology studies.

Data & Society
Data & Society is an independent, nonprofit 501(c)(3) research institute committed to identifying thorny issues at the intersection of technology and society, providing and encouraging research that can ground informed, evidence-based public debates, and building a network of researchers and practitioners who can anticipate issues and offer insight and direction.

AI Now Institute
The AI Now Institute at New York University is an interdisciplinary research center dedicated to understanding the social implications of artificial intelligence.

Partnership on AI
Partnership on AI studies and formulates best practices on AI technologies, advances the public’s understanding of AI, and serves as an open platform for discussion and engagement about AI and its influences on people and society.

Social Media Collective
The Social Media Collective (SMC) is a network of social science and humanistic researchers, part of the Microsoft Research labs in New England and New York. It includes full-time researchers, postdocs, interns, and visitors.


Blogs & webinars

Minimizing gaps in information access on social networks while maximizing the spread

Lots of important information, including job openings and other kinds of advertising, is often transmitted by word of mouth in online settings. For example, social networks like LinkedIn are increasingly used as a way of spreading information about job opportunities, which can greatly affect people’s career development…

Microsoft Research Blog | May 2019

Machine Learning and Fairness Webinar

In this webinar led by Microsoft researchers Jenn Wortman Vaughan and Hanna Wallach, 15-year veterans of the machine learning field, you’ll learn how to make detecting and mitigating biases a first-order priority in your development and deployment of ML systems…

Microsoft Research Webinar Series | April 2019

Machine Learning for fair decisions

Over the past decade, machine learning systems have begun to play a key role in many high-stakes decisions: Who is interviewed for a job? Who is approved for a bank loan? Who receives parole? Who is admitted to a school?

Microsoft Research Blog | July 2018

Keeping an Eye on AI with Dr. Kate Crawford

Episode 14, February 28, 2018 – Today, Dr. Crawford talks about both the promises and the problems of AI; why, when it comes to data, bigger isn’t necessarily better; and how, even in an era of increasingly complex technological advances, we can adopt AI design principles that empower people to shape their technical tools in the ways they’d like to use them most.

Microsoft Research Podcast | February 2018

Addressing Fairness, Accountability, and Transparency in Machine Learning

Machine learning and big data are certainly hot topics that emerged within the tech community in 2014. But what are the real-world implications for how we interpret what happens inside the data centers that churn through mountains of seemingly endless data?

Microsoft Research Blog | December 2014



Datasheets for datasets

Goal: To increase transparency and accountability within the machine learning community by mitigating unwanted biases in machine learning systems, facilitating greater reproducibility of machine learning results, and helping researchers and practitioners select more appropriate datasets for their chosen tasks.

Approach: We are documenting the motivation, composition, collection process, recommended uses, and so on for datasets by piloting datasheets with teams across Microsoft, and by partnering with the Partnership on AI (PAI) to produce an industry best-practices paper with contributions from all partners. This effort is called “Annotation and Benchmarking on Understanding and Transparency of Machine Learning Lifecycles” (ABOUT ML). When organizations implement ABOUT ML best practices, the output will include documentation for use by internal ML decision makers, external documentation similar to nutrition labels on food or drug labels on pharmaceuticals, and a record of the process the organization followed to improve the responsible development and fielding of AI technologies.

Related: Datasheets for Datasets
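To make the idea concrete, here is a minimal sketch of what a machine-readable datasheet record might look like. The field names loosely follow the documentation categories described above (motivation, composition, collection process, recommended uses); the specific class, fields, and example values are illustrative assumptions, not a format prescribed by the Datasheets for Datasets work.

```python
from dataclasses import dataclass, field, asdict

@dataclass
class Datasheet:
    """Illustrative datasheet record; sections mirror the categories above."""
    name: str
    motivation: str           # why was the dataset created?
    composition: str          # what do the instances represent?
    collection_process: str   # how was the data acquired?
    recommended_uses: str     # which tasks is it appropriate for?
    known_limitations: list = field(default_factory=list)

# A toy datasheet for a hypothetical synthetic dataset.
sheet = Datasheet(
    name="toy-resumes",
    motivation="Prototype fairness interventions in hiring models.",
    composition="Synthetic resume records with demographic attributes.",
    collection_process="Generated programmatically; no real individuals.",
    recommended_uses="Teaching and method prototyping only.",
    known_limitations=["Synthetic data may not reflect real-world skew."],
)

# asdict() turns the record into a plain dict, e.g. for export alongside the data.
print(asdict(sheet)["name"])
```

Even a lightweight record like this forces the dataset creator to answer the questions a downstream practitioner needs before choosing the dataset for a task.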


Fairness in ML Systems

Goal: To assess and mitigate potential unfairness of machine learning (ML) systems.

Approach: The fairlearn package presents a new approach to measuring and mitigating unfairness in systems that make predictions, serve users, or make decisions about allocating resources, opportunities, or information. Since there are many quantitative fairness metrics, which cannot all be satisfied simultaneously, we seek to enable humans to make trade-offs depending on the context. Note that fairness is fundamentally a sociotechnical challenge, so “fair” machine learning tools are not be-all, end-all solutions, and are appropriate only in particular, limited circumstances.
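As a concrete illustration of one such quantitative metric, the sketch below computes the demographic parity difference: the gap in positive-prediction rates between sensitive groups. This is a self-contained toy implementation of the metric, assuming binary 0/1 predictions; it is not the fairlearn API, and the example data is made up.

```python
def selection_rate(preds):
    """Fraction of positive (1) predictions."""
    return sum(preds) / len(preds)

def demographic_parity_difference(y_pred, groups):
    """Largest gap in selection rate across sensitive groups (0 = parity)."""
    by_group = {}
    for pred, group in zip(y_pred, groups):
        by_group.setdefault(group, []).append(pred)
    rates = [selection_rate(preds) for preds in by_group.values()]
    return max(rates) - min(rates)

# Toy predictions: group "a" is selected 75% of the time, group "b" 25%.
y_pred = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(y_pred, groups))  # 0.75 - 0.25 = 0.5
```

Other metrics (e.g. equalized odds, which also conditions on the true label) would be computed analogously, and in general they cannot all be driven to zero at once — hence the emphasis on context-dependent trade-offs.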



Algorithmic Greenlining

Goal: To provide application developers and decision makers with an algorithmic framework for developing selection criteria that yield high-quality and diverse results in contexts such as college admissions, hiring, and image search.

Approach: We call our approach algorithmic greenlining because it is the antithesis of redlining. Using an application-specific notion of substitutability, our algorithms suggest similar criteria with more diverse results, in the spirit of statistical or demographic parity. Take, for example, the scenario of choosing a job-candidate search query. There is typically limited information about any candidate’s “true quality”. An employer’s intuition might suggest searching for “computer programmer”, which yields high-quality candidates but might return few female candidates. The greenlining algorithm suggests alternative queries that are similar but more gender-diverse, such as “software developer” or “web developer”.

Related: Algorithmic greenlining: An approach to increase diversity
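The job-search example above can be sketched in a few lines. In this toy model, each candidate query carries a quality score and the fraction of its results drawn from an underrepresented group; among queries whose quality clears a threshold (a stand-in for the application-specific substitutability notion), we rank by closeness to demographic parity. All queries, scores, and fractions are invented for illustration, and this simplification is not the paper’s actual algorithm.

```python
# query -> (quality score, fraction of results from the underrepresented group)
# Numbers are illustrative only.
candidates = {
    "computer programmer": (0.90, 0.15),
    "software developer":  (0.88, 0.40),
    "web developer":       (0.85, 0.45),
    "typist":              (0.30, 0.60),
}

def greenlining_suggestions(candidates, min_quality=0.8, parity_target=0.5):
    """Rank substitutable queries (quality >= min_quality) by how close
    their result pool is to demographic parity, breaking ties by quality."""
    substitutable = [
        (query, quality, frac)
        for query, (quality, frac) in candidates.items()
        if quality >= min_quality
    ]
    substitutable.sort(key=lambda t: (abs(t[2] - parity_target), -t[1]))
    return [query for query, _, _ in substitutable]

print(greenlining_suggestions(candidates))
# ['web developer', 'software developer', 'computer programmer']
```

Note that “typist” is excluded despite its diverse result pool: it is not a substitutable query for the employer’s intent, which is exactly the trade-off the greenlining framing is meant to navigate.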