FATE: Fairness, Accountability, Transparency, and Ethics in AI

News & features


FATE and friends

We study the complex social implications of AI, machine learning, data science, large-scale experimentation, and increasing automation. Our aim is to facilitate computational techniques that are both innovative and ethical, while drawing on the deeper context surrounding these issues from sociology, history, and science and technology studies.

The types of questions we are exploring are:

  • How can AI assist users and offer enhanced insights, while avoiding exposing them to discrimination in health, housing, law enforcement, and employment?
  • How can we balance the need for efficiency and exploration with fairness and sensitivity to users?
  • As our world moves toward relying on intelligent agents, how can we create a system that individuals and communities can trust?

We are committed to working closely with AI research institutions—including those highlighted below—to address the need for transparency, accountability, and fairness in AI and machine learning systems. We also publish our research in a variety of disciplines, including machine learning, information retrieval, systems, sociology, political science, and science and technology studies.

Data & Society
Data & Society is an independent, nonprofit 501(c)(3) research institute committed to identifying thorny issues at the intersection of technology and society, providing and encouraging research that can ground informed, evidence-based public debates, and building a network of researchers and practitioners who can anticipate issues and offer insight and direction.

AI Now Institute
The AI Now Institute at New York University is an interdisciplinary research center dedicated to understanding the social implications of artificial intelligence.

Partnership on AI
The Partnership on AI studies and formulates best practices on AI technologies, advances the public’s understanding of AI, and serves as an open platform for discussion and engagement about AI and its influence on people and society.

Social Media Collective
The Social Media Collective (SMC) is a network of social science and humanistic researchers, part of the Microsoft Research labs in New England and New York. It includes full-time researchers, postdocs, interns, and visitors.




Datasheets for datasets

Goal: To increase transparency and accountability within the machine learning community by mitigating unwanted biases in machine learning systems, facilitating greater reproducibility of machine learning results, and helping researchers and practitioners select more appropriate datasets for their chosen tasks.

Approach: We are documenting the motivation, composition, collection process, and recommended uses of datasets by piloting datasheets with teams across Microsoft, and by partnering with the Partnership on AI (PAI) to produce an industry best-practices paper with contributions from all partners. This effort is called “Annotation and Benchmarking on Understanding and Transparency of Machine Learning Lifecycles” (ABOUT ML). When organizations implement ABOUT ML best practices, the output will include documentation for internal ML decision makers, external documentation similar to nutrition labels on food or drug labels on pharmaceuticals, and a record of the process the organization followed to improve the responsible development and fielding of AI technologies.
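To make the idea concrete, the documented fields can be captured in a simple structure. This is a hypothetical sketch: the datasheet questions are specified in prose, not code, and the field names and example values below are illustrative rather than an official schema.

```python
from dataclasses import dataclass, field

@dataclass
class Datasheet:
    """Illustrative container for a few of the datasheet questions."""
    motivation: str                 # Why was the dataset created?
    composition: str                # What do the instances represent?
    collection_process: str         # How was the data acquired?
    recommended_uses: list = field(default_factory=list)
    known_limitations: list = field(default_factory=list)

    def to_markdown(self) -> str:
        """Render the datasheet as a simple markdown document."""
        lines = [
            "# Datasheet",
            f"**Motivation:** {self.motivation}",
            f"**Composition:** {self.composition}",
            f"**Collection process:** {self.collection_process}",
        ]
        if self.recommended_uses:
            lines.append("**Recommended uses:** " + ", ".join(self.recommended_uses))
        if self.known_limitations:
            lines.append("**Known limitations:** " + ", ".join(self.known_limitations))
        return "\n".join(lines)

# Invented example values, purely for illustration.
sheet = Datasheet(
    motivation="Benchmark face detection across lighting conditions",
    composition="10,000 labeled images",
    collection_process="Crowdsourced with informed consent",
    recommended_uses=["research benchmarking"],
    known_limitations=["underrepresents low-light indoor scenes"],
)
print(sheet.to_markdown())
```

Rendering the answers into a standard, human-readable artifact is the point: the datasheet travels with the dataset so downstream users can judge whether it suits their task.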

Related: Datasheets for Datasets


Fairness in ML Systems

Goal: To assess and mitigate potential unfairness of machine learning (ML) systems.

Approach: The fairlearn package presents a new approach to measuring and mitigating unfairness in systems that make predictions, serve users, or make decisions about allocating resources, opportunities, or information. Since there are many quantitative fairness metrics that cannot all be satisfied simultaneously, we seek to enable humans to make trade-offs depending on the context. Note that fairness is fundamentally a sociotechnical challenge, so “fair” machine learning tools are not be-all-and-end-all solutions, and are appropriate only in particular, limited circumstances.
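As a minimal sketch of one such quantitative metric, the demographic parity difference can be computed by hand: it is the largest gap in positive-prediction rates between groups. The toy predictions and group labels below are made up for illustration, and this hand-rolled version only gestures at the kind of quantity fairlearn measures.

```python
def selection_rate(predictions, groups, group):
    """Fraction of members of `group` who received a positive prediction."""
    members = [p for p, g in zip(predictions, groups) if g == group]
    return sum(members) / len(members)

def demographic_parity_difference(predictions, groups):
    """Largest gap in selection rate between any two groups."""
    rates = [selection_rate(predictions, groups, g) for g in set(groups)]
    return max(rates) - min(rates)

# Toy example: 1 = approved, 0 = denied, for two demographic groups.
preds  = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

gap = demographic_parity_difference(preds, groups)
print(f"demographic parity difference: {gap:.2f}")  # 0.75 - 0.25 = 0.50
```

A gap of zero would mean both groups are approved at the same rate; but driving this metric to zero can conflict with other metrics (such as equal error rates per group), which is why the trade-off is left to humans who know the context.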



Algorithmic Greenlining

Goal: To introduce our algorithmic framework to application developers and decision makers so that they can develop selection criteria that yield high-quality and diverse results in contexts such as college admissions, hiring, and image search.

Approach: We call our approach algorithmic greenlining because it is the antithesis of redlining. Using an application-specific notion of substitutability, our algorithms suggest similar criteria with more diverse results, in the spirit of statistical or demographic parity. Take, for example, choosing search criteria for job candidates. There is typically limited information about any candidate’s “true quality.” An employer’s intuition might suggest searching for “computer programmer,” which yields high-quality candidates but may return few female candidates. The greenlining algorithm suggests alternative queries that are similar but more gender-diverse, such as “software developer” or “web developer.”
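The job-search scenario can be sketched as follows. The query set, quality scores, and diversity fractions below are invented for illustration, and this simple threshold-and-rank filter only gestures at the idea; the paper's algorithms and its notion of substitutability are more general.

```python
# Hypothetical data: for each query, (estimated result quality,
# fraction of results from the underrepresented group).
queries = {
    "computer programmer": (0.90, 0.15),
    "software developer":  (0.88, 0.35),
    "web developer":       (0.85, 0.40),
    "typist":              (0.40, 0.60),
}

def suggest(original, min_quality_ratio=0.9):
    """Suggest substitutable queries that improve diversity while keeping
    quality within `min_quality_ratio` of the original query's quality."""
    q0, d0 = queries[original]
    candidates = [name for name, (q, d) in queries.items()
                  if name != original
                  and q >= min_quality_ratio * q0   # nearly as high-quality
                  and d > d0]                       # strictly more diverse
    # Rank the surviving alternatives by diversity, best first.
    return sorted(candidates, key=lambda n: -queries[n][1])

print(suggest("computer programmer"))  # ['web developer', 'software developer']
```

Note that “typist” is excluded despite its high diversity: it fails the quality constraint, so it is not a genuine substitute for the original query.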

Related: Algorithmic greenlining: An approach to increase diversity