Guideline 6: Mitigate social biases

During Interaction

Ensure the AI system’s language and behaviors do not reinforce undesirable and unfair stereotypes and biases.

Societal biases work their way into AI systems through multiple routes, including how data is collected, how models are trained and tested, what assumptions are made about the people who will interact with or be impacted by the AI, and more.

Use Guideline 6 as a reminder to plan for identifying, testing for, and mitigating fairness harms.

Tools such as Error Analysis and Fairlearn can help you identify and investigate performance disparities across groups.
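For example, here is a minimal sketch of how Fairlearn's MetricFrame can break a model's performance out by group; the labels, predictions, and the group column are hypothetical placeholders standing in for your own evaluation data.

```python
import pandas as pd
from sklearn.metrics import accuracy_score, recall_score
from fairlearn.metrics import MetricFrame

# Hypothetical ground truth, model predictions, and a sensitive feature
# (e.g., a self-reported demographic attribute) for a small evaluation set.
y_true = pd.Series([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = pd.Series([1, 0, 0, 1, 0, 1, 1, 0])
sensitive = pd.Series(["A", "A", "A", "B", "B", "B", "B", "A"])

# Compute each metric overall and disaggregated by group.
mf = MetricFrame(
    metrics={"accuracy": accuracy_score, "recall": recall_score},
    y_true=y_true,
    y_pred=y_pred,
    sensitive_features=sensitive,
)

print(mf.overall)       # metrics on the full dataset
print(mf.by_group)      # metrics broken out per group
print(mf.difference())  # largest between-group gap for each metric
```

Large values from `difference()` flag metrics where performance diverges across groups, pointing you toward the disparities that deserve deeper investigation.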