Tinker, Tailor, Configure, Customize: The Articulation Work of Customizing AI Fairness Checklists

Computer-Supported Cooperative Work

Many responsible AI resources, such as toolkits, playbooks, and checklists, have been developed to support AI practitioners in identifying, measuring, and mitigating potential fairness-related harms. These resources are often designed to be general purpose so that they can apply to a variety of use cases, domains, and deployment contexts. However, this generality can lead to decontextualization: such resources may lack the relevance or specificity that practitioners need in order to use them. To understand how AI practitioners might contextualize one such resource, an AI fairness checklist, for their particular use cases, domains, and deployment contexts, we conducted a retrospective contextual inquiry with 13 AI practitioners from seven organizations. We identify how contextualizing this checklist introduces new forms of work for AI practitioners and other stakeholders and opens up new sites for the negotiation and contestation of values in AI. We also find that the contextualization process may help AI practitioners develop a shared language around AI fairness, and we surface tensions related to ownership over this process that suggest larger issues of accountability in responsible AI work.