About
Hanna Wallach is a partner research manager at Microsoft Research New York City. Her research focuses on issues of fairness, accountability, transparency, and ethics as they relate to AI and machine learning. She collaborates with researchers from machine learning, natural language processing, human–computer interaction, and science and technology studies, as well as lawyers and policy makers; her research integrates both qualitative and quantitative perspectives.

Previously, she developed machine learning and natural language processing methods for analyzing the structure, content, and dynamics of social processes. She collaborated with political scientists, sociologists, journalists, and others to understand how organizations function by analyzing publicly available interaction data, including email networks, document collections, press releases, meeting transcripts, and news articles. This work was supported by several NSF grants, an IARPA grant, and a grant from the OJJDP.

The impact of Hanna's work has been widely recognized. She has won best paper awards at AISTATS, CHI, and NAACL. In 2014, she was named one of Glamour magazine's "35 Women Under 35 Who Are Changing the Tech Industry." In 2016, she was named co-winner of the Borg Early Career Award. She served as the senior program chair for the NeurIPS 2018 conference and as the general chair for the NeurIPS 2019 conference. She currently serves on the NeurIPS Executive Board, the ICML Board, the FAccT Steering Committee, the WiML Senior Advisory Council, and the WiNLP Advisory Board.

Hanna is committed to increasing diversity in computing and has worked for almost two decades to address the underrepresentation of women, in particular. To that end, she co-founded two projects—the first of their kind—to increase women's involvement in free and open source software development: Debian Women and the GNOME Women's Summer Outreach Program (now Outreachy). She also co-founded the WiML Workshop.

Hanna holds a BA in computer science from the University of Cambridge, an MSc in cognitive science and machine learning from the University of Edinburgh, and a PhD in machine learning from the University of Cambridge.
Featured content

Fairness-related harms in AI systems: Examples, assessment, and mitigation webinar
In this webinar, Microsoft researchers Hanna Wallach and Miroslav Dudík will explain how AI systems can cause a variety of fairness-related harms and then dive deeper into assessing and mitigating two specific types: allocation harms and quality-of-service harms. Allocation harms occur when AI systems allocate resources or opportunities in ways that can have significant negative impacts on people's lives, often in high-stakes domains like education, employment, finance, and healthcare. Quality-of-service harms occur when AI systems, such as speech recognition or face detection systems, fail to provide a similar quality of service to different groups of people.