Timnit Gebru

Postdoctoral Researcher

About

Timnit Gebru works in the Fairness, Accountability, Transparency, and Ethics (FATE) group at the New York Lab. Prior to joining Microsoft Research, she was a PhD student in the Stanford Artificial Intelligence Laboratory, studying computer vision under Fei-Fei Li. Her main research interest is in mining large-scale, publicly available images to gain sociological insight, and in the computer vision problems that arise as a result, including fine-grained image recognition, scalable annotation of images, and domain adaptation. The Economist and others have recently covered parts of this work. She is currently studying how to take dataset bias into account when designing machine learning algorithms, as well as the ethical considerations underlying any data mining project. As a cofounder of the group Black in AI, she works both to increase diversity in the field and to reduce the impact of racial bias in data.

Publications

Other

Invited talks

Predicting demographics using 50 million images

CVPR LSVisCom 2015 | AI with the Best 2016 | AI in FinTech Forum 2017


Addis Coder: Algorithms and programming for high schoolers

This one-month summer class for high school students in Ethiopia, organized by Jelani Nelson, brought together 85 students from across the country, representing many religions and at least 10 languages. The students came from different income levels, from both urban and rural communities, in a 50/50 female/male ratio, and with widely varying levels of computer experience; all of them learned the basics of recursion, dynamic programming, graphs, and more.

Website


Black in AI

Black in AI is an initiative designed to increase the presence of black people in the field of AI, a group currently woefully underrepresented. Its goals are twofold: to ensure that people who identify as black feel welcome in the industry, and to reduce bias in the data used in machine learning. Because AI systems are trained by humans, they end up reflecting the biases and perspectives of their designers, whether conscious or not. We need to expand the perspectives of those training these systems in order to eliminate racial bias, as reflected in notorious examples such as Google's photo-recognition feature that was not trained on enough black faces, or the AI system designed to predict criminal behavior that demonstrated racial bias in its predictions. Black in AI's founding members include AI researchers from Cornell, UC Berkeley, Facebook, the University of Illinois, and Microsoft Research. Our first workshop will be held at NIPS 2017.

Website


More mentorship

Timnit is involved in various outreach/mentorship activities including:

EDGE (2014-2016)

EJHS Scholars Program (2017)

SAILORS (2015)