Explainable AI for Science and Medicine

  • Scott Lundberg | University of Washington

Understanding why a machine learning model makes a certain prediction can be as crucial as the prediction's accuracy in many applications. Here I will present a unified approach to explain the output of any machine learning model. It connects game theory with local explanations, uniting many previous methods. I will then focus specifically on tree-based models, such as random forests and gradient boosted trees, for which we have developed the first polynomial-time algorithm to exactly compute classic attribution values from game theory. Based on these methods, we have created a new set of tools for understanding both global model structure and individual model predictions. These methods were motivated by specific problems we faced in medical machine learning, and they significantly improve doctor decision support during anesthesia. However, these explainable machine learning methods are not specific to medicine and are now used by researchers across many domains. The associated open source software (http://github.com/slundberg/shap) supports many modern machine learning frameworks and is very widely used in industry (including at Microsoft).
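
For context, the classic attribution values from game theory referenced above are Shapley values, which average a feature's marginal contribution to the model output over all subsets of the other features (this is the standard definition, not material quoted from the talk):

```latex
\phi_i = \sum_{S \subseteq F \setminus \{i\}}
  \frac{|S|!\,(|F| - |S| - 1)!}{|F|!}
  \left[ f_{S \cup \{i\}}\!\left(x_{S \cup \{i\}}\right) - f_S\!\left(x_S\right) \right]
```

where F is the full feature set and f_S denotes the model's output when only the features in S are present. As a concrete illustration of the associated tools, below is a minimal sketch of computing these attributions with the shap package on a tree ensemble; the synthetic dataset and random forest model are illustrative assumptions, not examples from the talk:

```python
# Minimal sketch: exact Shapley value attributions for a tree ensemble
# using the shap package. Requires shap, scikit-learn, numpy, matplotlib.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

# Fit a simple tree ensemble on synthetic data (illustrative only).
rng = np.random.RandomState(0)
X = rng.normal(size=(500, 4))
y = X[:, 0] + 2 * X[:, 1] ** 2 + rng.normal(scale=0.1, size=500)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer uses the polynomial-time tree algorithm to compute exact
# Shapley value attributions for every prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Local accuracy: base value + per-feature attributions == the prediction.
print(np.ravel(explainer.expected_value)[0] + shap_values[0].sum())
print(model.predict(X[:1])[0])

# Global structure: aggregate the per-prediction attributions into a
# feature importance summary plot.
shap.summary_plot(shap_values, X)
```

The two printed values should match: the base (expected) value plus the per-feature attributions for a sample sums exactly to the model's prediction for that sample, while the summary plot aggregates the same per-prediction attributions into a view of global model structure.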

Speaker Details

Scott Lundberg is a PhD candidate and NSF fellow in the Paul G. Allen School of Computer Science & Engineering at the University of Washington, working with Su-In Lee. His research focuses on explainable machine learning and its application to problems in medicine and healthcare. This has led to the development of broadly applicable methods and tools for interpreting complex machine learning models. Before coming to UW, he received his B.S. and M.S. from Colorado State University and worked as a research scientist for five years with Numerica Corporation.