Northwest Database Society Annual Meeting: Explainable Artificial Intelligence in Precision Medicine

Modern machine learning models can accurately predict patient progress and outcomes; however, they are not interpretable, in the sense that they do not explain why selected features make sense or why a particular prediction was made. I will talk about my group’s efforts to address these challenges by developing interpretable machine learning techniques for a wide range of applications, including treating cancer based on a patient’s own molecular profile, finding therapeutic targets for Alzheimer’s disease, predicting chronic kidney disease, preventing complications during surgery, enabling pre-hospital predictions for trauma patients, and improving our understanding of pan-cancer biology and genome biology. Among these, I will mainly focus on two projects: MERGE, which uses machine learning to enable targeted treatment of acute myeloid leukemia, published in Nature Communications last year, and Prescience, our explainable artificial intelligence system for preventing hypoxemia in patients under anesthesia, featured on the cover of a recent issue of Nature Biomedical Engineering.


Johannes Gehrke, Su-In Lee
Microsoft Research, Paul G. Allen School of Computer Science & Engineering
