About
I am a researcher at Microsoft Research Redmond Labs with the Real-world Evidence group, where I work on Health AI at scale.
Medicine today is still jarringly imprecise. Diagnostic errors cause hundreds of thousands of deaths and permanent disabilities each year, with cancer misdiagnosis among the leading contributors [1]. At the same time, nearly half of U.S. physicians report burnout, worn down by the cognitive and administrative demands of modern care [2]. Our therapeutic arsenal is also blunt: many drugs for diseases like cancer help only 20–30% of patients, leaving most with little benefit or harmful side effects [3, 4]. Yet there is hope: for the first time in history, we have decades of digitized patient data – records of trajectories, treatments, imaging, reports, and outcomes – a resource that remains largely untapped. Imagine a system that could learn from all this data at scale: one that reduces diagnostic errors, eases clinician burden, and ensures the right therapy reaches the right patient. This is the overarching vision of my research.
My current focus is on:
- Developing advanced multi-modal foundation models for radiology, pathology, and oncology from large-scale real-world data.
- Advancing machine learning methods for encoding clinically valuable representations of data such as medical records, imaging, and lab results.
- Developing virtual patient frameworks and virtual populations to discover biomarkers and new relationships at population scale.
- Pioneering new frontiers in medical AI including spatial medicine, which links tissue architecture, cell states, and molecular signals to enable deeper insights into disease and treatment.
Some of my notable works include UNeXt (a fast segmentation architecture), Medical Transformer, DAE (a 3D self-supervised learning pipeline), and CLIP-goes-3D. I have also contributed to medical vision–language models such as ChexAgent (for chest X-rays) and Merlin (for CT). For my full list of publications, visit my Google Scholar profile.
Previously, I was a postdoctoral researcher at Stanford University, working with Prof. Andrew Ng and Prof. Curt Langlotz on several Health AI projects ranging from synthetic data generation to medical vision foundation models. I also led Stanford’s AI for Healthcare bootcamp. I obtained my PhD and MS from Johns Hopkins University, advised by Prof. Vishal M. Patel. During my PhD, I worked on topics such as designing effective deep architectures, model adaptability to changing environments, multi-modal learning, and taming large models for computer vision and healthcare tasks.
I have also been recognized with awards including the Young Scientist Impact Award (MICCAI Society), an Amazon Research Fellowship, the NIH MICCAI Award, a DAAD AI Fellowship, multiple Best Paper awards (ICRA 2022, CVIP 2019), project seed grants (Stanford HAI), and travel grants.
My work has appeared at top venues in ML, CV, and Health AI, including CVPR, ICLR, ECCV, ICRA, MICCAI, MIDL, and IEEE Transactions. I serve as an area chair at MICCAI and as a reviewer for many top journals and conferences. I am also involved in organizing multiple Health AI workshops at conferences such as CVPR and MICCAI.