My research is at the intersection of psychology and computer science. I am designing hardware and algorithms for sensing human behavior at scale in order to build technologies that make life better. The applications of human sensing I am developing include understanding mental health, improving online learning, and designing new connected devices (IoT) and mixed reality experiences.
I am a researcher at Microsoft Research in Redmond and a visiting scientist at Brigham and Women’s Hospital in Boston. Previously, I was Director of Research at Affectiva, an MIT Media Lab spin-out. I received my Ph.D. from the MIT Media Lab while working in the Affective Computing group, and my bachelor’s degree, with first-class honors, and master’s degree in engineering from the University of Cambridge.
In this webinar, Microsoft Principal Researcher Daniel McDuff and University of Washington PhD student Xin Liu will present an overview of computer vision methods that leverage ordinary webcams to measure physiological signals (for example, peripheral blood flow, heart rate, respiration, and blood oxygenation) without contact with the body.
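The measurement approach described above is often called remote photoplethysmography (rPPG): subtle frame-to-frame color changes in facial skin, driven by blood volume changes, are tracked and their dominant frequency in the plausible heart-rate band is reported as the pulse rate. The sketch below illustrates only that last step, assuming a precomputed per-frame mean green-channel trace; the function name, the synthetic input, and the brute-force DFT peak search are illustrative assumptions, not the method presented in the webinar.

```python
import math

def estimate_heart_rate(signal, fps, lo_hz=0.7, hi_hz=4.0):
    """Estimate pulse rate (BPM) from a 1-D trace of per-frame mean
    green-channel intensity by finding the dominant DFT frequency in
    the plausible heart-rate band (0.7-4 Hz, i.e. 42-240 BPM)."""
    n = len(signal)
    mean = sum(signal) / n
    detrended = [s - mean for s in signal]  # remove the DC offset
    best_freq, best_power = 0.0, -1.0
    for k in range(1, n // 2):
        freq = k * fps / n  # frequency (Hz) of DFT bin k
        if not (lo_hz <= freq <= hi_hz):
            continue
        re = sum(x * math.cos(2 * math.pi * k * i / n)
                 for i, x in enumerate(detrended))
        im = sum(x * math.sin(2 * math.pi * k * i / n)
                 for i, x in enumerate(detrended))
        power = re * re + im * im
        if power > best_power:
            best_freq, best_power = freq, power
    return best_freq * 60.0  # Hz -> beats per minute

# Synthetic demo: a 1.2 Hz (72 BPM) pulse buried in slow lighting
# drift, sampled at a webcam-like 30 frames per second for 10 seconds.
fps, seconds = 30, 10
trace = [
    100 + 0.5 * math.sin(2 * math.pi * 1.2 * t / fps)  # pulse component
    + 2.0 * math.sin(2 * math.pi * 0.05 * t / fps)     # lighting drift
    for t in range(fps * seconds)
]
print(round(estimate_heart_rate(trace, fps)))  # -> 72
```

In a real pipeline the trace would come from a face detector averaging skin pixels per frame, and the raw signal would be filtered and denoised far more carefully; this sketch only shows why a periodic blood-volume signal is recoverable from ordinary video.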
Episode 17, March 28, 2018 - One of the most intriguing areas of machine learning research is affective computing, where scientists are working to bridge the gap between human emotions and computers. It is here, at the intersection of psychology and computer science, that we find Dr. Daniel McDuff, who has been designing systems, from hardware to algorithms, that can sense human behavior and respond to human emotions.
Today, Dr. McDuff talks about why we need computers to understand us, outlines the pros and cons of designing emotionally sentient agents, explains the technology behind CardioLens, a pair of augmented reality glasses that can take your heart rate by looking at your face, and addresses the challenges of maintaining trust and privacy when we’re surrounded by devices that want to know not just what we’re doing, but how we’re feeling.