Emotions are fundamental to human interactions and influence memory, decision-making, and well-being. As AI systems and intelligent agents become more advanced, there is increasing interest in applications that can sense and respond to emotional states. We are researching and developing systems that recognize, interpret, process and respond to human emotions for social good.
We are working on several projects to bring artificial emotional intelligence to Microsoft products:
- Building a large database of labeled naturalistic data. Having a large amount of labeled data is essential for training affective computing systems. While Microsoft has an unparalleled amount of data that could be mined for this purpose, what is missing is an efficient labeling methodology for adding human judgments.
- Designing new methods for sensing emotion signals. We are advancing the state of the art in remote physiological measurement using webcams and other cameras to capture physiological responses, facial expression, voice tone, and language.
- Advancing multimodal analysis. We are combining affect measurement (for example, computer vision analysis of facial expression and scenes) with language models to generate computational models of conversation that better reflect emotions.
- Prototyping emotionally adaptive systems. How should systems—from search interactions to productivity tools—respond to emotions? We prototype systems that respond in real time and perform user studies to inform the design of effective affective computing applications.
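One concrete instance of the camera-based physiological sensing mentioned above is remote photoplethysmography (rPPG), which recovers pulse from subtle color changes in the skin. The sketch below is a minimal, hedged illustration of the idea, not our production pipeline: it averages a (here synthetic) green-channel signal per frame, removes the DC component, and picks the dominant frequency in a plausible heart-rate band. The function name and the synthetic signal are assumptions for illustration only.

```python
import numpy as np

def estimate_heart_rate(green_means, fps, lo_hz=0.7, hi_hz=4.0):
    """Estimate heart rate (BPM) from per-frame mean green-channel values.

    Toy rPPG pipeline: subtract the mean, take an FFT, and keep the
    strongest frequency in a plausible heart-rate band (0.7-4 Hz,
    roughly 42-240 BPM).
    """
    signal = np.asarray(green_means, dtype=float)
    signal = signal - signal.mean()                 # remove DC offset
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    power = np.abs(np.fft.rfft(signal)) ** 2
    band = (freqs >= lo_hz) & (freqs <= hi_hz)      # plausible pulse range
    peak_hz = freqs[band][np.argmax(power[band])]
    return peak_hz * 60.0                           # Hz -> beats per minute

# Synthetic demo: a 1.2 Hz (72 BPM) pulse sampled at 30 fps for 10 s,
# buried in noise, standing in for real face-region green-channel means.
fps = 30
t = np.arange(0, 10, 1.0 / fps)
rng = np.random.default_rng(0)
green = 0.5 * np.sin(2 * np.pi * 1.2 * t) + 0.1 * rng.standard_normal(t.size)
bpm = estimate_heart_rate(green, fps)
```

A real system would first detect and track a face region, average pixel values within it, and use more robust filtering, but the frequency-domain step shown here is the core of most rPPG methods.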
This software tool uses the sensors on a workstation to capture data about your emotional state and activities. This data is vital for building intelligent agents that are natural to engage with.
Emotional Conversation Agents
Conversational interfaces are becoming increasingly popular. Recent advances in speech recognition, generative dialogue models, and speech synthesis have enabled practical voice-based applications.
Using ML to further understand and help users
Understanding the use of word embeddings from practitioners
Recent research has shown that word embeddings can exhibit unwanted associations that reflect human biases, including gender stereotypes and racial prejudices. In this project, our goal is to understand how word embedding models are used by practitioners and to learn how they reason about the potential effects of unwanted associations in their downstream AI applications. Through this work, we hope to identify opportunities for future research and develop guidelines and tools to increase awareness of such unwanted associations as well as support the practical uses of word embeddings.
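To make the notion of "unwanted associations" concrete, the sketch below measures a gendered association with cosine similarity, in the spirit of published embedding-bias tests. The three-dimensional vectors are hand-constructed toy values chosen to mimic the effect; real embeddings are high-dimensional and learned from corpora, and the specific words and numbers here are assumptions for illustration.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy "embeddings" constructed by hand to mimic the kind of gendered
# association found in real models (values are illustrative only).
emb = {
    "he":       np.array([1.0, 0.1, 0.0]),
    "she":      np.array([-1.0, 0.1, 0.0]),
    "engineer": np.array([0.8, 0.9, 0.1]),
    "nurse":    np.array([-0.8, 0.9, 0.1]),
}

def association(word, a="he", b="she"):
    """Positive -> `word` is closer to `a`; negative -> closer to `b`."""
    return cosine(emb[word], emb[a]) - cosine(emb[word], emb[b])

bias_engineer = association("engineer")  # leans toward "he" in this toy data
bias_nurse = association("nurse")        # leans toward "she" in this toy data
```

Simple difference-of-similarity scores like this one are what make such associations measurable; how practitioners interpret and act on them in downstream applications is exactly what this project studies.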
Interpretability of machine learning models in clinical practice
Clinical decision support systems, from risk scores to diagnostic predictions, are essential tools for augmenting healthcare providers. Machine learning has the potential to extract new and useful insights from the enormous amounts of data generated in the delivery of healthcare, and the use of machine learning models in decision support systems has become increasingly popular. Interpretability of these models has been suggested as an important criterion for such systems. In this project, our goal is to evaluate different interpretable machine learning models with physicians to understand the value of, and need for, interpretability in clinical decision support systems.
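One common notion of interpretability is that a clinician can see exactly which factors drove a prediction. The sketch below illustrates this with a linear (logistic-style) risk score whose per-feature contributions can be read off directly. The feature names, weights, and bias are invented for illustration and are not clinically validated.

```python
import math

# Hypothetical hand-set weights for a linear risk score; in a real system
# these would be learned from data and clinically validated.
WEIGHTS = {"age_over_65": 1.2, "prior_admission": 0.8, "abnormal_lab": 1.5}
BIAS = -2.0

def risk(features):
    """Return (probability, per-feature contributions) for one patient.

    `features` maps feature names to 0/1 indicators. Because the model is
    linear, each feature's contribution to the logit is just weight * value.
    """
    contributions = {k: WEIGHTS[k] * features.get(k, 0) for k in WEIGHTS}
    logit = BIAS + sum(contributions.values())
    prob = 1.0 / (1.0 + math.exp(-logit))   # logistic link
    return prob, contributions

p, why = risk({"age_over_65": 1, "abnormal_lab": 1})
# `why` shows exactly which features pushed the score up or down,
# which is the kind of explanation a physician can audit.
```

Whether explanations like this are actually useful to physicians, compared with more flexible but opaque models, is the empirical question this project evaluates.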
Designing technology-enabled depression care for cancer patients
Cancer and depression are prevalent and disabling co-occurring problems. Although we know that the integration of psychosocial care into oncology care is important, depression is under-treated in cancer patients. Behavioral activation (BA) has been shown to be effective in treating depression, including in cancer patients, but challenges and barriers arise from the cognitive, medical, financial, and logistical complexities of cancer treatment. The goal of this project is to understand the challenges, barriers, and needs of patients and their care team (oncologists, psychiatrists, social workers, and administrators) and to design, build, and test technologies that overcome these challenges for the effective delivery of psychosocial care to cancer patients.
Data-driven app design of skill-based psychotherapy: Pocket Skills case study
Dialectical Behavioral Therapy (DBT), a psychotherapy designed to help people with complex, difficult-to-treat disorders, aims to teach patients concrete coping skills to help them navigate negative events and emotions. Prior skill-based psychotherapy apps have been shown to reduce depression and anxiety, but the efficacy of individual skills has not been quantitatively analyzed to provide a customized set of tools that works for different individuals. In this project, we examined data from a month-long field study of Pocket Skills—an app designed to offer holistic support of DBT—and identified contributing factors to improvement, both overall and for different types of participants. The goal of this project is to produce data-driven design implications and build personalized, context-aware, skill-based psychotherapy that is effective both for skill practice and for in-the-moment use in times of distress.
Focus Agent 2.0
Intelligent assistance for planning and scheduling tasks, guiding the user through them, and supporting task switching and healthy breaks.