Machine Understanding of Human Audio/Visual Affective Expressions
- Zhihong (John) Zeng | UIUC
Automatic human affect recognition aims to enable a computer to understand human affective behavior through sensors, with the goal of building human-centered, affect-sensitive computing environments. This talk includes four parts.
The state of the art
Automated analysis of human affective behavior has attracted increasing attention from researchers in multiple disciplines. I will present a picture of this research through our recent comprehensive survey (accepted in PAMI), in which we examine available approaches and databases, focusing on two trends in the field: analysis of spontaneous affective behavior and multimodal analysis of human affective behavior.
Multimodal fusion for audiovisual affect recognition
Many studies from both psychological and engineering research fields have demonstrated the importance of integrating information from multiple modalities to yield a coherent representation and inference of emotions. I will present our efforts toward multimodal affect recognition, focusing on the Multi-stream Fused Hidden Markov Model, which models multimodal fusion according to the maximum entropy principle and the maximum mutual information criterion.
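To make the fusion idea concrete, here is a minimal decision-level fusion sketch in Python. It is not the Multi-stream Fused Hidden Markov Model described in the talk; it only illustrates the simpler baseline of combining per-class log-likelihoods from two independently trained stream models (e.g. one audio HMM and one video HMM) with a reliability weight. The function name, weights, and scores are all hypothetical.

```python
import numpy as np

def fuse_log_likelihoods(audio_ll, video_ll, audio_weight=0.5):
    """Combine per-class log-likelihoods from an audio stream and a
    video stream with a linear reliability weight, then pick the
    most likely affect class. (Illustrative baseline only.)"""
    audio_ll = np.asarray(audio_ll, dtype=float)
    video_ll = np.asarray(video_ll, dtype=float)
    fused = audio_weight * audio_ll + (1.0 - audio_weight) * video_ll
    return int(np.argmax(fused)), fused

# Hypothetical per-class scores (e.g. happy, neutral, sad) produced
# by two separately trained stream models:
label, fused = fuse_log_likelihoods([-10.2, -11.5, -14.0],
                                    [-12.0, -9.8, -13.1],
                                    audio_weight=0.6)
print(label)  # index of the class with the highest fused score
```

A model-level approach such as the Multi-stream Fused HMM goes further by coupling the hidden states of the streams during training, rather than only combining their output scores.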
Multi-view facial expression recognition
The ability to handle multi-view facial expressions is important for computers to understand affective behavior in less constrained environments. I will present our investigation of various feature extraction methods, classifiers, and classifier fusion for multi-view facial expression recognition. At the same time, we explored an interesting question: whether non-frontal-view facial expression analysis can achieve the same or better performance than existing frontal-view facial expression studies.
Spontaneous affective expression analysis
I will present our initial efforts toward recognition of audiovisual affective expressions occurring in a psychological human interaction setting—the Adult Attachment Interview (AAI). This work scratches the surface of the complexity of automatic spontaneous affect recognition, and it leads into the discussion of challenges that concludes my talk.
Speaker Details
He received his B.E. in 1987 from Nanjing University of Aeronautics and Astronautics, his M.E. in 1989 from Tsinghua University, and his Ph.D. from the National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences. His Ph.D. dissertation was on real-time shape tracking under various environments. He worked as a research intern at Microsoft Research Asia from July 2002 to November 2002, with Dr. Stan Li, on real-time multiple face detection and tracking. Since November 2002, he has been a Post-Doc with the Human Computer Intelligent Interaction Research Initiative at the Beckman Institute, UIUC, working with Prof. Thomas Huang. Today he will present his interdisciplinary research on machine understanding of human audio/visual affective behavior, which is funded by a Beckman Postdoctoral Fellowship. His interests include HCI, human sensing and affective computing, computer vision, multimedia, machine learning, and pattern recognition.