Emotion Detection from Speech Signals

Date

August 8, 2013

Speaker

Kun Han

Affiliation

Ohio State University

Overview

Despite the great progress made in artificial intelligence, we are still far from natural interaction between humans and machines, because machines do not understand the emotional state of the speaker. Speech emotion detection, which aims to recognize emotional states from speech signals, has been drawing increasing attention. The task is very challenging, because it is not clear which speech features are most effective in distinguishing between emotions. We utilize deep neural networks to detect emotional states from each speech segment in an utterance and then combine the segment-level results to form the utterance-level recognition result. The system produces promising results on both clean speech and speech in a gaming scenario.
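The two-stage idea described above (segment-level prediction followed by utterance-level combination) can be sketched as follows. This is a minimal illustrative sketch, not the system from the talk: the emotion label set and the averaging rule for combining segment posteriors are assumptions for the example.

```python
import numpy as np

# Assumed label set for illustration only.
EMOTIONS = ["angry", "happy", "neutral", "sad"]

def utterance_emotion(segment_posteriors):
    """Combine per-segment emotion posteriors into an utterance-level decision.

    segment_posteriors: array of shape (num_segments, num_emotions), e.g.
    softmax outputs of a segment-level deep neural network.
    Returns the emotion whose posterior, averaged over all segments, is highest.
    (Averaging is one simple combination rule; the talk's actual method may differ.)
    """
    probs = np.asarray(segment_posteriors, dtype=float)
    avg = probs.mean(axis=0)  # average posterior over segments
    return EMOTIONS[int(np.argmax(avg))]

# Example: three segments whose posteriors mostly favor "happy".
posteriors = [
    [0.1, 0.6, 0.2, 0.1],
    [0.2, 0.5, 0.2, 0.1],
    [0.3, 0.3, 0.3, 0.1],
]
print(utterance_emotion(posteriors))  # prints "happy"
```

One motivation for segment-level modeling is that emotional cues are not spread uniformly across an utterance, so scoring short segments and then pooling can be more robust than a single utterance-level feature vector.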

Speakers

Kun Han

Kun Han is a Ph.D. candidate in the Department of Computer Science and Engineering at The Ohio State University. He is interested in machine learning for speech processing. In particular, his research focuses on classification-based speech separation. He is doing an internship at Microsoft Research this summer, working on speech emotion detection.