Expressive Speech-Driven Facial Animation

Date

September 13, 2007

Overview

Computer graphics and animation is a broad, multidisciplinary area of research. It serves as a visual tool for fields such as entertainment, scientific visualization, and medical imaging. At the same time, computer graphics continually renews its own research and technologies, inspiring artists and designers to create novel forms of human-computer interaction.

While I am interested in a wide range of graphics problems, my current focus is character animation, especially facial animation. Automatically synthesizing realistic facial animation remains a very challenging problem in computer animation research, and two issues in particular have received little attention: real-time lip-syncing and the modeling of expressive visual speech.

In this talk, I will present a data-driven approach that addresses both issues. First, I will present a greedy graph search algorithm for real-time lip-syncing; it substantially outperforms exhaustive search and allows real-time motion synthesis from a large database of motions. Second, I will describe a machine learning approach to modeling expressive visual behavior during speech.
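The abstract does not spell out the algorithm, but the following minimal sketch shows the general idea behind a greedy graph search over a motion database. Everything here (the `Segment` class, the `greedy_lip_sync` function, the 1-D pose stand-in) is an illustrative assumption rather than the speaker's actual system: clips of captured mouth motion form a graph, and for each phoneme in the input the search greedily takes the locally best-joining clip instead of exploring all paths, which is what keeps the cost low enough for real time.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Segment:
    """A short captured mouth-motion clip, labeled with the phoneme it covers."""
    phoneme: str
    start_pose: float                      # 1-D stand-in for a full facial pose
    end_pose: float
    successors: List["Segment"] = field(default_factory=list)  # motion-graph edges

def greedy_lip_sync(roots: List[Segment], phonemes: List[str]) -> List[Segment]:
    """Greedily walk the motion graph: for each target phoneme, pick the
    matching successor clip that joins the current clip most smoothly."""
    current: Optional[Segment] = next(
        (s for s in roots if s.phoneme == phonemes[0]), None)
    if current is None:
        return []
    path = [current]
    for phoneme in phonemes[1:]:
        candidates = [s for s in current.successors if s.phoneme == phoneme]
        if not candidates:
            break  # no matching continuation; a real system might blend or backtrack
        prev_end = current.end_pose
        # Greedy choice: minimize pose discontinuity at the transition point.
        current = min(candidates, key=lambda s: abs(s.start_pose - prev_end))
        path.append(current)
    return path

# Tiny example graph for the phoneme sequence of "hello", simplified.
ow = Segment("OW", 0.8, 0.1)
el = Segment("L",  0.5, 0.7, [ow])
eh = Segment("EH", 0.2, 0.6, [el])
hh = Segment("HH", 0.0, 0.3, [eh])
print([s.phoneme for s in greedy_lip_sync([hh], ["HH", "EH", "L", "OW"])])
# -> ['HH', 'EH', 'L', 'OW']
```

Replacing the 1-D pose stand-in with real facial parameters and blending at each transition would bring this sketch closer to a practical system; the greedy step itself is what avoids the combinatorial cost of a full graph search.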

Speakers

Yong Cao

Yong Cao received his Ph.D. in Computer Science from the University of California, Los Angeles in 2005, where he worked on character animation. During his Ph.D. studies, he also worked at the Institute for Creative Technologies at USC, researching and developing a 3D virtual tutor for a U.S. Army training program. After receiving his degree, he worked at Electronic Arts, focusing on animation and rendering systems for next-generation game consoles. Dr. Cao is currently an assistant professor in the Computer Science Department at Virginia Tech.
