HRTF Personalization using Anthropometric Features


September 27, 2013


Piotr Bilinski




Head-related transfer functions (HRTFs) represent the acoustic transfer function between a sound source and the entrance of the blocked ear canal. Imposing HRTFs onto a non-spatial audio signal and playing back the result over headphones evokes the perception of a virtual 3D auditory space. The commercial breakthrough of HRTF-based audio has been hindered by the fact that HRTFs are highly individual, and using HRTFs other than the listener’s own can significantly impair localization. Since measuring HRTFs is an expensive process, automatic selection or synthesis of HRTFs is desirable. We therefore propose methods for HRTF selection and synthesis based on Decision Trees, Sparse Representation, and Neural Networks. These methods map a person’s anthropometric features and other personal information to the acoustic HRTFs. The results are analyzed both objectively, using measures such as spectral distortion, and subjectively in a listening experiment.
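The rendering step mentioned above — imposing HRTFs onto a non-spatial signal for headphone playback — amounts, in the time domain, to convolving the mono signal with a pair of head-related impulse responses (HRIRs), one per ear. A minimal sketch, assuming single-position time-domain HRIRs are already available (the function and variable names here are illustrative, not from the talk):

```python
import numpy as np

def spatialize(mono, hrir_left, hrir_right):
    """Render a mono signal at the position associated with an HRIR pair.

    mono       : 1-D array, the non-spatial audio signal
    hrir_left  : 1-D array, head-related impulse response for the left ear
    hrir_right : 1-D array, head-related impulse response for the right ear

    Returns an (N, 2) stereo array suitable for headphone playback.
    """
    # Time-domain convolution; an FFT-based method would be used in
    # practice for long signals, but the result is the same.
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return np.stack([left, right], axis=1)
```

A moving source would additionally require interpolating between measured HRIR positions and cross-fading between the resulting filters, which this sketch omits.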
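One way to realize the sparse-representation idea sketched in the abstract is to express a new user's anthropometric feature vector as a sparse weighted combination of the training subjects' feature vectors, and then transfer the same weights to the acoustic domain to combine those subjects' measured HRTFs. The following is a rough illustration of that idea, not the talk's actual algorithm; it solves the L1-regularized least-squares problem with plain ISTA (iterative soft-thresholding), and all names and parameters are assumptions:

```python
import numpy as np

def sparse_weights(A, y, lam=1e-3, n_iter=500):
    """Approximately solve  min_w  0.5*||A w - y||^2 + lam*||w||_1  via ISTA.

    A : (n_features, n_subjects) — each column is one training subject's
        anthropometric feature vector
    y : (n_features,) — the new user's anthropometric features
    """
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    w = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (A @ w - y)              # gradient of the smooth part
        z = w - g / L                      # gradient step
        w = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return w

def synthesize_hrtf(anthro_train, hrtf_train, anthro_new, lam=1e-3):
    """Transfer sparse weights from the anthropometric to the acoustic domain.

    anthro_train : (n_subjects, n_features) training anthropometric features
    hrtf_train   : (n_subjects, n_bins) corresponding HRTF magnitudes
    anthro_new   : (n_features,) features of the new user
    """
    w = sparse_weights(anthro_train.T, anthro_new, lam)
    return w @ hrtf_train                  # weighted combination of HRTFs
```

The sparsity penalty encourages the new user to be explained by a few similar training subjects, which is what makes transferring the weights to the HRTF domain plausible.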


Piotr Bilinski

Piotr Bilinski is a Ph.D. candidate in the Spatio-Temporal Activity Recognition Systems (STARS) research group at INRIA, France. His research interests include Computer Vision, Action Recognition, Object Detection and Tracking, Pattern Recognition, Machine Learning, and Signal Processing. This summer he is an intern at Microsoft Research, working on HRTF personalization using anthropometric features.