Head-related transfer functions (HRTFs) represent the acoustic transfer function between a sound source and the entrance of the blocked ear canal. Imposing HRTFs onto a non-spatial audio signal and playing back the result over headphones evokes the perception of a virtual 3D auditory space. The commercial breakthrough of HRTF-based audio has been hindered by the fact that HRTFs are highly individual: using HRTFs other than the listener's own can significantly degrade the spatial impression. Since measuring HRTFs is an expensive process, automatic selection or synthesis of HRTFs is desirable. We therefore propose methods for HRTF selection and synthesis based on decision trees, sparse representation, and neural networks. These methods map a person's anthropometric features and other personal information to the acoustic HRTFs. The results are evaluated both objectively, using measures such as spectral distortion, and subjectively in a listening experiment.
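For reference, spectral distortion between a measured HRTF \(H\) and a selected or synthesized approximation \(\hat{H}\) is commonly defined as follows (this is one standard formulation over \(N\) frequency bins \(f_n\), not necessarily the exact variant used in this work):

```latex
\mathrm{SD} = \sqrt{\frac{1}{N}\sum_{n=1}^{N}
  \left( 20 \log_{10} \frac{|H(f_n)|}{|\hat{H}(f_n)|} \right)^{2}}
  \;\; \mathrm{dB}
```

Lower values indicate a closer magnitude-spectrum match between the approximated and the individually measured HRTF.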