Abstract

Head motion determination is an important problem for many applications, including face modeling and tracking. We present a new algorithm that computes the head motion between two views from the correspondences of five feature points (eye corners, mouth corners, and nose tip), plus zero or more additional image point matches. The algorithm takes advantage of the physical properties of these feature points, such as symmetry, which significantly improves the robustness of head motion estimation. This is achieved by reducing the number of unknowns to be estimated, thus increasing information redundancy. The idea extends easily to any number of feature point correspondences.
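As a rough illustration of how symmetry reduces the number of unknowns (this is a sketch under assumed conventions, not the paper's exact parameterization): if the head's local frame puts the facial symmetry plane at x = 0, then each eye-corner and mouth-corner pair mirrors across that plane and the nose tip lies on it, so the five 3D points are described by fewer free parameters than fifteen unconstrained coordinates.

```python
import numpy as np

def symmetric_face_points(eye=(3.0, 1.0, 0.0),
                          mouth=(2.0, -3.0, 0.5),
                          nose=(-1.0, 2.0)):
    """Build the five feature points (two eye corners, two mouth
    corners, nose tip) from a reduced parameter set, assuming
    bilateral symmetry about the plane x = 0 in a head-local frame.

    eye   = (x, y, z): right eye corner; the left one is its mirror
    mouth = (x, y, z): right mouth corner; the left one is its mirror
    nose  = (y, z):    nose tip, constrained to lie on x = 0
    """
    ex, ey, ez = eye
    mx, my, mz = mouth
    ny, nz = nose
    return np.array([
        [ ex, ey, ez],   # right eye corner
        [-ex, ey, ez],   # left eye corner (mirror image)
        [ mx, my, mz],   # right mouth corner
        [-mx, my, mz],   # left mouth corner (mirror image)
        [0.0, ny, nz],   # nose tip on the symmetry plane
    ])

pts = symmetric_face_points()
# Unconstrained, 5 points would need 15 coordinates; with symmetry
# only 3 + 3 + 2 = 8 free parameters remain, so the same image
# measurements constrain fewer unknowns (more redundancy).
```

The hypothetical parameter values above are placeholders; only the symmetry structure (mirrored pairs, nose tip on the plane) matters for the counting argument.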