Image-based object recognition is one of the quintessential problems in computer vision, and human faces are arguably the most important class of objects to recognize. Despite extensive study and practice of face recognition over the past couple of decades, in this talk we contend that a critical piece of information has largely been overlooked, and that it holds the key to high-performance, robust face recognition.
That is, to a large extent, object recognition, and particularly face recognition under varying illumination, can be cast as a sparse representation problem. Based on L1-minimization, we propose an extremely simple but effective algorithm for face recognition that significantly advances the state of the art. Within this unified computational framework, we systematically address two fundamental issues in face recognition: the role of feature selection and robustness to occlusion.
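The idea above can be sketched in a few lines: represent a test image as a sparse linear combination of training images (stacked as dictionary columns) via L1-regularized regression, then assign it to the class whose columns best reconstruct it. This is only a minimal illustration, not the speakers' implementation; it uses a basic iterative soft-thresholding (ISTA) solver in place of a production L1 solver, and synthetic vectors in place of real face images.

```python
import numpy as np

def soft_threshold(v, t):
    """Elementwise shrinkage operator used by ISTA."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def sparse_code(A, y, lam=0.01, n_iter=500):
    """Approximately solve min_x 0.5*||A x - y||^2 + lam*||x||_1 via ISTA."""
    L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the smooth term's gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)
        x = soft_threshold(x - grad / L, lam / L)
    return x

def classify(A, labels, y, lam=0.01):
    """Assign y to the class whose training columns best reconstruct it."""
    x = sparse_code(A, y, lam)
    best_class, best_residual = None, np.inf
    for c in np.unique(labels):
        # Keep only the coefficients belonging to class c
        xc = np.where(labels == c, x, 0.0)
        residual = np.linalg.norm(y - A @ xc)
        if residual < best_residual:
            best_class, best_residual = c, residual
    return best_class

# Synthetic demo: two classes of 20-dimensional "images", 5 samples each
rng = np.random.default_rng(0)
u0, u1 = rng.normal(size=20), rng.normal(size=20)
A = np.stack(
    [u0 + 0.1 * rng.normal(size=20) for _ in range(5)]
    + [u1 + 0.1 * rng.normal(size=20) for _ in range(5)],
    axis=1,
)
A = A / np.linalg.norm(A, axis=0)  # normalize dictionary columns
labels = np.array([0] * 5 + [1] * 5)
y = A[:, 2] + 0.05 * rng.normal(size=20)  # noisy query from class 0
prediction = classify(A, labels, y)
```

Here `lam` and the iteration count are illustrative choices; the key point is that the sparsest representation naturally concentrates its nonzero coefficients on the correct class, so the per-class residual serves as the classifier.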
Some of the new results and findings can be rather surprising, and even go against conventional wisdom. For example, we will show that once sparsity is properly harnessed, the choice of features is no longer critical for recognition: severely down-sampled or randomly projected face images perform almost as well as conventional features such as Eigenfaces and Laplacianfaces. Furthermore, the performance of such a simple algorithm arguably surpasses the human ability to recognize severely down-sampled or occluded images.
This is joint work with John Wright at UIUC and Allen Yang at UC Berkeley.