We propose a new algorithm for low-resolution face recognition in cases where multiple images of the same face are available at matching time. Specifically, this covers multiple-frame (video) face recognition and recognition with multiple cameras.
Face recognition degrades when probe faces are of lower resolution than those available for training. Two paradigms exist to alleviate this problem, but both have clear disadvantages. The first uses super-resolution algorithms to enhance the image; however, as resolution decreases, super-resolution becomes more vulnerable to environmental variations and introduces distortions that affect recognition. The second matches in the low-resolution domain by downsampling the training set, but this is undesirable because features important for recognition depend on high-frequency details that downsampling erases.
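The match-in-the-low-resolution-domain paradigm can be illustrated with the observation model commonly assumed by super-resolution methods: a low-resolution probe is treated as a blurred and decimated version of a high-resolution image. The sketch below is a minimal, hypothetical illustration (the blur kernel, decimation factor, and nearest-neighbor matching are our simplifying assumptions, not the formulation proposed in this work):

```python
import numpy as np

def observe_low_res(x, factor=4):
    """Sketch of a standard super-resolution observation model:
    blur the high-resolution image, then decimate.
    Here a box blur + decimation is implemented as block averaging;
    `x` is a 2-D grayscale array whose sides are multiples of `factor`."""
    h, w = x.shape
    return x.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def match_low_res(probe_lr, gallery_hr, factor=4):
    """Match in the low-resolution domain: downsample the gallery with the
    same observation model, then pick the nearest gallery image.
    Nearest-neighbor Euclidean matching is an illustrative choice only."""
    gallery_lr = [observe_low_res(g, factor) for g in gallery_hr]
    dists = [np.linalg.norm(probe_lr - g_lr) for g_lr in gallery_lr]
    return int(np.argmin(dists))
```

Block averaging discards everything above the decimated Nyquist rate, which is precisely why this paradigm loses the high-frequency details that recognition relies on.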
We recently proposed a recognition framework that departs from these two paradigms, and we have shown that it considerably improves still-image face recognition. In this work, we show that the proposed method generalizes to a stream of frames and yields even better recognition performance. The new algorithm simultaneously incorporates the underlying assumptions of super-resolution methods and the subspace distance metrics used for classification. We extend our previous formulation to use multiple frames, and we show that it also generalizes to multiple image formation processes, modeling different cameras.
In this manuscript we consider two scenarios: face recognition with multiple frames, and a camera sensor network with dual cameras in a master-slave setting. Using the Multi-PIE database, our results show an increase in rank-1 recognition accuracy of about 6% in both scenarios compared to the single-frame setting.