25 August 2004 Robust shift-invariant biometric identification from partial face images
Face identification must address many practical challenges, including illumination variations not seen during the training phase, facial expressions, and pose variations. In most face recognition systems, recognition is performed after a face has been located and segmented in the input scene. This detection and segmentation process, however, is prone to errors that can pass partial faces (sometimes also due to occlusion) to the recognition stage. In other cases the segmented region includes parts of the scene background along with the face, degrading recognition performance. In this paper, we show how these issues can be handled efficiently with advanced correlation filter designs. We report an extensive set of results on the CMU Pose, Illumination, and Expression (PIE) dataset, in which training filters are designed in two experiments: (1) the training gallery contains 3 images under extreme illumination; (2) the training gallery contains 3 images under near-frontal illumination. In the testing phase, we evaluate both filters across the full range of illumination variations while simultaneously cropping the test images to various sizes. The results show that the advanced correlation filter designs perform very well even on partial face images under unseen illumination variations, including reduced-complexity designs such as the Quad-Phase Minimum Average Correlation Energy (QP-MACE) filter, which requires only 2 bits/frequency of storage.
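To make the filter design concrete, the sketch below is a minimal NumPy illustration of the classical MACE formulation, h = D⁻¹X(XᴴD⁻¹X)⁻¹u, and of quad-phase quantization (keeping only the signs of the real and imaginary parts, i.e. 2 bits per frequency). It uses random arrays as stand-ins for face crops and is an assumption-laden toy, not the authors' implementation; the function names and the toy demo are hypothetical.

```python
import numpy as np

def mace_filter(train_imgs):
    """MACE filter in the frequency domain: h = D^-1 X (X^H D^-1 X)^-1 u,
    where D is the (diagonal) average training power spectrum and u = 1
    constrains each training image's correlation output at the origin."""
    X = np.stack([np.fft.fft2(im).ravel() for im in train_imgs], axis=1)  # d x N
    D = np.mean(np.abs(X) ** 2, axis=1)        # diagonal of avg power spectrum
    Dinv_X = X / D[:, None]
    u = np.ones(X.shape[1])
    h = Dinv_X @ np.linalg.solve(X.conj().T @ Dinv_X, u)
    return h.reshape(train_imgs[0].shape)      # frequency-domain filter

def quad_phase(H):
    """QP-MACE-style quantization: keep only the signs of the real and
    imaginary parts of each frequency coefficient (2 bits/frequency)."""
    return np.sign(H.real) + 1j * np.sign(H.imag)

def correlate(H, img):
    """Full circular cross-correlation plane; shifting img shifts the peak,
    which is what makes the identification shift-invariant."""
    return np.fft.ifft2(np.conj(H) * np.fft.fft2(img)).real * img.size

# Toy demo: random "images" standing in for segmented face crops.
rng = np.random.default_rng(0)
train = [rng.standard_normal((16, 16)) for _ in range(3)]
H = mace_filter(train)

c = correlate(H, train[0])                       # authentic: peak of ~1 at origin
c_shift = correlate(H, np.roll(train[0], (3, 5), axis=(0, 1)))
peak = np.unravel_index(np.argmax(c_shift), c_shift.shape)   # peak moves to (3, 5)
c_qp = correlate(quad_phase(H), train[0])        # quantized filter, same peak location
```

The demo illustrates the two properties the abstract relies on: the correlation peak tracks translations of the input (so imperfect segmentation shifts, rather than destroys, the response), and the heavily quantized quad-phase filter still produces a sharp peak for an authentic image.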
© (2004) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
Marios Savvides, B. V. K. Vijaya Kumar, and Pradeep K. Khosla, "Robust shift-invariant biometric identification from partial face images", Proc. SPIE 5404, Biometric Technology for Human Identification (25 August 2004).
