Hybrid generative–discriminative approach to age-invariant face recognition
Age-invariant face recognition remains a challenging research problem because the complex aging process involves several types of facial tissue: skin, fat, muscle, and bone. Most related studies address the aging problem with either a generative representation (aging simulation) or a discriminative representation (feature-based approaches). Designing an appropriate hybrid approach that accounts for both generative and discriminative representations remains an open problem. We perform hybrid matching to achieve robustness to aging variations. The approach automatically segments the eyes, nose-bridge, and mouth regions, which are relatively less sensitive to aging than the remaining, age-sensitive facial regions. Aging variations in the age-sensitive facial parts are compensated using a demographic-aware generative model based on a bridged denoising autoencoder. The age-insensitive facial parts are represented by pixel-average-vector-based local binary patterns. Deep convolutional neural networks then extract features from both the age-sensitive and age-insensitive facial parts. Finally, the feature vectors of the age-sensitive and age-insensitive parts are fused to produce the recognition result. Extensive experiments on the morphological face database II (MORPH II), the face and gesture recognition network (FG-NET) database, and the verification subset of the cross-age celebrity dataset (CACD-VS) demonstrate the effectiveness of the proposed method for age-invariant face recognition.
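For readers unfamiliar with the texture descriptor named in the abstract, the sketch below computes a plain 8-neighbor local binary pattern (LBP) histogram for a grayscale patch in NumPy. This is only an illustration of the descriptor family, not the authors' implementation: the paper's pixel-average-vector variant first averages pixel vectors before the LBP step, which is not reproduced here, and the function name and patch size are hypothetical.

```python
import numpy as np

def lbp_histogram(patch):
    """Illustrative 8-neighbor LBP histogram for a grayscale patch.

    NOTE: a hypothetical sketch of the LBP descriptor family; the paper's
    pixel-average-vector-based variant is not reproduced here.
    """
    p = patch.astype(np.int32)
    c = p[1:-1, 1:-1]  # center pixels (border excluded)
    # 8 neighbor offsets, clockwise from top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(offsets):
        # shifted view aligned with the center pixels
        n = p[1 + dy:p.shape[0] - 1 + dy, 1 + dx:p.shape[1] - 1 + dx]
        # set this bit wherever the neighbor is >= the center
        code |= (n >= c).astype(np.int32) << bit
    # normalized 256-bin histogram of the 8-bit codes
    hist = np.bincount(code.ravel(), minlength=256).astype(np.float64)
    return hist / hist.sum()

rng = np.random.default_rng(0)
h = lbp_histogram(rng.integers(0, 256, size=(32, 32)))
print(h.shape)
```

In a pipeline like the one described, such a histogram would be computed per age-insensitive region (eyes, nose bridge, mouth) and the resulting vectors concatenated before the fusion stage.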
© 2018 SPIE and IS&T
Muhammad Sajid and Tamoor Shafique, "Hybrid generative–discriminative approach to age-invariant face recognition," Journal of Electronic Imaging 27(2), 023029 (23 April 2018). https://doi.org/10.1117/1.JEI.27.2.023029
Received: 5 December 2017; Accepted: 21 March 2018; Published: 23 April 2018
