Deep features for efficient multi-biometric recognition with face and ear images
21 July 2017
Proceedings Volume 10420, Ninth International Conference on Digital Image Processing (ICDIP 2017); 104200D (2017) https://doi.org/10.1117/12.2281694
Event: Ninth International Conference on Digital Image Processing (ICDIP 2017), 2017, Hong Kong, China
Abstract
Recently, multimodal biometric systems have received considerable research interest in many applications, especially in the security field. Multimodal systems can increase resistance to spoofing attacks, provide more detail and flexibility, and lead to better performance and lower error rates. In this paper, we present a multimodal biometric system based on face and ear, and show how to exploit deep features extracted by Convolutional Neural Networks (CNNs) from face and ear images to obtain more discriminative features and a more robust representation. First, deep features for the face and ear images are extracted with the VGG-M network. Second, the extracted deep features are fused using both traditional concatenation and the Discriminant Correlation Analysis (DCA) algorithm. Third, a multiclass support vector machine (SVM) is adopted for matching and classification. The experimental results show that the proposed multimodal system based on deep features is efficient and achieves a recognition rate of up to 100% using face and ear images. In addition, the results indicate that DCA-based fusion is superior to traditional concatenation-based fusion.
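The pipeline described in the abstract (CNN feature extraction, feature-level fusion, then classification) can be sketched as follows. This is a minimal illustration, not the authors' implementation: random vectors stand in for the VGG-M deep features of face and ear crops, fusion is the paper's baseline serial concatenation (not DCA), and a simple nearest-centroid matcher stands in for the multiclass SVM classifier.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for VGG-M deep features of face and ear images.
# In the actual system these would be activations from a pretrained CNN.
n_subjects, n_per_subject, dim = 5, 4, 64
labels = np.repeat(np.arange(n_subjects), n_per_subject)
# Add a per-subject offset so the toy classes are separable.
face_feat = rng.normal(size=(n_subjects * n_per_subject, dim)) + labels[:, None]
ear_feat = rng.normal(size=(n_subjects * n_per_subject, dim)) + labels[:, None]

# Feature-level fusion by serial concatenation (the paper's baseline;
# the paper's stronger alternative is DCA, not shown here).
fused = np.concatenate([face_feat, ear_feat], axis=1)  # shape (20, 128)

# Nearest-centroid matcher as a stand-in for the multiclass SVM.
centroids = np.stack([fused[labels == c].mean(axis=0)
                      for c in range(n_subjects)])

def match(x):
    """Return the subject whose fused-feature centroid is closest."""
    return int(np.argmin(np.linalg.norm(centroids - x, axis=1)))

pred = np.array([match(x) for x in fused])
acc = (pred == labels).mean()
print(fused.shape, acc)
```

The key point of the fusion step is that each probe is represented by a single joint vector, so one classifier sees evidence from both modalities at once; DCA additionally transforms the two feature sets to maximize their pairwise correlation across modalities while separating classes, before concatenation.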
© (2017) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Ibrahim Omara, Gang Xiao, Moussa Amrani, Zifei Yan, Wangmeng Zuo, "Deep features for efficient multi-biometric recognition with face and ear images", Proc. SPIE 10420, Ninth International Conference on Digital Image Processing (ICDIP 2017), 104200D (21 July 2017); https://doi.org/10.1117/12.2281694
Proceedings paper, 6 pages