TDSIFT: a new descriptor for 2D and 3D ear recognition
8 February 2017
Proceedings Volume 10225, Eighth International Conference on Graphic and Image Processing (ICGIP 2016); 102250C (2017) https://doi.org/10.1117/12.2266727
Event: Eighth International Conference on Graphic and Image Processing, 2016, Tokyo, Japan
Abstract
The descriptor is key to any image-based recognition algorithm. For ear recognition, conventional descriptors are based on either 2D or 3D data. 2D images provide rich texture information, while the human ear is a 3D surface that offers shape information. Moreover, 2D data is more robust against occlusion, whereas 3D data is more robust against illumination and pose variation. In this paper, we introduce a novel Texture and Depth Scale Invariant Feature Transform (TDSIFT) descriptor that encodes both 2D and 3D local features for ear recognition. Compared to the original Scale Invariant Feature Transform (SIFT) descriptor, the proposed TDSIFT shows its superiority by fusing 2D and 3D local information. First, keypoints are detected and described on texture images. Then, 3D information from the corresponding locations on the depth images is added to form the TDSIFT descriptor. Finally, a local-feature-based classification algorithm identifies ear samples by their TDSIFT descriptors. Experimental results on a benchmark dataset demonstrate the feasibility and effectiveness of the proposed descriptor: the rank-1 recognition rate on a gallery of 415 persons is 95.9%, and the computation time is competitive with state-of-the-art methods.
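The fusion idea in the abstract can be sketched in a few lines. The code below is a minimal illustration, not the authors' implementation: it builds a simplified SIFT-style gradient-orientation histogram (no scale space, no rotation normalization, a 2×2 cell grid instead of SIFT's 4×4) for a texture patch and a depth patch around the same keypoint, then concatenates the two halves into one fused vector. All function names and parameters here are hypothetical.

```python
import numpy as np

def grad_hist(patch, cells=2, bins=8):
    """Simplified SIFT-style descriptor for one patch: magnitude-weighted
    gradient-orientation histograms over a cells x cells grid, L2-normalized.
    (Illustration only -- real SIFT uses scale space and a 4x4 grid.)"""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ori = np.arctan2(gy, gx) % (2 * np.pi)
    h, w = patch.shape
    desc = []
    for i in range(cells):
        for j in range(cells):
            sl = (slice(i * h // cells, (i + 1) * h // cells),
                  slice(j * w // cells, (j + 1) * w // cells))
            hist, _ = np.histogram(ori[sl], bins=bins,
                                   range=(0, 2 * np.pi), weights=mag[sl])
            desc.append(hist)
    desc = np.concatenate(desc)
    n = np.linalg.norm(desc)
    return desc / n if n > 0 else desc

def fused_descriptor(texture_patch, depth_patch):
    """Hypothetical TDSIFT-like fusion: concatenate the 2D texture
    histogram with the 3D depth histogram at the same keypoint."""
    return np.concatenate([grad_hist(texture_patch), grad_hist(depth_patch)])

# Toy patches standing in for the neighborhoods of one keypoint.
rng = np.random.default_rng(0)
tex = rng.random((16, 16))   # texture (2D) patch
dep = rng.random((16, 16))   # registered depth (3D) patch
d = fused_descriptor(tex, dep)
print(d.shape)  # (64,)
```

With 2×2 cells and 8 orientation bins, each modality contributes a 32-dimensional half, so the fused vector has 64 dimensions; matching such descriptors (e.g. by nearest-neighbor ratio test) would then proceed exactly as with plain SIFT.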
© (2017) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Long Chen, Zhichun Mu, Bingfei Nan, Yi Zhang, Ruyin Yang, "TDSIFT: a new descriptor for 2D and 3D ear recognition", Proc. SPIE 10225, Eighth International Conference on Graphic and Image Processing (ICGIP 2016), 102250C (8 February 2017); https://doi.org/10.1117/12.2266727