Paper
31 May 2013 Gaze estimation for off-angle iris recognition based on the biometric eye model
Abstract
Iris recognition is among the most accurate biometrics. However, its accuracy relies on controlled, high-quality capture data and is negatively affected by several factors such as angle, occlusion, and dilation. Non-ideal iris recognition is a new research focus in biometrics. In this paper, we present a gaze estimation method designed for use in an off-angle iris recognition framework based on the ORNL biometric eye model. Gaze estimation is an important prerequisite step to correct off-angle iris images. To achieve an accurate frontal reconstruction of an off-angle iris image, we first need to estimate the eye gaze direction from the elliptical features of the iris image. Typically, additional information such as well-controlled light sources, head-mounted equipment, and multiple cameras is not available. Our approach uses only the iris and pupil boundary segmentation, allowing it to be applied to all iris capture hardware. We compare the boundaries against a look-up table generated using our biologically inspired biometric eye model and find the closest feature point in the look-up table to estimate the gaze. Based on results from real images, the proposed method achieves accurate gaze estimation for our biometric eye model, with an average error of approximately 3.5 degrees over a 50-degree range.
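The look-up-table matching step described above can be sketched as a simple nearest-neighbor search. This is only an illustrative sketch, not the paper's method: the LUT here is filled with the naive projective approximation that the minor/major axis ratio of the projected iris boundary is roughly the cosine of the gaze angle, whereas the paper generates its LUT from the ORNL biometric eye model; the function names and the single-feature representation are hypothetical.

```python
import math

def build_lut(max_angle=50, step=1):
    """Hypothetical LUT: (gaze angle in degrees, expected iris-ellipse feature).

    Placeholder model: axis ratio ~ cos(gaze angle). The actual LUT in the
    paper is generated from the biometric eye model, not this approximation.
    """
    return [(a, math.cos(math.radians(a))) for a in range(0, max_angle + 1, step)]

def estimate_gaze(axis_ratio, lut):
    """Return the LUT gaze angle whose feature is closest to the observed ratio."""
    return min(lut, key=lambda entry: abs(entry[1] - axis_ratio))[0]

lut = build_lut()
# A segmented iris ellipse with minor/major axis ratio of 0.766
# corresponds to a gaze of about 40 degrees under this approximation.
print(estimate_gaze(0.766, lut))
```

In practice the feature point would be a vector of elliptical parameters from both the pupil and iris boundaries, and the nearest-neighbor distance would be taken over that full vector.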
© (2013) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Mahmut Karakaya, Del Barstow, Hector Santos-Villalobos, Joseph Thompson, David Bolme, and Christopher Boehnen "Gaze estimation for off-angle iris recognition based on the biometric eye model", Proc. SPIE 8712, Biometric and Surveillance Technology for Human and Activity Identification X, 87120F (31 May 2013); https://doi.org/10.1117/12.2018614
PROCEEDINGS
9 PAGES

