Fast and precise iris localization is vital for face recognition, eye tracking, and gaze estimation. Low-resolution images make precise iris localization difficult for traditional methods. In this paper, a fast and robust method to precisely detect the position and contour of the irises in low-resolution facial images is presented. A three-step coarse-to-fine strategy is employed. First, a gradient integral projection function is proposed to roughly detect the eye region, and the vertical integral projection function is adopted to select several possible vertical boundaries of the irises. Second, we propose a novel rectangular integro-variance operator to precisely locate both irises. Finally, the localization results are verified by two simple heuristic rules. A novel and more rigorous criterion is also proposed to evaluate the performance of the algorithm. Comparison experiments on images from the FERET and the Extended YaleB databases demonstrate that our method is more robust than traditional methods to scale variation, illumination changes, partial occlusion, and limited changes of head pose in low-resolution facial images.
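The integral projection idea behind the coarse eye-region step can be illustrated with a minimal sketch. This is not the paper's gradient integral projection function itself, only the underlying operations it builds on: a plain vertical integral projection (column sums) and a projection of the vertical gradient magnitude onto image rows, where high-contrast structures such as eyes and eyebrows produce peaks. The function names and the synthetic image are illustrative assumptions.

```python
import numpy as np

def vertical_ipf(gray):
    # Vertical integral projection: sum pixel intensities along each
    # column, yielding one value per image column.
    return gray.astype(float).sum(axis=0)

def gradient_row_projection(gray):
    # Project the absolute vertical intensity gradient onto the rows;
    # rows crossing strong horizontal edges (e.g. eye/eyebrow bands)
    # show up as peaks in this 1-D profile.
    gy = np.abs(np.diff(gray.astype(float), axis=0))
    return gy.sum(axis=1)

# Synthetic example: a bright 20x30 image with a dark horizontal band
# (rows 8-11) standing in for an eye region.
img = np.full((20, 30), 200.0)
img[8:12, :] = 50.0
profile = gradient_row_projection(img)
# The strongest gradient response appears at the band's upper edge.
```

In a real pipeline, peaks in such projection profiles narrow the search to a small eye region before the finer integro-variance localization is applied.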
The accuracy of eye gaze estimation using image information is affected by several factors, including image resolution, the anatomical structure of the eye, and posture changes. Irregular movements of the head and eye remain an active research challenge for this key technology. In this paper, we describe an effective way of estimating eye gaze from the elliptical features of one iris without an auxiliary light source, head-fixing equipment, or multiple cameras. First, we obtain a preliminary estimate of the gaze direction, and then we compute the vectors describing the translation and rotation of the eyeball by applying a central projection on the plane passing through the line of sight. This avoids the complex computations involved in previous methods. We also disambiguate the solution based on experimental findings. Second, error correction is performed by a back-propagation neural network trained on a collection of translation and rotation vector samples. Extensive experimental studies are conducted to assess the efficiency and robustness of our method. Results reveal that our method outperforms a representative previous method.
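The geometric intuition behind gaze estimation from elliptical iris features can be sketched briefly. This is a simplified stand-in, not the paper's central projection method: it uses only the well-known fact that a circular iris viewed off-axis projects to an ellipse whose minor-to-major axis ratio approximates the cosine of the angle between the camera's optical axis and the line of sight. The function name and inputs are illustrative assumptions.

```python
import numpy as np

def gaze_angle_from_ellipse(major_axis, minor_axis):
    # A circular iris imaged at angle theta to the camera appears as
    # an ellipse with minor/major axis ratio ~ cos(theta). Clip to
    # [0, 1] to guard against measurement noise before arccos.
    ratio = np.clip(minor_axis / major_axis, 0.0, 1.0)
    return np.degrees(np.arccos(ratio))

# A frontal iris (circle) gives 0 degrees; a 2:1 ellipse gives 60.
```

This simple relation is ambiguous about the sign of the rotation, which is one reason the abstract's method disambiguates the solution experimentally and refines the estimate with a trained neural network.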