Gaze estimation using a hybrid appearance and motion descriptor
4 March 2015
Proceedings Volume 9443, Sixth International Conference on Graphic and Image Processing (ICGIP 2014); 944320 (2015) https://doi.org/10.1117/12.2178824
Event: Sixth International Conference on Graphic and Image Processing (ICGIP 2014), 2014, Beijing, China
Abstract
Realizing a robust, low-cost gaze estimation system remains a challenging problem. Existing appearance-based and feature-based methods have both made impressive progress in recent years, but further improvement is limited by their feature representations. In this paper, we therefore propose a novel descriptor combining eye appearance and pupil center-cornea reflection (PCCR) features. The hybrid gaze descriptor represents eye structure at both the feature level and the topology level. At the feature level, a glints-centered appearance descriptor captures the intensity and contour information of the eye, and a polynomial representation of the normalized PCCR vector captures the motion information of the eyeball. At the topology level, partial least squares is applied for feature fusion and selection. Finally, sparse-representation-based regression maps the descriptor to the point-of-gaze (PoG). Experimental results show that the proposed method achieves high accuracy and good tolerance to head movements.
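The motion part of the pipeline can be illustrated with a short sketch: the PCCR vector points from the corneal-reflection (glint) centroid to the pupil center, is normalized by the inter-glint distance for distance invariance, and is then expanded into polynomial terms. The function name, the two-glint normalization scheme, and the polynomial degree below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def pccr_descriptor(pupil, glint1, glint2, degree=2):
    """Hypothetical sketch of a normalized-PCCR motion descriptor.

    All names and the normalization scheme are assumptions for
    illustration; the paper's formulation may differ in detail.
    """
    pupil, glint1, glint2 = map(np.asarray, (pupil, glint1, glint2))
    center = (glint1 + glint2) / 2.0          # glint centroid
    scale = np.linalg.norm(glint2 - glint1)   # inter-glint distance
    vx, vy = (pupil - center) / scale         # normalized PCCR vector
    # polynomial terms vx^i * vy^j with i + j <= degree
    feats = [vx**i * vy**j
             for i in range(degree + 1)
             for j in range(degree + 1 - i)]
    return np.array(feats)

# Pupil and glint centers in normalized image coordinates (made-up values).
d = pccr_descriptor([0.52, 0.40], [0.45, 0.45], [0.60, 0.44])
# degree 2 yields 6 terms: 1, vy, vy^2, vx, vx*vy, vx^2
```

In the full method, this polynomial motion vector would be concatenated with the glints-centered appearance features before the partial-least-squares fusion step.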
© (2015) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Chunshui Xiong, Lei Huang, Changping Liu, "Gaze estimation using a hybrid appearance and motion descriptor", Proc. SPIE 9443, Sixth International Conference on Graphic and Image Processing (ICGIP 2014), 944320 (4 March 2015); https://doi.org/10.1117/12.2178824
Proceedings paper, 11 pages