Visual attention in egocentric field-of-view using RGB-D data
Published: 17 March 2017
Proceedings Volume 10341, Ninth International Conference on Machine Vision (ICMV 2016); 103410T (2017) https://doi.org/10.1117/12.2268617
Event: Ninth International Conference on Machine Vision, 2016, Nice, France
Most existing solutions for predicting visual attention focus solely on 2D images and disregard depth information. This has always been a weak point, since depth is an inseparable part of biological vision. This paper presents a novel method of saliency map generation based on the results of our experiments with egocentric visual attention and an investigation of its correlation with perceived depth. We propose a model that predicts attention using a superpixel representation, under the assumption that salient objects usually contrast with their surroundings and have a sparser spatial distribution of superpixels than their background. To incorporate depth information into this model, we propose three different depth techniques. The evaluation is performed on our new RGB-D dataset, captured with SMI eye-tracking glasses and a Kinect v2 device.
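The abstract's core idea — score regions by color contrast against the rest of the scene and modulate the score with depth — can be illustrated with a toy sketch. Square blocks stand in for superpixels, and the function name, parameters, and near-is-salient depth weighting are illustrative assumptions, not the paper's actual method:

```python
import numpy as np

def block_saliency(rgb, depth=None, block=8, depth_weight=0.5):
    """Toy contrast-based saliency sketch (illustrative, not the paper's model).

    Partitions the image into square blocks as a stand-in for superpixels,
    scores each block by the mean color distance to all other blocks, and
    optionally boosts nearer regions when a depth map is given.
    """
    h, w, _ = rgb.shape
    gh, gw = h // block, w // block
    # Mean color of each block (gh x gw grid of RGB means).
    means = rgb[:gh * block, :gw * block] \
        .reshape(gh, block, gw, block, 3).mean(axis=(1, 3))
    flat = means.reshape(-1, 3)
    # Global contrast: average color distance from each block to every other.
    dist = np.linalg.norm(flat[:, None, :] - flat[None, :, :], axis=-1)
    sal = dist.mean(axis=1).reshape(gh, gw)
    if depth is not None:
        # Assumed depth cue: nearer blocks (smaller depth) get higher weight.
        dmean = depth[:gh * block, :gw * block] \
            .reshape(gh, block, gw, block).mean(axis=(1, 3))
        near = 1.0 - (dmean - dmean.min()) / (np.ptp(dmean) + 1e-9)
        sal = (1 - depth_weight) * sal + depth_weight * sal * near
    # Normalize the map to [0, 1].
    return (sal - sal.min()) / (np.ptp(sal) + 1e-9)
```

A bright patch on a dark background receives the highest score, since its mean color is far from every other block's; a real implementation would replace the block grid with an actual superpixel segmentation (e.g. SLIC) and one of the paper's three depth techniques.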
© (2017) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Veronika Olesova, Wanda Benesova, Patrik Polatsek, "Visual attention in egocentric field-of-view using RGB-D data", Proc. SPIE 10341, Ninth International Conference on Machine Vision (ICMV 2016), 103410T (17 March 2017); https://doi.org/10.1117/12.2268617