Paper | 2 February 2011
Learned saliency transformations for gaze guidance
Eleonora Vig, Michael Dorr, Erhardt Barth
Proceedings Volume 7865, Human Vision and Electronic Imaging XVI; 78650W (2011) https://doi.org/10.1117/12.876377
Event: IS&T/SPIE Electronic Imaging, 2011, San Francisco Airport, California, United States
Abstract
The saliency of an image or video region indicates how likely it is that a viewer will fixate that region due to its conspicuity. An intriguing question is how the video region can be changed to make it more or less salient. Here, we address this problem with a machine learning framework that learns, from a large set of eye movements collected on real-world dynamic scenes, how to locally alter the saliency of the video. We derive saliency transformation rules by applying local contrast manipulations to the region of interest on the levels of a spatio-temporal Laplacian pyramid. Our goal is to improve visual communication by designing gaze-contingent interactive displays that change, in real time, the saliency distribution of the scene.
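The sketch below illustrates, in simplified form, the kind of local contrast manipulation described in the abstract: band-pass coefficients of a Laplacian pyramid are scaled inside a soft spatial window to raise or lower the contrast, and hence the conspicuity, of a region. It is an illustration of the idea only, not the authors' implementation: it uses a purely spatial per-frame pyramid rather than the paper's spatio-temporal pyramid, a fixed gain rather than learned transformation rules, and the function names (laplacian_levels, modulate_region) are hypothetical.

# Minimal sketch (assumptions noted above): local contrast scaling on one
# Laplacian-pyramid level of a single grayscale frame.
import numpy as np
from scipy.ndimage import gaussian_filter


def laplacian_levels(frame, n_levels=4, sigma=1.5):
    """Band-pass (Laplacian-like) decomposition of one grayscale frame."""
    levels, current = [], frame.astype(np.float64)
    for _ in range(n_levels):
        low = gaussian_filter(current, sigma)
        levels.append(current - low)   # band-pass residual at this level
        current = low
    levels.append(current)             # final low-pass residual
    return levels                       # summing all levels recovers the frame


def modulate_region(frame, center, radius, gain=0.5, level=1):
    """Scale one band-pass level inside a soft circular window and reconstruct."""
    h, w = frame.shape
    yy, xx = np.mgrid[0:h, 0:w]
    dist2 = (yy - center[0]) ** 2 + (xx - center[1]) ** 2
    mask = np.exp(-dist2 / (2.0 * radius ** 2))        # soft spatial window in [0, 1]

    levels = laplacian_levels(frame)
    levels[level] = levels[level] * (1.0 + (gain - 1.0) * mask)
    return sum(levels)                                  # reconstruction = sum of levels


if __name__ == "__main__":
    frame = np.random.rand(240, 320)                    # stand-in for a video frame
    out = modulate_region(frame, center=(120, 160), radius=30, gain=0.3)
    print(out.shape, float(np.abs(out - frame).max()))

A gain below 1 attenuates local band-pass contrast (making the region less conspicuous), while a gain above 1 amplifies it; in the paper the appropriate manipulation is learned from eye-movement data rather than fixed by hand.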
© (2011) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Eleonora Vig, Michael Dorr, and Erhardt Barth "Learned saliency transformations for gaze guidance", Proc. SPIE 7865, Human Vision and Electronic Imaging XVI, 78650W (2 February 2011); https://doi.org/10.1117/12.876377
CITATIONS
Cited by 5 scholarly publications.
KEYWORDS: Video, Eye, Visualization, Eye models, Feature extraction, Machine learning, Data modeling