Learned saliency transformations for gaze guidance
2 February 2011
Abstract
The saliency of an image or video region indicates how likely a viewer is to fixate that region because of its conspicuity. An intriguing question is how a video region can be changed to make it more or less salient. Here, we address this problem with a machine learning framework that learns, from a large set of eye movements collected on real-world dynamic scenes, how to alter the saliency level of a video locally. We derive saliency transformation rules by performing spatio-temporal contrast manipulations (on a spatio-temporal Laplacian pyramid) on the particular video region. Our goal is to improve visual communication by designing gaze-contingent interactive displays that change, in real time, the saliency distribution of the scene.
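The core operation the abstract describes is a local contrast manipulation on a band of a Laplacian pyramid. The sketch below illustrates the spatial case only (the paper uses a spatio-temporal pyramid over video, and the gains are learned rather than hand-set); all function names, the 5-tap binomial filter, and the fixed gain are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def _blur(img):
    # Separable 5-tap binomial low-pass filter [1, 4, 6, 4, 1] / 16.
    k = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0
    out = np.apply_along_axis(np.convolve, 0, img, k, mode="same")
    return np.apply_along_axis(np.convolve, 1, out, k, mode="same")

def _reduce(img):
    # Blur, then subsample by two in each dimension.
    return _blur(img)[::2, ::2]

def _expand(img, shape):
    # Upsample to `shape` by zero insertion, then blur (factor 4 restores DC gain).
    up = np.zeros(shape)
    up[::2, ::2] = img
    return 4.0 * _blur(up)

def build_laplacian_pyramid(img, levels):
    # Band-pass levels plus a final low-pass residual; exactly invertible
    # because each band is defined relative to _expand(_reduce(...)).
    pyr, cur = [], img.astype(float)
    for _ in range(levels):
        low = _reduce(cur)
        pyr.append(cur - _expand(low, cur.shape))
        cur = low
    pyr.append(cur)
    return pyr

def reconstruct(pyr):
    # Exact inverse of build_laplacian_pyramid.
    cur = pyr[-1]
    for band in reversed(pyr[:-1]):
        cur = _expand(cur, band.shape) + band
    return cur

def scale_band_contrast(img, levels, band, gain, mask):
    # Multiply one band-pass level by `gain` inside `mask`, then rebuild.
    # A gain > 1 raises local contrast (more salient); < 1 lowers it.
    pyr = build_laplacian_pyramid(img, levels)
    m = mask[::2 ** band, ::2 ** band].astype(float)  # mask at the band's resolution
    pyr[band] = pyr[band] * (1.0 + (gain - 1.0) * m)
    return reconstruct(pyr)
```

Because the pyramid is a linear decomposition, scaling one band inside the mask changes the image only there; pixels outside the mask are reconstructed unchanged. A real-time gaze-contingent display would apply such a manipulation per frame, with the gain chosen by the learned transformation rules.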
© (2011) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Eleonora Vig, Michael Dorr, and Erhardt Barth "Learned saliency transformations for gaze guidance", Proc. SPIE 7865, Human Vision and Electronic Imaging XVI, 78650W (2 February 2011); https://doi.org/10.1117/12.876377
Proceedings paper, 11 pages.

