Compared to the good performance achieved by many 2D visual attention models, predicting the salient regions of a 3D scene remains challenging. An efficient way to do so is to exploit existing models designed for 2D content. However, the visual conflicts caused by binocular disparity, and the changes of viewing behavior in 3D viewing, need to be dealt with. To cope with these, the present paper proposes a simple framework for extending
2D attention models to 3D images, as well as an evaluation of center-bias under 3D-viewing conditions. To validate the results, a database was created containing the eye movements of 35 subjects recorded during free viewing of eighteen 3D images and their corresponding 2D versions. Fixation density maps indicate a weaker center-bias when viewing 3D images. Moreover, objective metric results demonstrate the efficiency of the proposed model and the large added value of taking center-bias into account in computational modeling of 3D visual attention.
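The role of a center-bias prior in such a framework can be illustrated with a minimal sketch (the grid size, Gaussian prior, and multiplicative fusion below are illustrative assumptions, not the paper's actual framework):

```python
import math

def center_bias_prior(h, w, sigma=0.3):
    """Gaussian center prior on an h x w grid; sigma is relative to the larger side."""
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    s = sigma * max(h, w)
    return [[math.exp(-((y - cy) ** 2 + (x - cx) ** 2) / (2.0 * s * s))
             for x in range(w)] for y in range(h)]

def apply_center_bias(saliency, sigma=0.3):
    """Modulate a 2D saliency map by the center prior, then rescale to peak 1."""
    h, w = len(saliency), len(saliency[0])
    prior = center_bias_prior(h, w, sigma)
    biased = [[saliency[y][x] * prior[y][x] for x in range(w)] for y in range(h)]
    peak = max(max(row) for row in biased)
    return [[v / peak for v in row] for row in biased]
```

With a uniform input map, the biased map reproduces the center prior, so predicted salience peaks at the image center; a weaker center-bias, as observed here for 3D viewing, would correspond to a larger sigma.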
This paper studies the influence of blur, a monocular depth cue, on the apparent depth of stereoscopic scenes.
When 3D images are shown on a planar stereoscopic display, binocular disparity becomes a pre-eminent depth cue. But
it simultaneously induces a conflict between accommodation and vergence, which is often considered a main cause
of visual discomfort. If we limit this visual discomfort by decreasing the disparity, the apparent depth also decreases.
We propose to decrease the (binocular) disparity of 3D presentations and to reinforce (monocular) cues to compensate for
the loss of perceived depth, keeping the apparent depth unaltered. We conducted a subjective experiment using a
two-alternative forced-choice task. Observers were required to identify the larger perceived depth in a pair of 3D images
with/without blur. By fitting the results to a psychometric function, we obtained points of subjective equality in terms of
disparity. We found that when blur is added to the background of an image, viewers perceive larger depth
compared to images without any background blur. The increase in perceived depth can be considered a
function of the relative distance between the foreground and background, while it is insensitive to the distance between
the viewer and the depth plane at which the blur is added.
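The extraction of a point of subjective equality (PSE) from 2AFC data can be sketched as follows. This is a generic probit-regression fit of a cumulative-Gaussian psychometric function, not necessarily the procedure used in the paper, and the data at the bottom are illustrative, not the paper's results:

```python
from statistics import NormalDist

def fit_pse(disparities, p_larger):
    """Fit a cumulative-Gaussian psychometric function by probit regression.

    disparities: stimulus levels; p_larger: proportion of 'larger depth' responses.
    Returns the PSE, i.e. the disparity at which the fitted proportion is 0.5.
    """
    nd = NormalDist()
    # Clip proportions away from 0 and 1 so the probit transform stays finite.
    z = [nd.inv_cdf(min(max(p, 1e-3), 1 - 1e-3)) for p in p_larger]
    n = len(disparities)
    mx = sum(disparities) / n
    mz = sum(z) / n
    slope = (sum((x - mx) * (zi - mz) for x, zi in zip(disparities, z))
             / sum((x - mx) ** 2 for x in disparities))
    intercept = mz - slope * mx
    return -intercept / slope  # z = 0 corresponds to p = 0.5

# Illustrative data: proportion of trials on which the blurred image was
# judged deeper, at several disparity offsets of the reference image.
levels = [-10, -5, 0, 5, 10]
props = [0.1, 0.3, 0.5, 0.7, 0.9]
pse = fit_pse(levels, props)
```

Probit regression is a convenient closed-form choice here because symmetric response data map to a straight line in z-space, so the 50% point follows directly from the slope and intercept.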
This paper presents the results of two psychophysical experiments and an associated computational analysis
designed to quantify the relationship between visual salience and visual importance. In the first experiment,
importance maps were collected by asking human subjects to rate the relative visual importance of each object
within a database of hand-segmented images. In the second experiment, experimental saliency maps were
computed from visual gaze patterns measured for these same images using an eye tracker under task-free
viewing. By comparing the importance maps with the saliency maps, we found that the maps are related, but
perhaps less than one might expect. When coupled with the segmentation information, the saliency maps were
shown to be effective at predicting the main subjects. However, the saliency maps were less effective at predicting
the objects of secondary importance and the unimportant objects. We also found that the vast majority of early
gaze position samples (0-2000 ms) were made on the main subjects, suggesting that a possible strategy of early
visual coding might be to quickly locate the main subject(s) in the scene.
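A common way to quantify the agreement between an experimental saliency map and an importance map is a linear correlation over their values. The sketch below is a generic Pearson correlation on flattened maps; the comparison metric actually used in the paper may differ:

```python
import math

def pearson_cc(map_a, map_b):
    """Pearson linear correlation between two same-sized 2D maps."""
    a = [v for row in map_a for v in row]
    b = [v for row in map_b for v in row]
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)
```

A coefficient near 1 indicates closely related maps; the moderate correlation reported above would show up as a value well below 1 despite both maps highlighting the main subjects.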