While binocular viewing of 2D pictures generates an impression of 3D objects and space, viewing a picture monocularly through an aperture produces a more compelling impression of depth and the feeling that the objects are “out there”, almost touchable. Here, we asked observers to actually reach into pictorial space under both binocular and monocular-aperture viewing. Images of natural scenes were presented at different physical distances via a mirror system, and their retinal size was kept constant. Targets that observers had to reach for in physical space were marked on the image plane, but at different pictorial depths. We measured the 3D position of the index finger at the end of each reach-to-point movement.
Observers found the task intuitive. Reaching responses varied as a function of both pictorial depth and physical distance. Under binocular viewing, responses were mainly modulated by the different physical distances; under monocular viewing, by contrast, they were modulated by the different pictorial depths. Importantly, individual variations over time were minor, that is, observers conformed to a consistent pictorial space. Monocular viewing of 2D pictures thus produces a compelling experience of an immersive space and tangible solid objects that can be easily explored through motor actions.
We present a framework for the study of active vision, i.e., the functioning of the visual system during actively
self-generated body movements. In laboratory settings, human vision is usually studied with a static observer
looking at static or, at best, dynamic stimuli. In the real world, however, humans constantly move within dynamic
environments. The resulting visual inputs are thus an intertwined mixture of self- and externally-generated
movements. To bridge this gap between laboratory and real-world conditions, we developed a virtual environment integrated with a head-tracking system in which
the influence of self- and externally-generated movements can be manipulated independently. As a proof of
principle, we studied perceptual stationarity of the visual world during lateral translation or rotation of the head.
The movement of the visual stimulus was thus parametrically tethered to self-generated movements. We found
that estimates of object stationarity were less biased and more precise during head rotation than translation.
In both cases the visual stimulus had to partially follow the head movement to be perceived as immobile. We
discuss a range of possible applications of our setup, among them the study of shape perception in active and passive
conditions, where the same optic flow is replayed to stationary observers.
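As an illustration of what “parametrically tethered” means here, the sketch below (in Python) couples a simulated stimulus position to a tracked head position through a single gain parameter; the function names, gain values, and simulated head trajectory are our own illustrative assumptions, not the actual experimental code.

    # Illustrative sketch only (not the authors' implementation): a stimulus whose
    # motion is parametrically tethered to the tracked head position via a gain.
    #   gain = 0.0 -> stimulus is world-fixed (ignores the head movement)
    #   gain = 1.0 -> stimulus fully follows the head
    # Intermediate gains let one ask which coupling is perceived as "immobile".
    import math

    def tethered_stimulus_position(head_pos: float, gain: float, baseline: float = 0.0) -> float:
        """Stimulus position (e.g., lateral offset in cm) for a given head position and gain."""
        return baseline + gain * head_pos

    if __name__ == "__main__":
        # Simulated lateral head sway: a slow sinusoid in cm, sampled over 120 frames.
        head_trajectory = [5.0 * math.sin(2 * math.pi * t / 60) for t in range(120)]
        for gain in (0.0, 0.3, 1.0):
            positions = [tethered_stimulus_position(h, gain) for h in head_trajectory]
            print(f"gain={gain:.1f}: stimulus excursion {min(positions):+.2f} .. {max(positions):+.2f} cm")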
In this study we demonstrate that touch decreases the ambiguity in a visual image. It has been previously
found that visual perception of three-dimensional shape is subject to variations across observers. These variations can
be described by an affine transformation of the pictorial relief. While the visual system thus seems unable to capture the Euclidean
structure of a shape, touch could potentially be a useful source of information with which to disambiguate the image. Participants performed
a so-called 'attitude task' from which the structure of the perceived three-dimensional shape was calculated. One
group performed the task with only vision and a second group could touch the stimulus while viewing it. We found
that the consistency within the haptics+vision group was higher than in the vision-only group. Thus, haptics
decreases the visual ambiguity. Furthermore, we found that the touched shape was consistently perceived as
having more relief than the untouched shape. The direction of the affine shear was also more consistent within
the haptics+vision group than within the vision-only group. We thus show that haptics has a significant
influence on the perception of pictorial relief.
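For readers unfamiliar with the affine ambiguity referred to above, it is commonly written (our own gloss in LaTeX, not notation taken from the abstract) as a depth offset, a relief scaling, and depth shears applied to the pictorial relief z(x, y):

    % Gloss of the affine ambiguity of pictorial relief (illustrative notation):
    % two reliefs z(x,y) and z'(x,y) are affinely equivalent when
    \[
      z'(x, y) = a + b\, z(x, y) + c\, x + d\, y ,
    \]
    % where b scales the amount of relief and (c, d) is the affine shear whose
    % direction the abstract compares between the two groups.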