When an object is closer to an observer than the background, the small differences between right and left eye views are
interpreted by the human brain as depth. This basic ability of the human visual system, called stereopsis, lies at the core
of all binocular three-dimensional (3-D) perception and related technological display development. To achieve
stereopsis, it is traditionally assumed that corresponding locations in the right and left eye's views must first be matched,
then the relative differences between right and left eye locations are used to calculate depth. But this is not the whole
story. At every object-background boundary, there are regions of the background that only one eye can see because, in
the other eye's view, the foreground object occludes that region of background. Such monocular zones do not have a
corresponding match in the other eye's view and can thus cause problems for depth extraction algorithms. In this paper I
discuss evidence from human visual perception showing that monocular zones pose no problem for the human visual
system; rather, the visual system can extract depth from such zones. I review the relevant human perception literature
in this area, and present recent data aimed at quantifying the perception of depth
from monocular zones. The paper finishes with a discussion of the potential importance of monocular zones for stereo
display technology and depth compression algorithms.
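The matching-then-triangulation pipeline described above can be sketched in a few lines. This is an illustrative sketch, not an algorithm from the paper; the function name and parameter values are hypothetical:

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Classic stereo triangulation: depth = focal * baseline / disparity.

    Returns None where no binocular match exists -- exactly the situation
    in a monocular zone, where one eye's view of the background is occluded.
    """
    if disparity_px is None or disparity_px <= 0:
        return None  # monocular zone or degenerate match: depth undefined
    return focal_px * baseline_m / disparity_px


# A matched point: 1000 px focal length, 6.5 cm baseline, 65 px disparity
print(depth_from_disparity(65.0, 1000.0, 0.065))   # ~1.0 m
# An unmatched (monocular-zone) point yields no depth estimate
print(depth_from_disparity(None, 1000.0, 0.065))   # None
```

The `None` branch makes the abstract's point concrete: a purely match-based algorithm has nothing to say about monocular zones, whereas the human visual system evidently does.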
Stereoscopic depth is often included in the design of tele-operation or Virtual Reality (VR) systems, with the expectation that it will enhance a participant's feeling of presence in a scene and improve perceptual accuracy. Our aim here was to test the latter assertion: is human stereoscopic depth perception accurate? We examined how well humans can use stereoscopic information to perceive and respond to a simple object undergoing three-dimensional (3-D) motion. Observers viewed a scene containing a stationary reference point and a target point that moved towards them in depth along a range of trajectories, to the left or right of straight towards the nose. How good should performance be? Simple geometry shows that the average and the difference of the left and right eyes' projections can be used to estimate trajectory angle. How good is human performance? Across several different tasks, results suggested that although observers could distinguish between different trajectories precisely, their perception was highly inaccurate: angles were perceived as up to 3-5 times wider than physically specified. This suggests that stereoscopic depth does not provide accurate perception even in simple environments, and has implications for the design of 3-D Virtual Environments.
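The geometric claim in this abstract can be made concrete. Under a small-angle approximation, the average of the two eyes' image velocities signals lateral motion, their difference signals motion in depth (the rate of change of disparity), and their ratio, scaled by interocular separation over viewing distance, gives the tangent of the trajectory angle. A minimal sketch, assuming a midline target and an illustrative interocular distance (not the paper's stimulus values):

```python
import math

def trajectory_angle_deg(x, z, vx, vz, ipd=0.065):
    """Recover the azimuth of a 3-D trajectory from the two eyes' image velocities.

    x, z  : target position in metres (z = distance straight ahead)
    vx, vz: target velocity in m/s (vz < 0 means approaching the observer)
    ipd   : interocular distance in metres (illustrative assumption)

    Small-angle approximation: alpha_L ~ (x + ipd/2)/z, alpha_R ~ (x - ipd/2)/z.
    """
    # Angular velocity of the target's image in each eye (rad/s)
    dot_l = (vx * z - (x + ipd / 2.0) * vz) / z**2
    dot_r = (vx * z - (x - ipd / 2.0) * vz) / z**2
    avg = (dot_l + dot_r) / 2.0      # average: lateral-motion signal
    diff = dot_l - dot_r             # difference: rate of change of disparity
    # For a target near the midline: tan(beta) = (ipd / z) * avg / diff
    return math.degrees(math.atan((ipd / z) * avg / diff))

# Target 1 m ahead on the midline, approaching at 1 m/s while drifting
# 0.1 m/s rightward: the recovered angle matches the true atan(0.1) ~ 5.7 deg
print(trajectory_angle_deg(0.0, 1.0, 0.1, -1.0))
```

The sketch shows that the stimulus geometry fully specifies the trajectory angle, which is what makes the observed 3-5-fold perceptual overestimation striking.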
Does the addition of stereoscopic depth aid steering, the perceptual control of locomotor heading, around an environment? This is a critical question when designing a tele-operation or Virtual Environment system, with implications for computational resources and visual comfort. We examined the role of stereoscopic depth in the perceptual control of heading by employing an active steering task. Three conditions were tested: correct stereoscopic depth, incorrect stereoscopic depth, and no stereoscopic depth. Results suggest that stereoscopic depth does not improve performance in a visual control task. A further set of experiments examined the importance of a ground plane. Because a ground plane is a common feature of natural environments and provides a pictorial depth cue, it has been suggested that the visual system may be especially attuned to exploit its presence; a ground plane would therefore be predicted to aid judgments of locomotor heading. Results suggest instead that the presence of rich motion information in the lower visual field produces significant performance advantages, and that providing such information may prove a better target for system resources than stereoscopic depth. These findings have practical consequences for system designers and also challenge previous theoretical and psychophysical perceptual research.