The perception of space in the real world is based on multifaceted depth cues, most of them monocular, some binocular.
Developing 3D displays raises the question of which of these depth cues are predominant and should be simulated by
computational means in such a panel. Beyond the cues based on image content, such as shadows or patterns, stereopsis
and depth from motion parallax are the most significant mechanisms providing observers with depth information. We
set up a carefully designed test situation, largely excluding other, undesired distance cues. We then conducted a
user test to determine which of these two depth cues is more relevant and whether a combination of both would increase
accuracy in a depth estimation task. The trials were conducted using our autostereoscopic "Free2C" displays, which
are capable of detecting the user's eye position and dynamically steering the image lobes in that direction. At the same time,
the eye position was used to update the virtual camera's location, thereby offering motion parallax to the observer. As far
as we know, this was the first time that such a test had been conducted using an autostereoscopic display without any
assistive technologies. Our results showed, in accordance with prior experiments, that both cues are effective; however,
stereopsis is an order of magnitude more relevant. Combining both cues improved the precision of distance estimation
by a further 30-40%.