Visual search is a task that is carried out in a number of important security- and health-related scenarios (e.g., X-ray baggage screening, radiography). With recent and ongoing developments in the technology available to present images to observers in stereoscopic depth, there has been increasing interest in assessing whether depth information can be used in complex search tasks to improve search performance. Here we outline the methodology that we developed, along with both software and hardware information, to assess visual search performance in complex, overlapping stimuli that also contained depth information. In doing so, our goal is to foster further research along these lines in the future. We also provide an overview, with initial results, of the experiments that we have conducted in which participants searched stimuli containing overlapping objects presented on different depth planes to one another. Thus far, we have found that depth information does improve the speed (but not accuracy) of search, but only when the stimuli are highly complex and contain a significant degree of overlap. Depth information may therefore aid real-world search tasks that involve the examination of complex, overlapping stimuli.
Background: In recent years, 3D-enabled televisions, VR headsets and computer displays have become more readily available in the home. This presents an opportunity for game designers to explore new stereoscopic game mechanics and techniques that have previously been unavailable in monocular gaming.
Aims: To investigate the visual cues present in binocular and monocular vision, identifying which are relevant when gaming using a stereoscopic display, and to implement a game whose mechanics are so reliant on binocular cues that the game becomes impossible, or at least very difficult, to play in non-stereoscopic mode.
Method: A stereoscopic 3D game was developed whose objective was to shoot down advancing enemies (the Interlopers) before they reached their destination. Scoring highly required players to make accurate depth judgements and target the closest enemies first. A group of twenty participants played both a basic and an advanced version of the game in both monoscopic 2D and stereoscopic 3D.
Results: In both the basic and the advanced game, participants achieved higher scores when playing in stereoscopic 3D. The advanced game showed that disrupting the depth-from-motion cue made the game more difficult in monoscopic 2D. Results also show a certain amount of learning over the course of the experiment, with players scoring higher and finishing the game faster as the experiment progressed.
Conclusions: Although the game was not impossible to play in monoscopic 2D, participants' results show that it put them at a significant disadvantage compared to playing in stereoscopic 3D.
This special section was made possible by the enthusiasm of the research community in stereoscopic displays and applications, a level of interest that has now sustained the associated SPIE/IS&T Conference into its 25th year. This is a noteworthy achievement for such a young field in which there is now a depth and breadth of research that sustains a vibrant international group of researchers.
KEYWORDS: Visualization, 3D displays, Stereoscopic displays, Eye, 3D image processing, Sensors, Visual process modeling, Visual system, Information visualization, Displays
There has been much research concerning visual depth perception in 3D stereoscopic displays and, to a lesser extent, auditory depth perception in 3D spatial sound systems. With 3D sound systems now available in a number of different forms, there is increasing interest in the integration of 3D sound systems with 3D displays. It therefore seems timely to review key concepts and results concerning depth perception in such display systems. We first present overviews of both visual and auditory depth perception, before focussing on cross-modal effects in audio-visual depth perception, which may be of direct interest to display and content designers.
We report on a new game design where the goal is to make the stereoscopic depth cue sufficiently critical to success that game play should become impossible without using a stereoscopic 3D (S3D) display and, at the same time, we investigate whether S3D game play is affected by screen size.
Before we detail our new game design we review previously unreported results from our stereoscopic game research over the last ten years at the Durham Visualisation Laboratory. This demonstrates that game players can achieve significantly higher scores using S3D displays when depth judgements are an integral part of the game.
Method: We design a game where almost all depth cues, apart from the binocular cue, are removed. The aim of the game is to steer a spaceship through a series of oncoming hoops, viewed from above, with the hoops moving right to left across the screen towards the spaceship. To play the game it is essential to make decisive depth judgements to steer the spaceship through each oncoming hoop. To confound these judgements we alter other depth cues; for example, perspective is weakened as a cue by varying each hoop's depth, radius and cross-sectional size.
Results: Players were screened for stereoscopic vision, given a short practice session, and then played the game in both 2D and S3D modes on a seventeen-inch desktop display. On average, participants achieved a score more than three times higher in S3D than in 2D. The same experiment was repeated using a four-metre S3D projection screen and similar results were found.
Conclusions: Our conclusion is that games that use the binocular depth cue in decisive game judgements can benefit significantly from an S3D display. Based on both our current and previous results we additionally conclude that display size, from cell phone to desktop to projection display, does not adversely affect player performance.
The creation of binocular images for stereoscopic display has benefited from significant research and commercial
development in recent years. However, perhaps surprisingly, the effect of adding 3D sound to stereoscopic images
has rarely been studied. If auditory depth information can enhance or extend the visual depth experience it
could become an important way to extend the limited depth budget on all 3D displays and reduce the potential
for fatigue from excessive use of disparity.
Objective: As there is limited research in this area, our objective was to ask two preliminary questions. First,
what is the smallest difference in forward depth that can be reliably detected using 3D sound alone? Second,
does the addition of auditory depth information influence the visual perception of depth in a stereoscopic image?
Method: To investigate auditory depth cues we used a simple sound system to test the experimental hypothesis
that participants will perform better than chance at judging the depth difference between two speakers a set
distance apart. In our second experiment, investigating both auditory and visual depth cues, we set up a sound
system and a stereoscopic display to test the experimental hypothesis that participants judge a visual stimulus
to be closer if they hear a closer sound while viewing the stimulus.
Results: In the auditory depth cue trial, every depth difference tested gave significant results, demonstrating
that the human ear can detect depth differences between physical sources as small as 0.25 m at a distance of 1 m.
In the trial investigating whether audio information can influence the visual perception of depth, we found that
participants did report visually perceiving an object as closer when the sound was played closer to them, even
though the image depth remained unchanged.
Conclusion: The positive results in the two trials show that we can hear small differences in forward depth
between sound sources and suggest that it could be practical to extend the apparent depth in a stereoscopic
image by using 3D sound, providing a controlled way to compensate for the depth budget limits on 3D displays.
Context: Stereoscopic 3D movies are rapidly gaining commercial acceptance. In addition, our previous experience
with the short 3D movie "Cosmic Cookery" showed that there is great public interest in the presentation of
cosmology research using this medium.
Objective: The objective of the work reported in this paper was to create a three-dimensional stereoscopic
movie describing the life of the Milky Way galaxy. This was a technical and artistic exercise to take observed and
simulated data from leading scientists and produce a short (six minute) movie that describes how the Milky Way
was created and what happens in its future. The initial target audience was the visitors to the Royal Society's
2009 Summer Science Exhibition in central London, UK. The movie is also intended to become a presentation
tool for scientists and educators following the exhibition.
Apparatus: The presentation and playback systems consisted of off-the-shelf devices and software. The
display platform for the Royal Society presentation was a RealD LP Pro switch used with a DLP projector to
rear-project a 4-metre-diagonal image. The LP Pro enables the use of cheap disposable linearly polarising glasses,
so that the high turnover rate of the audience (every ten minutes at peak times) could be sustained without
needing delays to clean the glasses. The playback system was a high-speed PC with an external 8 TB RAID
driving the projectors at 30 Hz per eye; the Lightspeed DepthQ software was used to decode and generate the
video stream.
Results: A wide range of tools were used to render the image sequences, ranging from commercial to custom
software. Each tool was able to produce a stream of 1080p images in stereo at 30fps. None of the rendering
tools used allowed precise calibration of the stereo effect at render time, and therefore all sequences were tuned
extensively in a trial-and-error process until the stereo effect was acceptable and supported a comfortable viewing
experience.
Conclusion: We conclude that it is feasible to produce high-quality 3D movies using off-the-shelf equipment
if care is taken to control the stereoscopic quality throughout the production process.
KEYWORDS: Eye, Visualization, Light emitting diodes, Calibration, 3D acquisition, Information visualization, Visual system, 3D displays, Photography, Camera shutters
Humans actively explore their visual environment by moving their eyes. Precise coordination of the eyes during visual
scanning underlies the experience of a unified perceptual representation and is important for the perception of depth. We
report data from three psychological experiments investigating human binocular coordination during visual processing of
stereoscopic stimuli. In the first experiment, participants were required to read sentences that contained a
stereoscopically presented target word. Half of the word was presented exclusively to one eye and half exclusively to the
other eye. Eye movements were recorded and showed that saccadic targeting was uninfluenced by the stereoscopic
presentation, strongly suggesting that complementary retinal stimuli are perceived as a single, unified input prior to
saccade initiation. In a second eye movement experiment we presented words stereoscopically to measure Panum's
Fusional Area for linguistic stimuli. In the final experiment we compared binocular coordination during saccades
between simple dot stimuli under 2D, stereoscopic 3D and real 3D viewing conditions. Results showed that
depth-appropriate vergence movements were made during saccades and fixations to real 3D stimuli, but only
during fixations on stereoscopic 3D stimuli. 2D stimuli did not induce depth vergence movements. Together, these experiments indicate
that stereoscopic visual stimuli are fused when they fall within Panum's Fusional Area, and that saccade metrics are
computed on the basis of a unified percept. Also, there is sensitivity to non-foveal retinal disparity in real 3D stimuli,
but not in stereoscopic 3D stimuli, and the system responsible for binocular coordination responds to this during
saccades as well as fixations.
Context: The idea behind stereoscopic displays is to create the illusion of depth, a concept with
many practical applications. A common spatial ability test involves mental rotation. A mental
rotation task should therefore be easier if undertaken on a stereoscopic screen.
Aim: The aim of this project was to evaluate stereoscopic (3D) displays and to assess whether they
support better performance on a given task than a 2D display. A secondary aim was to perform a similar study
replicating the conditions of a stereoscopic mobile phone screen.
Method: We devised a spatial ability test involving a mental rotation task that participants were asked to
complete on either a 3D or a 2D screen. We also designed a similar task to simulate the experience on a
stereoscopic cell phone. The participants' error rates and response times were recorded. Using statistical
analysis, we then compared the error rates and response times of the groups to see if there were any
significant differences.
Results: We found that participants achieved better scores when performing the task on a stereoscopic screen
as opposed to a 2D screen. However, there was no statistically significant difference in the time it took them
to complete the task. We also found similar results for the 3D cell phone display condition.
Conclusions: The results show that the extra depth information given by a stereoscopic display makes it
easier to mentally rotate a shape, as depth cues are readily available. These results could have useful
implications for certain industries.
Existing stereoscopic imaging algorithms can create static stereoscopic images with perceived depth control
function to ensure a compelling 3D viewing experience without visual discomfort. However, current algorithms
do not normally support standard Cinematic Storytelling techniques. These techniques, such as object movement,
camera motion, and zooming, can result in dynamic scene depth change within and between a series of frames
(shots) in stereoscopic cinematography. In this study, we empirically evaluate the following three
stereoscopic imaging approaches that aim to address this problem.
(1) Real-Eye Configuration: set camera separation equal to the nominal human eye interpupillary distance.
The perceived depth on the display is identical to the scene depth without any distortion. (2) Mapping Algorithm:
map the scene depth to a predefined range on the display to avoid excessive perceived depth. A new method
that dynamically adjusts the depth mapping from scene space to display space is presented in addition to an
existing fixed depth mapping method. (3) Depth of Field Simulation: apply Depth of Field (DOF) blur effect
to stereoscopic images. Only objects that are inside the DOF are viewed in full sharpness. Objects that are far
away from the focus plane are blurred.
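The fixed depth mapping in approach (2) can be illustrated with a minimal sketch. The linear mapping, function name and numeric ranges below are our own illustrative assumptions, not the implementation evaluated in the study:

```python
def map_depth(z_scene, scene_near, scene_far, disp_min, disp_max):
    """Linearly map a scene depth value to a display disparity budget.

    A simplified stand-in for fixed depth mapping: the whole scene
    range [scene_near, scene_far] is compressed into the display's
    comfortable disparity range [disp_min, disp_max] (in pixels).
    """
    t = (z_scene - scene_near) / (scene_far - scene_near)
    return disp_min + t * (disp_max - disp_min)

# Illustrative values: a scene spanning 2..50 m squeezed into +/-20 px.
print(map_depth(2.0, 2.0, 50.0, -20.0, 20.0))   # nearest point -> -20.0
print(map_depth(50.0, 2.0, 50.0, -20.0, 20.0))  # farthest point -> 20.0
```

A dynamic variant of this idea would recompute `scene_near` and `scene_far` per shot rather than fixing them for the whole sequence.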
We performed a human-based trial using the ITU-R BT.500-11 Recommendation to compare the depth
quality of stereoscopic video sequences generated by the above-mentioned imaging methods. Our results indicate
that viewers' practical 3D viewing volumes differ between individual stereoscopic displays, and that viewers can
cope with a much larger perceived depth range when viewing stereoscopic cinematography than when viewing static
stereoscopic images. Our new dynamic depth mapping method does have an advantage over the fixed depth
mapping method in controlling stereo depth perception. The DOF blur effect does not provide the expected
improvement for perceived depth quality control in 3D cinematography. We anticipate the results will be of
particular interest to 3D filmmaking and real time computer games.
We are interested in metrics for automatically predicting the compression settings for stereoscopic images so that we can minimize file size while maintaining an acceptable level of image quality. Initially we investigate how Peak Signal to Noise Ratio (PSNR) measures the quality of stereoscopic image pairs coded at varying levels of compression. Our results suggest that symmetric, as opposed to asymmetric, stereo image compression will produce significantly better results. However, PSNR measures of image quality are widely criticized for correlating poorly with perceived visual quality. We therefore consider computational models of the Human Visual System (HVS) and describe the design and implementation of a new stereoscopic image quality metric. This metric point-matches regions of high spatial frequency between the left and right views of the stereo pair and accounts for HVS sensitivity to contrast and luminance changes at regions of high spatial frequency, using Michelson's Formula and Peli's Band-Limited Contrast Algorithm. To establish a baseline for comparing our new metric with PSNR we ran a trial measuring stereoscopic image encoding quality with human subjects, using the Double Stimulus Continuous Quality Scale (DSCQS) from the ITU-R BT.500-11 recommendation. The results suggest that our new metric is a better predictor of human image quality preference than PSNR and could be used to predict a threshold compression level for stereoscopic image pairs.
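The PSNR baseline used for comparison is straightforward to compute. The sketch below is a generic implementation for illustration; representing images as flat pixel sequences is our simplification, not the paper's code:

```python
import math

def psnr(original, coded, max_val=255.0):
    """Peak Signal-to-Noise Ratio (dB) between two equal-length
    sequences of pixel values. Higher is numerically 'better', but
    as noted above PSNR correlates only loosely with perceived
    visual quality."""
    mse = sum((a - b) ** 2 for a, b in zip(original, coded)) / len(original)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)

# A symmetric coding scheme would apply the same quantisation to both
# views; an asymmetric scheme degrades one view more than the other.
left_quality = psnr([10, 200, 30], [12, 198, 31])
```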
Desktop 3D displays vary in their optical design and this results in a significant variation in the way in which
stereo images are physically displayed on different 3D displays. When precise depth judgements need to be made
these differences may become critical to task performance. Applications where this is a particular issue include
medical imaging, geoscience and scientific visualization.
We investigate perceived depth thresholds for four classes of desktop 3D display: full-resolution, row-interleaved,
column-interleaved and colour-column-interleaved. Given the same input image resolution, we calculate
the physical view resolution for each class of display to geometrically predict its minimum perceived depth
threshold.
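A geometric prediction of this kind can be sketched with the standard similar-triangles model of perceived depth. The viewing distance and eye separation defaults below are illustrative assumptions, not values from the study:

```python
def perceived_depth(disp_pixels, pixel_pitch_m,
                    view_dist_m=0.7, eye_sep_m=0.065):
    """Perceived depth (metres) behind the screen plane for a given
    screen disparity, using the similar-triangles model
    p = z*d / (e - d), where d is the on-screen disparity in metres,
    z the viewing distance and e the eye separation. The defaults are
    illustrative assumptions."""
    d = disp_pixels * pixel_pitch_m
    return view_dist_m * d / (eye_sep_m - d)

# One pixel of disparity on a hypothetical 0.25 mm-pitch display gives
# a minimum geometric depth step of roughly 2.7 mm behind the screen.
step = perceived_depth(1, 0.00025)
```

A row- or column-interleaved display halves the view resolution along one axis, so its smallest addressable disparity step, and hence its geometric minimum depth threshold, is correspondingly larger than that of a full-resolution display.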
To verify our geometric predictions we present the design of a task where viewers are required to judge
which of two neighboring squares lies in front of the other. We report results from a trial using this task where
participants are randomly asked to judge whether they can perceive one of four levels of image disparity (0, 2, 4 and
6 pixels) on seven different desktop 3D displays. The results show a strong effect and the task produces reliable
results that are sensitive to display differences. However, we conclude that depth judgement performance cannot
always be predicted from display geometry alone. Other system factors, including software drivers, electronic
interfaces, and individual participant differences must also be considered when choosing a 3D display to make
critical depth judgements.
This paper describes our experience making a short stereoscopic
movie visualizing the development of structure in the universe
during the 13.7 billion years from the Big Bang to the present day.
Aimed at a general audience for the Royal Society's 2005 Summer
Science Exhibition, the movie illustrates how the latest
cosmological theories based on dark matter and dark energy are
capable of producing structures as complex as spiral galaxies and
allows the viewer to directly compare observations from the real
universe with theoretical results. 3D is an inherent feature of the
cosmology data sets and stereoscopic visualization provides a
natural way to present the images to the viewer, in addition to
allowing researchers to visualize these vast, complex data sets.
The presentation of the movie used passive, linearly polarized
projection onto a 2m wide screen but it was also required to
playback on a Sharp RD3D display and in anaglyph projection at
venues without dedicated stereoscopic display equipment.
Additionally lenticular prints were made from key images in the
movie. We discuss the following technical challenges during the
stereoscopic production process: 1) controlling the depth
presentation, 2) editing the stereoscopic sequences, 3) generating
compressed movies in display-specific formats.
We conclude that the generation of high quality stereoscopic movie content using desktop tools and equipment is feasible. This does require careful quality control and manual intervention but we
believe these overheads are worthwhile when presenting inherently 3D data as the result is significantly increased impact and better understanding of complex 3D scenes.
We believe the need for stereoscopic image generation methods that allow simple, high quality content creation continues to be a key problem limiting the widespread up-take of 3D displays. We present new algorithms for creating real time stereoscopic images that provide increased control to content creators over the mapping of depth from scene to displayed image. Previously we described a Three Region, variable depth mapping, algorithm for stereoscopic image generation. This allows different regions within a scene to be represented by different ranges of perceived depth in the final image. An unresolved issue was that this approach can create a visible discontinuity for smooth objects crossing region boundaries. In this paper we describe two new Multi-Region algorithms to address this problem: boundary smoothing using additional sub-regions and scaling scene geometry to smoothly vary depth mapping. We present real time implementations of the Three-Region and the new Multi-Region algorithms for OpenGL to demonstrate the visual appearance of the results. We discuss the applicability and performance of each approach for rendering real time stereoscopic images and propose a simple modification to the standard graphics pipeline to better support these algorithms.
The usable perceived depth range of a stereoscopic 3D display is limited by human factors considerations to a defined range around the screen plane. There is therefore a need in stereoscopic image creation to map depth from the scene to a target display without exceeding these limits. Recent image capture methods provide precise control over this depth mapping but map a single range of scene depth as a whole and are unable to give preferential stereoscopic representation to a particular region of interest in the scene. A new approach to stereoscopic image creation is described that allows a defined region of interest in scene depth to have an improved perceived depth representation compared to other regions of the scene. For example in a game this may be the region of depth around a game character, or in a scientific visualization the region around a particular feature of interest. To realize this approach we present a novel algorithm for stereoscopic image capture and describe an implementation for the widely used ray-tracing package POV-Ray. Results demonstrate how this approach provides content creators with improved control over perceived depth representation in stereoscopic images.
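Variable depth mapping of this general kind can be sketched as a piecewise-linear function that gives a region of interest a larger share of the display's disparity budget. This is an illustration of the idea only, not the Three-Region or region-of-interest algorithm described above; all names and numeric values are ours:

```python
def roi_depth_map(z, near, roi_near, roi_far, far,
                  disp_min=-20.0, disp_max=20.0, roi_share=0.6):
    """Piecewise-linear scene-to-display depth mapping that favours a
    region of interest.

    Depths in [roi_near, roi_far] receive roi_share of the total
    disparity range; the near and far regions split the remainder.
    """
    total = disp_max - disp_min
    side = total * (1.0 - roi_share) / 2.0   # budget per outer region
    if z < roi_near:                         # near region: shallow slope
        t = (z - near) / (roi_near - near)
        return disp_min + t * side
    elif z <= roi_far:                       # ROI: steeper slope
        t = (z - roi_near) / (roi_far - roi_near)
        return disp_min + side + t * total * roi_share
    else:                                    # far region: shallow slope
        t = (z - roi_far) / (far - roi_far)
        return disp_min + side + total * roi_share + t * side

# The ROI (5..10 m) spans 60% of the disparity budget even though it is
# a small slice of the 1..50 m scene depth range.
```

A visible discontinuity can arise where an object crosses a region boundary, since the slope of the mapping changes there; smoothing those boundaries is exactly the problem the Multi-Region algorithms address.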
Stereoscopic images are hard to get right, and comfortable images are often only produced after repeated trial and error. The main difficulty is controlling the stereoscopic camera parameters so that the viewer does not experience eye strain or double images from excessive perceived depth. Additionally, for head-tracked displays, the perceived objects can distort as the viewer moves to look around the displayed scene. We describe a novel method for calculating stereoscopic camera parameters with the following contributions: (1) it provides the user with intuitive controls related to easily measured physical values; (2) for head-tracked displays, it ensures that there is no depth distortion as the viewer moves; (3) it clearly separates the image-capture camera/scene space from the image-viewing viewer/display space; (4) it provides a transformation between these two spaces, allowing precise control of the mapping of scene depth to perceived display depth. The new method is implemented as an API extension for use with OpenGL, a plug-in for 3D Studio Max and a control system for a stereoscopic digital camera. The result is stereoscopic images generated correctly at the first attempt, with precisely controlled perceived depth. A new analysis of the distortions introduced by different camera parameters was also undertaken.
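The scene-to-display depth transformation at the heart of such a method can be illustrated by inverting the standard geometric model of perceived depth. The function and default values below are our own sketch, not the paper's API:

```python
def disparity_for_depth(p, view_dist=0.7, eye_sep=0.065):
    """Screen disparity (metres) that yields perceived depth p behind
    the screen plane (negative p = in front of the screen).

    Inverts the similar-triangles model p = z*d / (e - d), giving
    d = e*p / (z + p). The default viewing distance and eye
    separation are illustrative assumptions.
    """
    return eye_sep * p / (view_dist + p)

# Disparity budget for a perceived depth range from 5 cm in front of
# the screen to 6 cm behind it:
d_near = disparity_for_depth(-0.05)  # crossed (negative) disparity
d_far = disparity_for_depth(0.06)    # uncrossed (positive) disparity
```

Given such a disparity budget, camera separation and frustum offsets can be chosen so that the chosen scene depth range lands exactly inside it, which is the kind of precise scene-to-display mapping the method above provides.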
This paper presents an examination of the requirements for observer tracking autostereoscopic 3D display systems. The optical requirements for the imaging of autostereoscopic viewing windows in order to maintain high image quality over a large range of observer positions are described. A number of novel displays based on LCD (liquid crystal display) technology have been developed and demonstrated at Sharp Laboratories of Europe Ltd (SLE). This includes an electronically switchable illuminator for the macro-optic twin-LCD display; and a compact micro-optic twin-LCD display which maintains image quality while extending display size and viewing freedom. Work has also been in progress with flat panel displays to improve window quality using a new arrangement of LCD pixels. This has led to a new means to track such a display with no moving parts.
This paper presents a new autostereoscopic display system based on conventional Thin Film Transistor Liquid Crystal Display technology giving bright, high quality, full color and high resolution 3D images over a wide viewing range without special glasses. In addition, 3D image look-around and multiple viewers are possible. Methods of obtaining improved image quality are described as well as interfacing with conventional video and computer image generation sources. The system is suitable for a number of professional and domestic 3D applications.