Visual fatigue (asthenopia) continues to be a problem in extended viewing of stereoscopic imagery. Poorly converged imagery may contribute to this problem. In 2013, the Author reported that, in a study sample, a surprisingly high number of 3D feature films released as stereoscopic Blu-rays contained obvious convergence errors.1 The placement of stereoscopic image convergence can be an “artistic” call, but upon close examination, the sampled films appeared to have simply missed their intended convergence location. This failure may occur because some stereoscopic editing tools lack the fidelity to enable a 3D editor to obtain a high degree of image alignment or to set an exact point of convergence. Compounding the matter, many stereoscopic editors may not believe that pixel-accurate alignment and convergence are necessary. The Author asserts that setting a pixel-accurate point of convergence on an object at the start of any given stereoscopic scene will improve the viewer’s ability to fuse the left and right images quickly. The premise is that stereoscopic performance (acuity) increases when an accurately converged object is available in the image for the viewer to fuse immediately. Furthermore, this increased stereoscopic performance should reduce the visual fatigue associated with longer-term viewing, because less mental effort is required to perceive the imagery. To test this concept, we developed special stereoscopic imagery to measure viewer visual performance with and without specific objects for convergence. The Company Team conducted a series of visual tests with 24 participants between 25 and 60 years of age. This paper reports the results of these tests.
In 2013, the Authors reported to the SPIE on the Phase 1 development of a Parallax Visualization (PV) plug-in toolset for Wide Area Motion Imaging (WAMI) data using the Pursuer Graphical User Interface (GUI).1 In addition to parallax visualization of WAMI data, the Phase 1 plug-in toolset also featured a limited ability to visualize Full Motion Video (FMV) data. The ability to visualize both WAMI and FMV data is a highly advantageous capability for an Electric Light Table (ELT) toolset. This paper reports on the Phase 2 development and addition of a full-featured FMV capability to the Pursuer WAMI PV plug-in.
Effective use of intelligence, surveillance, and reconnaissance (ISR) data gathered by unmanned aerial vehicle (UAV) missions is vital to US Military operations. The 2006 Geospatial Intelligence (GEOINT) Basic Doctrine includes the following statements:
(a) A primary purpose of geospatial products has always been to provide visualization of operational spaces and activity patterns of all sizes and scales, ranging from global and regional level to cities and even individual buildings. (b) A picture is simply the fastest way to communicate spatial information to a customer.
Parallax Visualization (PV) technologies have been introduced, which: (1) use existing UAV sensor data, (2) provide critical alignment software tools, and (3) produce autostereoscopic (automatic depth perception) ISR work products. PV work products can be distributed across military networks and viewed on standard unaided displays. Previous evaluations have established that PV of ISR full motion video (FMV) data presents three-dimensional information in an obvious and immediate manner, thus literally adding a new dimension to the basic picture goal as set out by the GEOINT doctrine.
An important aspect of the quality of the three-dimensional perception produced by any stereoscopic production is the precision of its left/right image registration. Tests show that pixel-precise stereoscopic image registration and convergence improve viewers’ three-dimensional perception by reducing viewing discomfort.1 Current software tools generally rely on techniques like 50/50 (onionskin) blending for camera-system and left/right post-production image alignment. Recent stereoscopic software toolset beta tests indicate that these alignment techniques make pixel-accurate registration difficult to achieve. A review of 70 stereo pairs sampled from seven stereoscopic feature films released from 2009 to 2012 revealed that the majority lacked a precise point of convergence. Extended viewing of inaccurately converged stereoscopic imagery leads to viewer fatigue and discomfort.
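The text does not specify how pixel-precise registration is computed. One standard approach is FFT-based phase correlation, which recovers an integer-pixel translation between two views; the sketch below (NumPy; the function name and all details are illustrative, not the Company's toolset) shows the idea under the assumption of a pure translational offset:

```python
import numpy as np

def phase_correlate(left, right):
    """Return the (dy, dx) shift that, applied to `right` with np.roll,
    registers it pixel-for-pixel onto `left` (pure-translation assumption)."""
    F_left = np.fft.fft2(left)
    F_right = np.fft.fft2(right)
    # Normalized cross-power spectrum: its inverse FFT peaks at the shift.
    cross = F_left * np.conj(F_right)
    cross /= np.abs(cross) + 1e-12
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map wrap-around peak indices to signed shifts.
    if dy > left.shape[0] // 2:
        dy -= left.shape[0]
    if dx > left.shape[1] // 2:
        dx -= left.shape[1]
    return dy, dx
```

Real stereo pairs also involve rotation, scale, and vertical disparity, so a production tool would need more than this; but a correlation peak of this kind is one way to verify that a chosen convergence point is aligned to within a single pixel.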
The US Military is increasingly relying on the use of unmanned aerial vehicles (UAV) for intelligence, surveillance, and
reconnaissance (ISR) missions. Complex arrays of Full-Motion Video (FMV), Wide-Area Motion Imaging (WAMI)
and Wide Area Airborne Surveillance (WAAS) technologies are being deployed on UAV platforms for ISR applications.
Nevertheless, these systems are only as effective as the Image Analyst's (IA) ability to extract relevant information from them.
A variety of tools assist in the analysis of imagery captured with UAV sensors. However, until now, none has been
developed to extract and visualize parallax three-dimensional information.
Parallax Visualization (PV) is a technique that produces a near-three-dimensional visual response to standard UAV
imagery. The overlapping nature of UAV imagery lends itself to parallax visualization. Parallax differences can be
obtained by selecting frames that differ in time and, therefore, points of view of the area of interest.
PV is accomplished using software tools to critically align a common point in two views while alternately displaying
both views in a square-wave manner. Humans produce an autostereoscopic response to critically aligned parallax
information presented alternately on a standard unaided display at frequencies between 3 and 6 Hz.
This simple technique allows for the exploitation of spatial and temporal differences in image sequences to enhance
depth, size, and spatial relationships of objects in areas of interest. PV of UAV imagery has been successfully
performed in several US Military exercises over the last two years.
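The square-wave presentation described above can be sketched as a display schedule: at a given display refresh rate, each refresh shows one of the two critically aligned views, toggled at the chosen 3–6 Hz rate. A minimal illustration (function and parameter names are assumptions, not part of the PV toolset):

```python
def alternation_schedule(refresh_hz, toggle_hz, n_refreshes):
    """For each display refresh, return which view (0 or 1) to present so
    the two views alternate as a square wave at toggle_hz."""
    views = []
    for i in range(n_refreshes):
        # Number of completed half-periods of the square wave at this refresh.
        half_cycles = (i * toggle_hz * 2) // refresh_hz
        views.append(int(half_cycles) % 2)
    return views
```

On a 60 Hz display toggled at 4.3 Hz, for example, each view would be held for roughly seven consecutive refreshes per half-cycle.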
Under certain circumstances, conventional stereoscopic imagery is subject to being misinterpreted. Stereo
perception created from two static horizontally separated views can create a "cut out" 2D appearance for objects
at various planes of depth. The subject volume looks three-dimensional, but the objects themselves appear flat.
This is especially true if the images are captured using small disparities.
One potential explanation for this effect is that, although three-dimensional perception comes primarily from
binocular vision, a human's gaze (the direction and orientation of a person's eyes with respect to their
environment) and head motion also contribute additional sub-process information. The absence of this
information may be the reason that certain stereoscopic imagery appears "odd" and unrealistic. Another
contributing factor may be the absence of vertical disparity information in a traditional stereoscopy display.
Recently, Parallax Scanning technologies have been introduced, which (1) provide a scanning methodology,
(2) incorporate vertical disparity, and (3) produce stereo images with substantially smaller disparities than the
human interocular distance.1 To test whether these three features would improve the realism and reduce the
cardboard cutout effect of stereo images, we have applied Parallax Scanning (PS) technologies to commercial
stereoscopic digital cinema productions and have tested the results with a panel of stereo experts.
These informal experiments show that the addition of PS information into the left and right image capture
improves the overall perception of three-dimensionality for most viewers. Parallax scanning significantly
increases the set of tools available for 3D storytelling while at the same time presenting imagery that is easy and
pleasant to view.
Vision III Imaging, Inc. (the Company) has developed Parallax Image Display (PID™) software tools to critically
align and display aerial images with parallax differences. Terrain features are rendered obvious to the viewer when
critically aligned images are presented alternately at 4.3 Hz. The recent inclusion of digital elevation models in
geographic data browsers now allows true three-dimensional parallax to be acquired from virtual globe programs like
Google Earth. The authors have successfully developed PID methods and code that allow three-dimensional
geographical terrain data to be visualized using temporal parallax differences.
The Vision III™ method of parallax scanning has been successfully achieved using a moving optical element (MOE) in a single lens. Unlike the lenses in our previous custom camera systems, the MOE lenses do not move. Instead, an optical element inside the lens scans a scene in a complete circle while the lens position remains fixed. V3™ MOE lenses have been effectively applied to 35 mm motion picture and broadcast video imaging. Images shot with a MOE lens provide a strong sense of dimension, realism, and stability. They can be displayed using standard motion picture projection or broadcast television equipment without the need for special screens or glasses.
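The circular scan path of the optical element can be illustrated as a viewpoint offset traced over time. The sketch below is a geometric illustration only; the radius and scan rate are hypothetical parameters, not the Company's MOE lens specifications:

```python
import math

def scan_offset(t, scan_hz, radius):
    """Viewpoint offset (x, y) at time t for a circular parallax scan:
    the optical element completes one full circle every 1/scan_hz seconds."""
    theta = 2.0 * math.pi * scan_hz * t
    return radius * math.cos(theta), radius * math.sin(theta)
```

Because the offset sweeps through vertical as well as horizontal positions, imagery captured this way carries the vertical disparity component discussed earlier, unlike a fixed horizontally separated stereo pair.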