This PDF file contains the front matter associated with SPIE proceedings volume 6558, including the Title Page, Copyright information, Table of Contents, Introduction (if any), and the Conference Committee listing.
Displays in the operational environment can be direct-view or virtual-view, and are analyzed in terms of a broad range of
performance parameters. These parameters include image area, field of view, eye-relief, weight and power, luminance and
contrast ratio, night vision goggle compatibility (type and class), resolution (pixels per inch or line pairs per milliradian),
image intensification, viewing angle, grayscale (shades or levels), dimming range, video capability (frame rate, refresh),
operating and storage altitude, operating and storage temperature range, shock and vibration limits, mean time between
failure, color vs. monochrome, and display engine technology.
This study further examines design class (custom, rugged commercial, or commercial off-the-shelf) and issues such as whether the design meets requirements for the operational environment and modes of use, ease of handling, failure modes, and soldier-recommended upgrades.
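Two of the resolution metrics listed above (pixels per inch and line pairs per milliradian) are interconvertible through panel geometry and viewing distance. A minimal sketch; the panel size, aspect ratio, and viewing distance below are hypothetical illustration values, not figures from the survey:

```python
import math

def pixels_per_inch(h_pixels, diagonal_in, aspect=(4, 3)):
    """PPI from horizontal pixel count, diagonal size, and aspect ratio."""
    aw, ah = aspect
    width_in = diagonal_in * aw / math.hypot(aw, ah)
    return h_pixels / width_in

def line_pairs_per_mrad(ppi, viewing_distance_in):
    """Angular resolution: one line pair spans two pixels."""
    pixel_subtense_mrad = (1.0 / ppi) / viewing_distance_in * 1000.0
    return 1.0 / (2.0 * pixel_subtense_mrad)

ppi = pixels_per_inch(1024, 10.4)    # hypothetical 10.4-in XGA-class panel
lp = line_pairs_per_mrad(ppi, 24.0)  # viewed from 24 in
```

The angular metric matters for head-mounted and cockpit displays, where the eye's resolving limit, not the panel pitch, sets the useful ceiling.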
ARINC 818 Avionics Digital Video Bus (ADVB) is a new digital video interface and protocol standard developed
especially for high bandwidth uncompressed digital video. The first draft of this standard, released in January of
2007, has been advanced by ARINC and the aerospace community to meet the acute needs of commercial aviation
for higher performance digital video. This paper analyzes ARINC 818 for use in military display systems found in
avionics, helicopters, and ground vehicles. The flexibility of ARINC 818 for the diverse resolutions, grayscales,
pixel formats, and frame rates of military displays is analyzed as well as the suitability of ARINC 818 to support
requirements for military video systems including bandwidth, latency, and reliability. Implementation issues
relevant to military displays are presented.
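The bandwidth question can be made concrete with a first-order check of an uncompressed format against a Fibre Channel line rate (ARINC 818 is derived from Fibre Channel and inherits its 8b/10b line coding). The sketch below ignores ARINC 818 container headers and inter-frame overhead, so the 0.8 efficiency factor is only a rough bound:

```python
def video_payload_gbps(width, height, bits_per_pixel, fps):
    """Raw uncompressed pixel-data rate in Gbit/s (no blanking/headers)."""
    return width * height * bits_per_pixel * fps / 1e9

def link_fits(payload_gbps, line_rate_gbps, efficiency=0.8):
    """First-order check against a Fibre Channel line rate.

    The 0.8 factor models 8b/10b line coding only; protocol
    overhead would reduce usable throughput further."""
    return payload_gbps <= line_rate_gbps * efficiency

sxga = video_payload_gbps(1280, 1024, 24, 60)  # 24-bit SXGA at 60 Hz
# A 2.125 Gbit/s FC link is too slow for this format; 4.25 Gbit/s fits.
```

This kind of back-of-the-envelope sizing is what drives the choice of link rate for a given military display format.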
In an effort to reduce the effects of ambient light on the readability of military displays, the Naval Research Lab began investigating and developing advanced hand-held displays. Analysis and research of display technologies, with consideration for vulnerability to environmental conditions, resulted in the complete design and fabrication of the handheld Immersive Input Display Device (I2D2) monocular. The I2D2 combines an OLED SVGA microdisplay with an optics configuration and a rubber pressure eyecup that allows viewing only when the eyecup is depressed. This feature allows the I2D2 to be used during the day without ambient light degrading readability. It simultaneously controls light leakage, effectively eliminating the user's illumination signature and thus preserving the user's tactical position in the dark. This paper focuses on the upgraded I2D2 system as it compares to the I2D2 presented at SPIE 2006.
Soldiers involved in urban operations are at a higher risk of receiving a bullet or fragment wound to the head or face
compared to other parts of their body. One reason for this vulnerability is the need for the soldier to expose their head
when looking and shooting from behind cover. Research conducted by DSTO Australia, using weapon-mounted
cameras, has validated the concept of off-axis shooting but has emphasized the requirement for a system that closely
integrates with both the soldier and his weapon. A system was required that would not adversely affect the usability,
utility or accuracy of the weapon. Several Concept Demonstrators were developed over a two-year period and the result
of this development is the Off-Axis Viewing Device (OAVD). The OAVD is an un-powered sighting attachment that
integrates with a red dot reflex sight and enables the soldier to scan for and engage targets from a position of cover. The
image from the weapon's scope is transmitted through the OAVD's periscopic mirror system to the soldier. Mounted
directly behind the sight, the OAVD can also be swiveled to a redundant position on the side of the weapon to allow
normal on-axis use of the sight. The OAVD can be rotated back into place behind the sight with one hand, or removed
and stored in the soldier's webbing. In May 2004, a rapid acquisition program was initiated to develop the concept to an
in-service capability and the OAVD is currently being deployed with the Australian Defence Force.
Avionics displays, particularly for cockpit applications, are associated with high-performance, high-cost solutions. COTS displays have well-acknowledged limitations but provide a potentially high value-for-money solution if their performance can be stretched to a level compatible with "fit for use". This paper describes the initial design tradeoffs and decisions that formed the basis for development of a low-cost cockpit display for a military helicopter.
Many avionics displays, particularly for cockpit applications, require NVG compatibility. Unusually, the
mission definition for a new maritime helicopter has identified a need for NVG compatibility for all of the
mission-system displays, including the 20.1" diagonal, SXGA resolution Tactical Workstation Display (TWD)
located in the rear cabin. This paper will describe some design tradeoff considerations and describe both
some required and measured performance parameters.
It is critical in surveillance applications to be able to extract features in imagery that may be of interest to the viewer at
any time of the day or night. Infrared (IR) imagery is ideally suited for producing these types of images. However, even
this imagery is not always optimal. Processing the imagery with a local area image operator can enhance additional
features and characteristics in the image that provide the viewer with an improved understanding of the scene being
observed. This paper discusses the development of two algorithms for image enhancement for infrared imagery using
local area processing. The enhancement algorithm extends theory previously developed for medical applications.
Algorithm differences addressed include application to IR imagery and to a panning camera rather than still imagery. It
also discusses the obstacles encountered and overcome for insertion of this algorithm into a 10" gimbaled midwave
infrared imaging system for a variety of real-time processing applications. This technology is directly applicable to
driver's vision enhancement systems as well as other night vision systems such as night vision goggles.
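The paper's algorithm extends a medical-imaging method that is not reproduced here, but a generic operator from the same local-area family can be sketched as local mean/standard-deviation normalization. The window size and gain clamp below are illustrative choices, not the paper's parameters:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_area_enhance(img, window=31, gain_limit=4.0):
    """Local mean/std normalization: amplify detail relative to its
    neighborhood, with a gain clamp so flat regions don't amplify
    noise. (Illustrative operator, not the paper's exact algorithm.)"""
    img = img.astype(np.float64)
    local_mean = uniform_filter(img, window)
    local_sq = uniform_filter(img ** 2, window)
    local_std = np.sqrt(np.maximum(local_sq - local_mean ** 2, 1e-12))
    gain = np.minimum(img.std() / local_std, gain_limit)
    return local_mean + gain * (img - local_mean)
```

Because the gain is computed per neighborhood, a dim feature sitting on a strong thermal gradient is boosted even when global histogram stretching would leave it invisible, which is the property that matters for IR surveillance imagery.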
A design flow for implementing a dynamic gamma algorithm in an FPGA is described. Real-time video
processing makes enormous demands on processing resources. An FPGA solution offers some advantages
over commercial video chip and DSP implementation alternatives. The traditional approach to FPGA
development involves a system engineer designing, modeling and verifying an algorithm and writing a
specification. A hardware engineer uses the specification as a basis for coding in VHDL and testing the
algorithm in the FPGA with supporting electronics. This process is work-intensive, and verification of the image processing algorithm executing on the FPGA does not occur until late in the program.
The described design process allows the system engineer to design and verify a true VHDL version of the
algorithm, executing in an FPGA. This process yields reduced risk and development time. The process is
achieved by using Xilinx System Generator in conjunction with Simulink® from The MathWorks. System
Generator is a tool that bridges the gap between the high level modeling environment and the digital world of
the FPGA. System Generator is used to develop the dynamic gamma algorithm for the contrast
enhancement of a candidate display product. The result of this effort is an increase in the dynamic range of the displayed video, yielding a more useful image for the user.
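In software terms, a dynamic gamma of the kind described can be modeled as choosing a gamma from frame statistics and baking it into a lookup table, which is how an FPGA pipeline would typically apply it per pixel. This floating-point sketch is an illustrative model only; the paper's actual VHDL algorithm and its parameters are not reproduced here:

```python
import numpy as np

def dynamic_gamma_lut(frame, bits=8):
    """Choose gamma so the frame's mean gray level maps toward
    mid-scale, then bake it into the per-pixel lookup table an
    FPGA pipeline would apply. Illustrative model only."""
    levels = 2 ** bits
    mean = float(frame.mean()) / (levels - 1)
    mean = min(max(mean, 1e-3), 0.999)           # keep log() finite
    gamma = np.clip(np.log(0.5) / np.log(mean), 0.5, 2.0)
    x = np.arange(levels) / (levels - 1)
    return np.round((x ** gamma) * (levels - 1)).astype(np.uint16)

def apply_lut(frame, lut):
    """The per-pixel hardware step is just a table lookup."""
    return lut[frame]
```

Dark frames get gamma < 1 (mid-tones lifted), bright frames gamma > 1 (mid-tones pulled down); only the LUT rebuild, once per frame, involves any arithmetic beyond a memory read.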
Through the trade-off of temporal information, a significant increase in spatial resolution is obtainable. This improvement is quantifiable by comparing Airy's disc against the camera sensor pitch; Airy's disc analysis is used to quantify the improvement in image resolvability and, ultimately, system range. It is this comparison that sets the groundwork for realistic expectations. Our SR system is a natural tracker of moving vehicles, with the added benefit of improved target resolvability. Super resolution can capitalize on camera platform instability: a by-product of SR is imagery digitally stabilized to a fraction of a pixel. Investigation into sub-pixel remapping has led to the development of improved super-resolved images, and another approach has led to a window-management scheme for further improvement. The cleaner the composite SR image is, from a noise and structural point of view, the more amenable it is to high sharpening. Mapping into a transform space greatly reduces the correlation complexity, which makes it easier to realize the complete algorithm in hardware. We have implemented this system in a real-time architecture; the hardware configuration is composed of an FPGA and a supporting processor.
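The transform-space correlation step can be illustrated with classic phase correlation, one standard way (not necessarily the authors' exact method) to register frames before super-resolution compositing. This sketch recovers integer shifts only; an SR system would additionally interpolate around the correlation peak for the sub-pixel estimates described above:

```python
import numpy as np

def phase_correlation_shift(ref, moved):
    """Estimate the (row, col) translation of `moved` relative to
    `ref` in the Fourier domain: one FFT pair replaces an explicit
    search over every spatial offset."""
    cross = np.conj(np.fft.fft2(ref)) * np.fft.fft2(moved)
    cross /= np.maximum(np.abs(cross), 1e-12)       # keep phase only
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap peak coordinates to signed shifts.
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))
```

Because the heavy work is two FFTs and an element-wise product, the structure maps naturally onto an FPGA plus a supporting processor for peak finding.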
Progress in the performance of Spatial Light Modulators (SLM), Graphical Processing Units (GPU), and off-the-shelf high-speed data buses has led to advances in the design of multiscopic 3D displays based on temporal multiplexing.
Having developed a proof of concept prototype capable of displaying four independent viewing zones, we report on
progress in the development of an improved system incorporating 8-12 viewing zones and a large format display. The
designs under development employ a high speed LCD shutter operating synchronously with a high speed Deformable
Mirror Device (DMD) based projector that forms multiple viewing zones via persistence of vision. Progress in the
development of the optical design and corresponding hardware and software will be reported on.
With the introduction of the night-vision goggle (NVG) into vehicle cockpits, the transfer of visual information to the
observer became more complex. This problem stems primarily from the fact that the image intensifier tube photocathode is sensitive to much of the visible spectrum. NVGs are capable of sensing and amplifying visible cockpit light, making observation of the scene outside the cockpit, the primary use for NVGs, difficult if not impossible. One solution was to establish mutually exclusive spectral bands: a band of shorter wavelengths reserved for
transmission of visible information from the cockpit instrumentation to the observer and a longer wavelength region left
to the night vision goggle for imaging the night environment. Several documents have been published outlining the
night vision imaging system (NVIS) compatible lighting performance enabling this approach, seen as necessary for
military and civilian aviation. Recent advances in short wave infrared (SWIR) sensor technology make it a possible
alternative to the image intensifiers for night imaging application. However, application-specific integration issues
surrounding the new sensor type must still be thoroughly investigated. This paper examines the impact of the SWIR
spectral sensitivity on several categories of lighting found in vehicle cockpits and explores cockpit integration issues
that may arise from the SWIR spectral sensitivity.
Active matrix organic light emitting diode (AMOLED) technology is one candidate to become a low-power alternative, in some applications, to the currently dominant active matrix liquid crystal display (AMLCD) technology.
Furthermore, fabrication of the AMOLED on stainless steel (SS) foil rather than the traditional glass substrate, while
presenting a set of severe technical challenges, opens up the potential for displays that are both lighter and less
breakable. Also, transition to an SS foil substrate may enable rollable displays: large when used but small for stowage within gear already worn, carried, or installed. Research has been initiated on AMOLED/SS technology, and the first
320 x 240 color pixel 4-in. demonstration device has been evaluated in the AFRL Display Test and Evaluation
Laboratory. Results of this evaluation are reported along with a research roadmap.
A characterization was performed on a monochrome, low-information content polymeric light emitting diode (PLED)
display to determine the effects of ruggedization for military display applications. A summary of the environmental,
mechanical, and optical characterization results shows that a unique direct bonding method and night vision imaging
system (NVIS) filter material can be used to ruggedize commercial-off-the-shelf (COTS) PLED displays to operate in
demanding military environments. Significant enhancements to a COTS PLED device are discussed in terms of impact
resistance, enhanced sunlight readability, and compatibility with night vision operations.
Volumetric displays allow users to freely view three-dimensional (3D) imagery without special eyewear. However, due to low display resolution, many colors appear distorted compared to their representation on a flat-panel display. In
addition, due to the unique nature of the display, some shapes, objects, and orientations can also appear distorted. This
study examines the perceptual range of virtual objects in a Perspecta 3D volumetric display to determine which
combination of object type, size, and color produces the best 3D image. Participants viewed combinations composed of
three object types (vertical square plane, empty cube, filled cube) x three sizes (small, medium, large) x seven colors
(aqua, blue, green, purple, red, white, yellow). They named the color of the object and then rated the uniformity of the
color, the quality of the shape, the amount of visual flicker, and the solidity of the object. All dependent measures except
the rating of solidity exhibited various main and interaction effects among object type, size, and color.
There has been much research on many different aspects of image quality for 2D displays. These range from objective
type metrics (e.g., luminance contrast, saturation contrast, resolution, etc.) to more subjective metrics (e.g., "Rate the quality of the display from 1 - 5"), to metrics in between (subjective-objective) in which observers are asked to perform
a task and their performance determines the "goodness" of the display. We would like to start identifying these similar
types of metrics for 3D displays. In this case many of the traditional metrics do not work. We first discuss some of the
more objective metrics including system specifications and measurable data. Secondly, we discuss both subjective (e.g.,
rating measures) and subjective-objective (e.g., experimental task) metrics that have been used in the past, and how well
they may work for our situation. We also discuss developing new metrics of these types. We finally discuss what we feel
is the way forward in the hopes of generating discussion for future research to help display manufacturers in their
endeavors for designing new and innovative 3D displays.
Display Metrics and Human Factors and Data Security
The visual images of the natural world, with their immediate intuitive appeal, seem like the logical gold standard for
evaluating displays. After all, since photorealistic displays look increasingly like the real world, what could be
better? Part of the shortcoming of this intuitive appeal for displays is its naivete. Realism itself is full of potential
illusions that we do not notice because, most of the time, realism is good enough for our everyday tasks. But when
confronted with tasks that go beyond those for which our visual system has evolved, we may be blindsided. If we
survive, blind to our erroneous perceptions and oblivious to our good fortune at having survived, we will not be any
wiser next time.
Realist displays depend on linear perspective (LP), the mathematical mapping of three dimensions onto two.
Despite the fact that LP is a seductively elegant system that predicts results with defined mathematical procedures,
artists do not stick to the procedures, not because they are math-phobic but because LP procedures, if followed
explicitly, produce ugly, limited, and distorted images. If artists bother with formal LP procedures at all, they
invariably temper the renderings by eye.
The present paper discusses LP assumptions, limitations, and distortions. It provides examples of kluges to cover
some of these LP shortcomings. It is important to consider the limitations of LP so that we do not let either naive
assumptions or the seductive power of LP guide our thinking or expectations unrealistically as we consider its
possible uses in advanced visual displays.
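The LP mapping itself is compact; its problems come from the assumptions around it (a single fixed eye point, a planar picture surface, a narrow field of view), not from the arithmetic. A minimal pinhole projection, for concreteness:

```python
def project(point, focal_length=1.0):
    """Linear perspective: map a 3-D point onto the picture plane.

    Assumes the eye at the origin looking down +z. The mapping fails
    for points at or behind the eye (z <= 0) and distorts increasingly
    toward wide fields of view, which is where artists start tempering
    the result by eye."""
    x, y, z = point
    if z <= 0:
        raise ValueError("point is behind the picture plane")
    return (focal_length * x / z, focal_length * y / z)
```

Every straight-line property LP preserves, it buys by fixing the viewpoint; move the viewer off that point and the "correct" image becomes one of the distortions the paper describes.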
In this paper, the foundations of radiometry and photometry, based on the Second Principle of Thermodynamics, are discussed in terms of the brightness (luminance) and etendue (Lagrange invariant) limitations of integrated lighting systems. In this context, brightness is defined as phase-space density, and other radiometric/photometric quantities such as emittance, exitance, irradiance/illuminance, power/flux, and radiant/luminous intensity are also discussed, including examples of integrated lighting systems. Also, technological progress at Luminit is reviewed, including 3D microreplication of new non-diffuser microscopic structures by roll-to-roll web technology.
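The etendue constraint behind these brightness limits can be made concrete numerically. For a circular cone, etendue is G = n^2 * A * pi * sin^2(theta); since G is conserved in a lossless system, no passive optic can raise the luminance Phi/G. The emitter area and angle below are made-up illustration values:

```python
import math

def etendue(area_mm2, half_angle_deg, n=1.0):
    """Etendue G = n^2 * A * pi * sin^2(theta) for a circular cone."""
    theta = math.radians(half_angle_deg)
    return n ** 2 * area_mm2 * math.pi * math.sin(theta) ** 2

# 1 mm^2 emitter radiating into a +/-60 deg cone (illustration only):
G_src = etendue(1.0, 60.0)
# A lossless system expanding the beam to 4 mm^2 cannot narrow the
# half-angle below the value that conserves G:
theta_out = math.degrees(math.asin(math.sqrt(G_src / (math.pi * 4.0))))
```

The trade is explicit: quadrupling the beam area buys roughly a halving of the sine of the cone angle, and nothing more.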
To facilitate decision making tasks it is necessary to be able to "see" the situation. An enormous array of intelligence
gathering, database, and sensor sources of information are available. Methods for visualizing the information must be
established and information presented in such a way that human attention is captured and maintained on the most critical
aspects of the information. Visualizations need to adapt to the changing circumstances to show the most relevant
information at that time. We are developing a system called Holistic Analysis, Visualization, & Characterization
Assessment Tool (HAVCAT) that uses intelligent agents that interact with the user to provide the correct information at
the right time. This cutting-edge system will enable visualization researchers to investigate techniques for adjusting visualizations based on user performance. HAVCAT will employ domain ontologies to determine relationships within
the data. The HAVCAT evidence reasoning agent distills the data and extracts the most pertinent actions or
consequences. This paper describes the HAVCAT concepts and also research issues related to development of
HAVCAT and techniques for directing user attention.
This work presents a novel method for producing optical decryption keys by screen printing technology. The key is mainly used to decrypt encoded information hidden inside documents containing Moire patterns and integral photographic 3D auto-stereoscopic images as a second-line security feature. The proposed method can also be applied as an anti-counterfeiting measure in artistic screening. Decryption is performed by matching the correct angle between the decoding key and the document bearing a text or a simple geometric pattern. This study presents the theoretical analysis and experimental results of decryption key production using the best parameter combination of Moire pattern size and screen printing elements. Experimental results reveal that the proposed method can be applied in anti-counterfeit document design for the fast and low-cost production of decryption keys.
In this paper, we propose a novel performance-enhanced computational integral imaging reconstruction (CIIR) system that uses an elemental image array (EIA) obtained with a simultaneous pickup scheme for three-dimensional (3-D) objects located far from the lenslet array, in both the real and virtual image fields. In the proposed system, an imaging lens is introduced between the lenslet array and the 3-D objects to overcome the limitation of pickup range; the EIA formed through this additional imaging lens is recorded with an image sensor, and the captured EIA is reconstructed by the CIIR technique. The additional imaging lens produces an image-shift effect for 3-D objects located far from the lenslet array. To show the usefulness of the proposed system, experiments are carried out on real 3-D objects and their results are presented.
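The reconstruction half of CIIR can be sketched in a simplified form: each elemental image is back-projected with a shift proportional to its lenslet index, and overlapping contributions are averaged to synthesize one depth plane. This toy version ignores magnification and the paper's imaging-lens geometry; the grid sizes and shift value are purely illustrative:

```python
import numpy as np

def ciir_plane(eia, shift):
    """Toy computational reconstruction of one depth plane.

    eia: (ny, nx, h, w) array of elemental images. Each elemental
    image is back-projected displaced `shift` pixels per lenslet
    step; overlapping contributions are averaged. Magnification and
    imaging-lens geometry are ignored in this sketch."""
    ny, nx, h, w = eia.shape
    out_h = h + shift * (ny - 1)
    out_w = w + shift * (nx - 1)
    acc = np.zeros((out_h, out_w))
    cnt = np.zeros((out_h, out_w))
    for j in range(ny):
        for i in range(nx):
            y, x = j * shift, i * shift
            acc[y:y + h, x:x + w] += eia[j, i]
            cnt[y:y + h, x:x + w] += 1.0
    return acc / np.maximum(cnt, 1.0)
```

Sweeping `shift` sweeps the reconstructed depth: points whose parallax matches the shift add coherently and come into focus, while everything else averages into blur.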