This PDF file contains the front matter associated with SPIE Proceedings Volume 6557, including the Title Page, Copyright information, Table of Contents, and the Conference Committee listing.
This year marks the 35th anniversary of the Visually Coupled Systems symposium held at Brooks Air Force Base, San
Antonio, Texas in November of 1972. This paper uses the proceedings of the 1972 VCS symposium as a guide to
address several topics associated primarily with helmet-mounted displays, systems integration and the human-machine
interface. Specific topics addressed include monocular and binocular helmet-mounted displays (HMDs), visor
projection HMDs, color HMDs, system integration with aircraft windscreens, visual interface issues and others. In
addition, this paper also addresses a few mysteries and irritations (pet peeves) collected over the past 35+ years of experience in the display and display-related areas.
eMagin Corporation has recently developed long-life OLED-XL devices for use in its AMOLED microdisplays for head-worn applications. AMOLED displays are known to exhibit high levels of performance with regard to contrast, response time, uniformity, and viewing angle, but improved lifetime has been seen as essential for broadening the applications of OLEDs in the military and commercial markets. The new OLED-XL devices promise improvements in usable lifetime of more than 6X over the standard full-color, white, and green devices. The US Army's RDECOM CERDEC NVESD performed life tests on several standard and OLED-XL panels from eMagin under a Cooperative Research and Development Agreement (CRADA). Displays were tested at room temperature using eMagin's Design Reference Kit driver, which allows computer-controlled optimization, brightness adjustment, and manual temperature compensation. The OLED Usable Lifetime Model, developed in a previous NVESD/eMagin SPIE paper presented at DSS 2005, has been adjusted based on the findings of these tests. The result is a better understanding of the applicability of AMOLEDs in military and commercial head-mounted systems: where they are a good fit, and where further development might be needed.
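For readers unfamiliar with usable-lifetime estimates, the sketch below illustrates the general idea of extrapolating a time-to-half-luminance figure from an assumed stretched-exponential decay curve; the decay form, the parameters, and the 6X scaling of the decay constant are illustrative assumptions, not the NVESD/eMagin OLED Usable Lifetime Model.

    import numpy as np

    def luminance_fraction(t_hours, tau_hours, beta=0.7):
        # Stretched-exponential decay L(t)/L0 = exp(-(t/tau)^beta); an assumed
        # empirical form often used for OLED aging, not the NVESD/eMagin model.
        return np.exp(-(t_hours / tau_hours) ** beta)

    def usable_lifetime(tau_hours, beta=0.7, threshold=0.5):
        # Hours until luminance falls to `threshold` of its initial value (T50 by
        # default), obtained by inverting the stretched exponential analytically.
        return tau_hours * (-np.log(threshold)) ** (1.0 / beta)

    # Illustrative only: a hypothetical 6X improvement in the decay constant.
    print(f"standard T50 ~ {usable_lifetime(10_000):,.0f} h")
    print(f"OLED-XL  T50 ~ {usable_lifetime(60_000):,.0f} h")
    print(f"luminance fraction after 5,000 h (standard): {luminance_fraction(5_000, 10_000):.2f}")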
In military aviation, head tracker technologies have become increasingly important for tracking the pilot's head position and orientation, allowing the user to interact quickly with the operational environment. This technology allows the pilot to quickly acquire items of interest and see Fighter Data Link-type information. Acquiring a target with a helmet-mounted tracker/display that can automatically slew a weapon's seeker is far more efficient than pointing the nose of the aircraft at the target, as previously required for head-up display (HUD) target acquisition. The United States Air Force (USAF) has used and evaluated a variety of helmet-mounted trackers for incorporation into its high-performance aircraft. The Dynamic Tracker Test Fixture (DTTF) was designed by the Helmet-Mounted Sensory Technology (HMST) laboratory to accurately measure rotation in one plane under both static and dynamic conditions for the purpose of evaluating the accuracy of head trackers, including magnetic, inertial, and optical trackers. This paper describes the design, construction, capabilities, limitations, and performance of the DTTF.
The United States Air Force (USAF) uses and evaluates a variety of helmet-mounted trackers for incorporation into its high-performance aircraft. The primary head tracker technologies commercially available are magnetic trackers, inertial trackers, and optical trackers. Each head tracker has a unique method of determining the pilot's head position within the cockpit of the aircraft. Magnetic trackers generally have a small head-mounted size and minimal head-borne weight, but because they sense a generated magnetic field, their accuracy can be affected by other magnetic fields or ferrous components within the cockpit. Inertial trackers cover the entire head motion box but require either constant motion to accommodate drift of the inertial sensors or a secondary system that updates the inertial system, often referred to as a hybrid system. Optical head trackers (OHTs) are immune to magnetic fields, but they can have daylight and night vision goggle (NVG) compatibility issues and, depending on system configuration, may require numerous emitters and/or receivers to cover a large head motion box and provide a wide field of regard. The Dynamic Tracker Test Fixture (DTTF) was designed by the Helmet-Mounted Sensory Technology (HMST) laboratory to accurately measure azimuth rotation under both static and dynamic conditions for the purpose of determining the accuracy of a variety of head trackers. Before the DTTF could be used as an evaluation tool, it required characterization to determine the amount and location of any induced elevation or roll as the table rotated in azimuth. Optimally, the characterization method would not affect the DTTF's movement, so a non-contact method was devised. This paper describes the characterization process and its results.
The patent-pending I-PORT™ is a highly versatile, hands-free, low-profile near-eye display system. It was originally designed with the medical market in mind as a data (monocular) or surgical (binocular) head-worn display. The concept takes advantage of a technique used with surgical loupes, which are sometimes mounted "into" eyeglasses. The I-PORT™ display module is similarly mounted onto or into the spectacle lens of protective eyewear or sunglasses.

The I-PORT™ is capable of various fields of view and resolutions while remaining low profile and providing minimal obscuration. It is an ideal remote viewer for medical, military, and commercial equipment. Our system is capable of producing fields of view greater than 50 degrees in full color and can incorporate either organic light-emitting diode (OLED) or active-matrix liquid crystal display (AMLCD) image sources of various resolutions.
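As a rough illustration of the field-of-view versus focal-length trade in this class of near-eye display, the sketch below applies the first-order magnifier relation FOV = 2*atan(w / 2f) to a hypothetical microdisplay width over a range of eyepiece focal lengths; the numbers are assumptions for illustration only and do not describe the I-PORT optical design.

    import numpy as np

    def eyepiece_fov_deg(active_width_mm, focal_length_mm):
        # Full field of view of a simple collimating eyepiece viewing a microdisplay:
        # FOV = 2 * atan(w / (2 f)). A first-order relation for an idealized magnifier;
        # real designs deviate from it.
        return np.degrees(2 * np.arctan(active_width_mm / (2 * focal_length_mm)))

    # Hypothetical numbers only: a microdisplay with a 12 mm active width.
    for f in (10, 15, 20):
        print(f"focal length {f} mm -> FOV {eyepiece_fov_deg(12.0, f):.1f} deg")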
New underwater computer systems have the potential to provide military divers in operational scenarios with the
processing power of laptop or desktop computers. While this computing capability is greatly advancing, heads-up
displays (HUDs) currently integrated into dive masks are capable of presenting only limited amounts of operational data.
Diver situational awareness can be greatly improved by providing richer imagery for accessing and utilizing all of the processing power in these next-generation dive computers. In an effort to improve operational efficiency in diver scenarios by providing an enhanced display, the Naval Research Lab leveraged technologies developed for the Immersive Input Display Device (I2D2) in the development of the Integrated Diver Display Device (ID3). The ID3 combines an organic light-emitting diode (OLED) micro-display with a magnifying optic to provide a full-color SVGA solution within the dive mask without dramatically impacting the diver's line of sight (LOS). By not obstructing LOS, the diver
maintains his forward vision and environmental awareness while gaining access to critical situational awareness data.
This paper will examine the development and capabilities of the ID3 for dive applications.
US Navy and Marine Corps pilots receive Night Vision Goggle (NVG) training as part of their overall training to maintain the superiority of our forces. This training must incorporate realistic targets, backgrounds, and representative atmospheric and weather effects they may encounter under operational conditions. One approach to pilot NVG training is the Night Imaging and Threat Evaluation Laboratory (NITE Lab) concept. The NITE Labs utilize a 10' by 10' static terrain model equipped with both natural and cultural lighting, used to demonstrate the various illumination conditions and visual phenomena that might be experienced when using night vision goggles. With this technology, the military can safely, systematically, and reliably expose pilots to the large number of potentially dangerous environmental conditions that will be experienced in their NVG training flights.

A previous SPIE presentation described our work for NAVAIR to add realistic atmospheric and weather effects to the NVG NITE Lab training facility using the NVG-WDT (Weather Depiction Technology) system (Colby et al.). NVG-WDT consists of a high-end multiprocessor server with weather simulation software and several fixed and goggle-mounted head-up displays (HUDs). Atmospheric and weather effects are simulated using state-of-the-art computer codes such as the WRF (Weather Research and Forecasting) model and the US Air Force Research Laboratory's MODTRAN radiative transfer model. Imagery for a variety of natural and man-made obscurations (e.g., rain, clouds, snow, dust, smoke, chemical releases) is being calculated and injected into the scene observed through the NVG via the fixed and goggle-mounted HUDs. This paper expands on the work described in the previous presentation and describes the 3D Virtual/Augmented Reality Scene-Weather-Atmosphere-Target Simulation part of NVG-WDT. The 3D virtual reality software is a complete simulation system that generates realistic target-background scenes and displays the results in a DirectX environment.

This paper will describe our approach and show a brief demonstration of the software capabilities. The work is supported by the SBIR program under contract N61339-06-C-0113.
As the Army increases its reliance upon helmet-mounted displays (HMDs), it is paramount that HMDs are developed to meet the operational needs of the warfighter. In see-through HMDs, symbology is overlaid on, or added to, the see-through background. For the symbology to be seen and understood, it must have sufficient contrast to stand out from the background and be clearly recognized. In an earlier paper, we showed that the quality of see-through symbology was greatly influenced by the complexity of natural backgrounds. Complexity was characterized by the standard deviation of small patches (patches subtending about 1.5°). A better assessment of scene complexity as a function of local contrast is required for the development of HMD luminance specifications. Here we evaluate the small-patch luminance and complexity of natural scenes in an attempt to quantify the luminance requirements for HMDs.
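The sketch below illustrates the kind of patch-wise luminance statistic described above: it tiles a luminance image into small square patches, returns each patch's mean and standard deviation, and derives a symbol luminance from a simple Weber-style contrast criterion. The patch size in pixels, the random stand-in scene, and the 0.2 contrast criterion are assumptions for illustration, not the authors' procedure.

    import numpy as np

    def patch_statistics(luminance_img, patch_px):
        # Mean and standard deviation of luminance over non-overlapping square patches.
        # Choosing patch_px so a patch subtends ~1.5 deg requires the sensor's angular
        # resolution, which is assumed known here.
        h, w = luminance_img.shape
        h_trim, w_trim = h - h % patch_px, w - w % patch_px
        patches = (luminance_img[:h_trim, :w_trim]
                   .reshape(h_trim // patch_px, patch_px, w_trim // patch_px, patch_px)
                   .swapaxes(1, 2))
        return patches.mean(axis=(2, 3)), patches.std(axis=(2, 3))

    def required_symbol_luminance(patch_mean, weber_contrast=0.2):
        # Added symbol luminance needed to reach an assumed Weber-style contrast
        # (delta-L / L_background) against a given background patch.
        return weber_contrast * patch_mean

    rng = np.random.default_rng(0)
    scene = rng.lognormal(mean=5.0, sigma=1.0, size=(480, 640))  # stand-in luminance map (cd/m^2)
    means, stds = patch_statistics(scene, patch_px=32)
    print(f"patch means: {means.min():.0f}-{means.max():.0f} cd/m^2, "
          f"median patch std: {np.median(stds):.0f}")
    print(f"symbol luminance needed over the brightest patch: "
          f"{required_symbol_luminance(means.max()):.0f} cd/m^2")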
Previous studies have shown that helmet-mounted displays (HMDs) are advantageous in maintaining situation awareness and increasing the amount of time pilots spend looking off-boresight (Geiselman & Osgood, 1994; Geiselman & Osgood, 1995). However, space is limited on an HMD, and any symbology that is presented takes up valuable space and can occlude a pilot's vision. There has been much research on visual cueing and visual search as they relate to seeking out visual targets in the sky. However, localized auditory cueing, as it could apply to air-to-air targeting, is an area less studied. One question is how information can be presented so that a pilot's attention is directed to the object of interest most quickly. Several types of target-location cueing symbology have been studied to identify the aspects of symbology that most aid a pilot in acquiring a target. The purpose of this study is to determine the best method of cueing a person to visual targets in the shortest amount of time possible using auditory and visual cues in combination. Specifically, participants were presented with different combinations of reflected line cues, standard line cues, and localized auditory cues for primary and secondary targets. The cues were presented using an HMD and 3-D auditory headphones, with a magnetic head tracker used to determine when the participant had visually acquired the targets. The possible benefits of these cues, based on the times to acquire, are discussed.
Helmet-mounted displays (HMDs) offer the distinct advantage of wearable computing while accomplishing a variety of physical tasks, such as piloting an aircraft, navigating difficult or unfamiliar terrain, performing surgery, etc. However, problems can arise involving the HMD eyepieces, such that they may either block portions of the far visual field, draw attention away from it, or both. In the present experiments, placement of a monocular HMD eyepiece in the visual field was manipulated to examine its effects on dynamic visual search performance in the far-field environment. In Experiment 1, a pre-attentive task was presented on the HMD to investigate possible dual-task decrements. In Experiment 2, either an endogenous (arrow) or exogenous (circle) cue was presented on the HMD to guide visual search to the location of the target. The results from Experiment 1 show that only one of three participants was able to perform the pre-attentive task on the HMD without harming primary task performance and that for only this participant, eyepiece placement altered dual-task performance. The results from Experiment 2 show that both endogenous and exogenous visual search cues were effective at reducing response times in both eyepiece positions.
The perceptual and performance effects of viewing HMD sensor-offset video were investigated in a series of small
studies and demonstrations. A sensor-offset simulator was developed with three sensor positions relative to left-eye
viewing: inline and forward, temporal and level, and high and centered. Several manual tasks were used to test the effect
of sensor offset: card sorting, blind pointing and open-eye pointing. An obstacle course task was also used, followed by a
more careful look at avoiding specific obstacles. Once the arm and hand were within the sensor field of view, the user
demonstrated the ability to readily move to the target regardless of the sensor offset. A model of sensor offset was
developed to account for these results.
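As a purely geometric illustration of why sensor offset matters most at close range, the sketch below computes the angular difference between the true eye-relative direction to a point and the direction in which an offset sensor images it (the direction the display then presents). The 7 cm temporal offset and the distances are hypothetical values, and this is not the model developed in the paper.

    import numpy as np

    def apparent_shift_deg(point_xyz, sensor_offset_xyz):
        # Angle between the eye's true direction to a point and the offset
        # sensor's direction to the same point. Purely geometric sketch.
        p = np.asarray(point_xyz, dtype=float)
        q = p - np.asarray(sensor_offset_xyz, dtype=float)
        cos = np.dot(p, q) / (np.linalg.norm(p) * np.linalg.norm(q))
        return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

    offset_temporal = (0.07, 0.0, 0.0)   # hypothetical 7 cm temporal offset (x = right)
    for dist in (0.4, 1.0, 3.0):
        shift = apparent_shift_deg((0.0, 0.0, dist), offset_temporal)  # target ahead, z = forward
        print(f"target at {dist:.1f} m: apparent direction error ~ {shift:.1f} deg")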
A number of currently proposed helmet-mounted display (HMD) designs relocate the image intensification (I2) tubes to the sides of the helmet. Such a design approach induces a visual condition referred to as hyperstereo vision (or hyperstereopsis). This condition manifests itself to the user as an exaggerated sense of depth perception, causing near- to mid-range objects to appear closer than they actually are. Hyperstereopsis is potentially a major concern for helicopter operations conducted at low altitudes. As part of a limited flight study to investigate this phenomenon, five rated U.S. Army aviators, serving as technical observers, wore a hyperstereo HMD during a series of 13 standard maneuvers. Two subject aviators each acquired a total of eight hours of flight, and the other three aviators a single hour. Using a post-flight questionnaire, these aviators were asked to compare their visual experiences to those of normal I2-aided flight. Depth perception at distances below 300 feet was identified as the greatest challenge. The two 8-hour aviators reported a 5-8 hour "adaptation" period for most maneuvers.
Modern helmet-mounted night vision devices, such as the Thales TopOwl helmet, project imagery from intensifiers
mounted on the side of the helmet onto the helmet faceplate. The increased separation of the cameras induces
hyperstereopsis - the exaggeration of the stereoscopic disparities that support the perception of relative depth around the
point of fixation. Increased camera separation may also affect absolute depth perception, because it increases the amount
of vergence (crossing) of the eyes required for binocular fusion, and because the differential perspective from the
viewpoints of the two eyes is increased. The effect of hyperstereopsis on the perception of absolute distance was
investigated using a large-scale stereoscopic display system. A fronto-parallel textured surface was projected at a
distance of 6 metres. Three stereoscopic viewing conditions were simulated: hyperstereopsis (four times magnification), normal stereopsis, and hypostereopsis (one-quarter magnification). The apparent distance of the surface was measured relative to a grid placed in a virtual "leaf room" that provided rich monocular cues to absolute distance, such as texture gradients and linear perspective, as well as veridical stereoscopic disparity cues. The different stereoscopic viewing conditions had no differential effect on the apparent distance of the textured surface at this viewing distance.
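The sketch below illustrates the binocular geometry being manipulated: it computes the vergence angle needed to fixate the surface and the relative disparity of a nearer point for hypo-, normal, and hyper-stereoscopic baselines, assuming a 65 mm inter-pupillary distance and simple two-viewpoint geometry. It is an illustrative calculation, not a description of the experimental apparatus.

    import numpy as np

    def vergence_deg(baseline_m, distance_m):
        # Vergence angle (deg) required to fixate a point at distance_m for an
        # effective inter-camera / inter-ocular baseline baseline_m.
        return np.degrees(2 * np.arctan(baseline_m / (2 * distance_m)))

    def relative_disparity_deg(baseline_m, d_fix_m, d_obj_m):
        # Relative disparity (deg) between an object and the fixation point.
        return vergence_deg(baseline_m, d_obj_m) - vergence_deg(baseline_m, d_fix_m)

    ipd = 0.065   # assumed 65 mm inter-pupillary distance
    for gain, label in [(0.25, "hypo"), (1.0, "normal"), (4.0, "hyper")]:
        b = gain * ipd
        print(f"{label:>6}: vergence at 6 m = {vergence_deg(b, 6.0):.2f} deg, "
              f"disparity of a point at 5 m vs 6 m fixation = "
              f"{relative_disparity_deg(b, 6.0, 5.0):.3f} deg")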
Modern helmet-mounted night vision devices, such as the Thales TopOwl helmet, project imagery from intensifiers
mounted on the sides of the helmet onto the helmet faceplate. This produces a situation of hyperstereopsis in which
binocular disparities are magnified. This has the potential to distort the perception of slope in depth (an important cue to
landing), because the slope cue provided by binocular disparity conflicts with veridical cues to slope, such as texture
gradients and motion parallax. In the experiments, eight observers viewed sparse and dense textured surfaces tilted in depth under three viewing conditions: normal stereo, hyperstereo (4 times magnification), and hypostereo (1/4 magnification). The surfaces were either stationary or rotated slowly around a central vertical axis. Stimuli were
projected at 6 metres to minimise conflict between accommodation and convergence, and stereo viewing was provided
by a Z-screen and passive polarised glasses. Observers matched perceived visual slope using a small tilt table set by
hand. We found that slope estimates were distorted by hyperstereopsis, but to a much lesser degree than predicted by
disparity magnification. The distortion was almost completely eliminated when motion parallax was present.
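For the disparity-magnification prediction mentioned above, a common first-order approximation is that depth-from-disparity scales with the disparity gain while lateral extent is unchanged, so a surface slanted by angle s from frontoparallel should appear slanted by atan(gain * tan s). The sketch below evaluates that prediction; it is a simplifying geometric assumption stated for illustration, not the authors' analysis.

    import numpy as np

    def predicted_slope_deg(physical_slope_deg, disparity_gain):
        # First-order prediction: depth-from-disparity scales with the gain,
        # lateral extent does not, so tan(perceived) = gain * tan(physical).
        return np.degrees(np.arctan(disparity_gain *
                                    np.tan(np.radians(physical_slope_deg))))

    for slope in (10, 20, 40):
        print(f"physical {slope:2d} deg -> hyper (4x): "
              f"{predicted_slope_deg(slope, 4.0):.1f} deg, "
              f"hypo (1/4x): {predicted_slope_deg(slope, 0.25):.1f} deg")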
The timely detection of terrain drop-offs is critical for safe and efficient off-road mobility, whether with human drivers
or with terrain navigation systems that use autonomous machine-vision. In this paper, we propose a joint tracking and
detection machine-vision approach for accurate and efficient terrain drop-off detection and localization. We formulate
the problem using a hyperstereo camera system and build an elevation map using the range map obtained from a stereo
algorithm. A terrain drop-off is then detected with the use of optimal drop-off detection filters applied to the range
map. For more robust results, a method based on multi-frame fusion of terrain drop-off evidence is proposed. Also
presented is a fast, direct method that does not employ stereo disparity mapping. We compared our algorithm's
detection of terrain drop-offs with time-code data from human observers viewing the same video clips in stereoscopic
3D. The algorithm detected terrain drop-offs an average of 9 seconds sooner, or 12m farther, than the human
observers. This suggests that passive image-based hyperstereo machine-vision may be useful as an early warning
system for off-road mobility.
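The sketch below illustrates the general elevation-map approach in simplified form: it flags cells where the terrain falls away sharply in the look direction using a plain forward difference, and keeps only detections that persist across several frames. The drop threshold, grid layout, and voting rule are illustrative assumptions; the paper's optimal drop-off detection filters and fusion method are not reproduced here.

    import numpy as np

    def detect_drop_offs(elevation_m, min_drop_m=1.0):
        # Flag cells where elevation falls sharply in the forward (row) direction,
        # using a plain forward difference rather than the paper's optimal filters.
        step = elevation_m[1:, :] - elevation_m[:-1, :]
        drop = np.zeros(elevation_m.shape, dtype=bool)
        drop[:-1, :] = step < -min_drop_m
        return drop

    def fuse_evidence(evidence_maps, min_votes=3):
        # Multi-frame fusion: keep a detection only if it persists across frames
        # (assumes the per-frame maps are already registered to a common grid).
        votes = np.sum(np.stack(evidence_maps), axis=0)
        return votes >= min_votes

    # Toy example: flat ground with a 2 m drop 20 cells ahead (0.5 m cells assumed).
    elev = np.zeros((40, 30))
    elev[20:, :] -= 2.0
    fused = fuse_evidence([detect_drop_offs(elev)] * 5)
    ranges_m = np.unique(np.where(fused)[0]) * 0.5
    print("drop-off edge detected at range(s):", ranges_m, "m")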
The design and manufacture of wide-angle optical systems for helmet-mounted displays are considered. Specific requirements are presented and analyzed, including minimization of weight and size, achievement of a large field of view, and convenient placement on the helmet. The key element of an HMD is the combiner, and issues in designing spectral and polarizing combiners are considered. We propose using synthesized volume holograms as a spectral combiner. The optical and operational properties of holographic optical elements based on volume holograms and synthesized holograms were analyzed and compared. Color distortions of the external scene viewed through the combiner were investigated. Problems of the optimum design of the illumination system for LCOS panels based on high-power LEDs are considered, as well as issues in the synthesis and optimization of the relay lens. Results of the design, analysis, and testing of optical systems for HMDs are presented.
Due to significantly increased U.S. military involvement in deterrent, observer, security, peacekeeping, and combat roles around the world, the military expects significant future growth in the demand for deployable virtual reality trainers with networked simulation capability for battle space visualization.

The use of HMD technology in simulated virtual environments has been driven by the demand for more effective training tools. The AHMD overlays computer-generated data (symbology, synthetic imagery, enhanced imagery) onto the actual and simulated visible environment. The AHMD can be used to support deployable, reconfigurable training solutions as well as traditional simulation requirements, UAV augmented reality, air traffic control, and Command, Control, Communications, Computers, Intelligence, Surveillance, and Reconnaissance (C4ISR) applications. This paper will describe the design improvements implemented for production of the AHMD System.
The side mounting of the night-vision sensors on some helmet-mounted systems creates a situation of hyperstereopsis in
which the binocular cues available to the operator are exaggerated such that distances around the point of fixation are
increased. For a moving surface approaching the observer, the increased apparent distance created by hyperstereopsis
should result in greater apparent speed of approach towards the surface and so an operator will have the impression they
have reached the surface before contact actually occurs. We simulated motion towards a surface with hyperstereopsis and compared judgements of time to contact with those under normal stereopsis as well as under binocular viewing without stereopsis. We simulated approach toward a large, random-field textured surface and found that time-to-contact estimates were shorter under the hyperstereoscopic condition than under normal stereo and no stereo, indicating that hyperstereopsis may cause observers to underestimate time to contact, leading operators to undershoot the ground plane when landing.
The image quality of night vision goggles is often expressed in terms of visual acuity, resolution or modulation transfer function. The primary reason for providing a measure of image quality is the underlying assumption that the image quality metric correlates with the level of visual performance that one could expect when using the device, for example, target detection or target recognition performance. This paper provides a theoretical analysis of the relationships between these three image quality metrics: visual acuity, resolution and modulation transfer function. Results from laboratory and field studies were used to relate these metrics to visual performance. These results can also be applied to non-image intensifier based imaging systems such as a helmet-mounted display coupled to an imaging sensor.
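The sketch below illustrates one common way the three metrics are linked: limiting resolution is read off as the spatial frequency where the MTF drops to a small threshold, and that frequency is converted to Snellen notation using the 30 cycles/deg to 20/20 correspondence. The Gaussian MTF curve and the 3% threshold are illustrative assumptions, not the paper's analysis.

    import numpy as np

    def snellen_from_cycles_per_degree(f_cpd):
        # Convert a limiting spatial frequency (cycles/deg) to Snellen 20/x notation;
        # 20/20 vision corresponds to resolving ~30 cycles/deg (1 arcmin per line).
        return 20.0 * 30.0 / f_cpd

    def limiting_frequency(mtf, freqs_cpd, threshold=0.03):
        # Spatial frequency at which the MTF falls to `threshold` (a common, though
        # not universal, definition of limiting resolution).
        below = np.where(mtf < threshold)[0]
        return freqs_cpd[below[0]] if below.size else freqs_cpd[-1]

    # Illustrative MTF: a Gaussian roll-off standing in for a measured NVG/HMD curve.
    freqs = np.linspace(0.1, 60, 600)          # cycles/degree
    mtf = np.exp(-(freqs / 18.0) ** 2)
    f_lim = limiting_frequency(mtf, freqs)
    print(f"limiting resolution ~ {f_lim:.1f} cyc/deg -> "
          f"Snellen 20/{snellen_from_cycles_per_degree(f_lim):.0f}")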
Standard black and white printed targets have been used for numerous vision related experiments, and are ideal with
respect to contrast and spectral uniformity in the visible and near-infrared (NIR) regions of the electromagnetic (EM)
spectrum. However, these targets lack the ability to refresh, update, or perform as a real-time, dynamic stimulus. This
impacts their ability to be used in various standard vision performance measurement techniques. Emissive displays, such as LCDs, possess some of the attributes printed targets lack, but come with a disadvantage of their own: LCDs lack the spectral uniformity of printed targets, making them of debatable value for presenting test targets in the near and short
wave infrared regions of the spectrum. Yet a new option has recently become viable that may retain favorable attributes
of both of the previously mentioned alternatives. The electrophoretic ink display is a dynamic, refreshable, and easily
manipulated display that performs much like printed targets with respect to spectral uniformity. This paper will compare
and contrast the various techniques that can be used to measure observer visual performance through night vision devices
and imagers, focusing on the visible to infrared region of the EM spectrum. Furthermore, it will quantify the electrophoretic ink display option, determining its advantages and the situations for which it is best suited.
The Georgia Tech Research Institute is currently developing a device to demonstrate a hands-free focus technology for
head-mounted night vision sensors. The demonstrator device will integrate a computational imaging technique that
increases depth of field with a digital night vision sensor. The goal of the demonstrator is to serve as a test bed for
evaluating the critical performance/operational parameters necessary for the hands-free focus technology to support
future tactical night vision concepts of operation. This paper will provide an overview of the technology studies and
design analyses that have been performed to date as well as the current state of the demonstrator design.
Expected temporal effects in a night vision goggle (NVG) include the fluorescence time constant, charge depletion at high signal levels, the response time of the automatic gain control (AGC), and other internal modulations in the NVG. There is also the possibility of physical damage or other non-reversible effects in response to large transient signals. To study the temporal behaviour of an NVG, a parametric Matlab model has been created. Of particular interest in the present work was the variation of NVG gain, induced by the AGC, after a short, intense pulse of light. To verify the model, the reduction of gain after a strong pulse was investigated experimentally using a simple technique. Preliminary laboratory measurements were performed using this technique. The experimental methodology is described, along with preliminary validation data.
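As an illustration of the kind of behaviour such a model captures, the sketch below implements a minimal first-order AGC: the gain relaxes toward whatever value would hold the mean output at a setpoint, so a short, intense pulse drives the gain down, after which it recovers with the assumed time constant. The dynamics and parameter values are assumptions for illustration, not the parametric Matlab model described in the paper.

    import numpy as np

    def simulate_agc(scene_intensity, dt_s, tau_s=0.2, setpoint=1.0, g_max=10.0):
        # Minimal first-order AGC sketch (assumed dynamics): the gain relaxes toward
        # the value that would bring the output back to `setpoint`, with time
        # constant tau_s, and is clipped at g_max.
        gain = np.empty_like(scene_intensity)
        g = g_max
        for i, x in enumerate(scene_intensity):
            g_target = min(g_max, setpoint / max(x, 1e-9))
            g += (dt_s / tau_s) * (g_target - g)   # first-order relaxation (Euler step)
            g = min(g, g_max)
            gain[i] = g
        return gain

    # A dim background with a short, intense pulse at t = 1 s.
    dt = 1e-3
    t = np.arange(0.0, 3.0, dt)
    scene = np.full_like(t, 0.05)
    scene[(t >= 1.0) & (t < 1.05)] = 50.0          # 50 ms bright pulse
    gain = simulate_agc(scene, dt)
    print(f"gain before pulse: {gain[int(0.9 / dt)]:.2f}, "
          f"50 ms after the pulse: {gain[int(1.1 / dt)]:.2f}, "
          f"1 s later: {gain[int(2.1 / dt)]:.2f}")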
Night vision devices (NVDs) or night-vision goggles (NVGs) based on image intensifiers improve nighttime visibility
and extend night operations for military and increasingly civil aviation. However, NVG imagery is not equivalent to
daytime vision and impaired depth and motion perception has been noted. One potential cause of impaired perceptions
of space and environmental layout is NVG halo, where bright light sources appear to be surrounded by a disc-like halo.
In this study we measured the characteristics of NVG halo psychophysically and objectively and then evaluated the
influence of halo on perceived environmental layout in a simulation experiment. Halos are generated in the device and
are not directly related to the spatial layout of the scene. We found that, when visible, halo image (i.e., angular) size was only weakly dependent on both source intensity and distance, although halo intensity did vary with effective source intensity. The size of the halo images surrounding light sources is independent of the source distance and thus does not obey the normal laws of perspective. In simulation experiments we investigated the effect of NVG halo on judgements of
observer attitude with respect to the ground during simulated flight. We discuss the results in terms of NVG design and
of the ability of human operators to compensate for perceptual distortions.
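The perspective point can be illustrated with simple geometry: a real object's angular size shrinks roughly as 1/distance, whereas a halo of fixed angular size does not, so interpreting the halo as a real object implies a physical size that grows with distance. The halo diameter and light-fixture size in the sketch below are assumed values for illustration only.

    import numpy as np

    def angular_size_deg(physical_size_m, distance_m):
        # Visual angle subtended by an object of a given physical size and distance.
        return np.degrees(2 * np.arctan(physical_size_m / (2 * distance_m)))

    halo_deg = 1.5              # assumed fixed halo diameter in the intensified image
    light_diameter_m = 0.2      # a 20 cm light fixture, for comparison
    for d in (10, 50, 200):
        implied_m = 2 * d * np.tan(np.radians(halo_deg / 2))  # real size a halo would imply
        print(f"at {d:>3} m: light subtends {angular_size_deg(light_diameter_m, d):.3f} deg, "
              f"halo stays {halo_deg} deg (a real object that size would be {implied_m:.1f} m across)")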
The increasing use of lasers on the modern battlefield may necessitate the wearing of laser eye protection devices (LEPDs) by warfighters. Unfortunately, LEPDs that protect against visible laser wavelengths often reduce overall light transmittance, and a wearer's vision can be degraded, especially in low-light conditions. Wearing night vision goggles (NVGs) provides laser eye protection behind the goggles, but NVGs do not block lasers that might enter the eye around the NVGs. Therefore, LEPDs will be worn under NVGs. People wearing NVGs look below the NVGs to read displays and for other near-vision tasks. This effort involved determining the effects of wearing variable-density filters on vision in low-light conditions, with and without the presence of a simulated head-down display (HDD). Each subject's visual acuity was measured under moonlight illumination levels while wearing neutral density filters and LEPDs. Similar measurements of the subjects' visual detection thresholds, both on- and off-axis, were made. Finally, the effects of wearing variable-density filters on visual acuity on the HDD were determined. Wearing variable-density filters in low-light conditions reduces visual acuity and detection. The presence of the HDD slightly reduced acuity through variable-density filters, but the HDD had no effect on on-axis detection and actually improved off-axis detection. The reasons for this final finding are unclear.
In the first two decades of fielding of the monocular helmet-mounted display (HMD) flown in the U.S. Army's AH-64 Apache attack helicopter, a number of studies reported a significant incidence of physical visual symptoms and illusions. In 2004, a similar study looked at the presence of these complaints during the first combat phase of Operation Iraqi Freedom (OIF). The study found a general trend of reduced frequency of complaints associated with the AH-64's HMD. A follow-up study has been conducted to validate this downward trend and to investigate the impact of the shift in the AH-64's mission role from open-field tank hunter to close-quarter urban combat. Thirty-eight AH-64D pilots were asked to complete a survey questionnaire that solicited data about the presence and frequency of the visual complaints reported in previous studies. Data for physical visual symptoms and for static and dynamic illusions were found not to be significantly different from the frequencies reported in the previous OIF study. Both OIF studies reported headache as the most prominent physical complaint, with height judgment and slope estimation as the most frequently reported static illusions, and with undetected drift and faulty closure judgment as the two most frequently reported dynamic illusions.
The development of an advanced ground soldier's integrated headgear system for the Army's Future Force Warrior
Program passed a major milestone during 2006. Field testing of functional headgear systems by small combat units
demonstrated that the headgear capabilities were mature enough to move beyond the advanced technology
demonstration (ATD) phase. This paper will describe the final system with test results from the three field exercises and
will address the strengths and weaknesses of the headgear system features, head mounted sensors, displays and sensor
fusion.
Previous research has demonstrated that a head-up display (HUD) can be used to enable higher-capacity and safer aircraft surface operations. This previous research also noted that the HUD exhibited two major limitations which hindered the full potential of the display concept: 1) the monochrome HUD format; and 2) a limited, fixed field of regard. Full-color head-worn displays (HWDs) with very small size and weight are emerging to the extent that this technology may be practical for commercial and business aircraft operations. By coupling the HWD with a head tracker, full-color, out-the-window display concepts with an unlimited field of regard may be realized to improve efficiency and safety in surface operations. A ground simulation experiment was conducted at NASA Langley to evaluate the efficacy of head-worn display applications that may directly address the limitations of the HUD while retaining all of its advantages in surface operations. The simulation experiment used airline crews to evaluate various displays (HUD, HWD) and display concepts in an operationally realistic environment using a Chicago O'Hare airport database. The results pertaining to the implications of HWDs for commercial, business, and transport aviation applications are presented herein. Overall HWD system latency was measured and found to be acceptable, but not necessarily optimal. A few occurrences of simulator sickness were noted while wearing the HWD, but overall the concept appears acceptable and usable to commercial pilots. Many issues were identified which need to be addressed in future research, including continued reduction in user encumbrance due to the HWD and improvement in image alignment, accuracy, and boresighting.
Helicopters are widely used in daytime forest fire suppression, conducting diverse tasks such as spotting, re-supply,
medical evacuation and airborne delivery. However, they are not used at night for forest fire suppression operations.
There would be many challenges when operating in the vicinity of forest fires at night, including scene obscuration from
smoke and dynamic changes in lighting conditions. There is little data on the use of Night Vision Goggles (NVGs) for
airborne forest fire suppression. The National Research Council of Canada (NRC), in collaboration with the Ontario
Ministry of Natural Resources (OMNR), performed a preliminary flight test to examine the use of NVGs while operating
near forest fires. The study also simulated limited aspects of nighttime water bucketing. The preliminary observations from this study suggest that NVGs have the potential to improve the safety and efficiency of airborne forest fire suppression, including forest fire perimeter mapping and take-off and landing in the vicinity of open fires. NVG operations at some distance from the fire pose minimal risk to flight and provide an enhanced capability to identify areas of combustion at greater distance and with greater accuracy. Closer to the fire, NVG flight becomes more risk intensive because of reduced visibility, attributable to the adverse effects of the fire's intense radiation and smoke on NVG performance. The preliminary results of this study suggest that water bucketing at night is a difficult operation with
elevated risk. Further research is necessary to clarify the operational limitations and implementation of these devices in
forest fire suppression.
The use of Helmet-Mounted Display/Trackers (HMD/Ts) is becoming widespread for air-to-air, within-visual-range target acquisition by tactical fighter pilots. HMD/Ts provide the aircrew with a significant amount of information on the helmet, which relieves the aircrew of having to continually look down into the cockpit to receive information. HMD/Ts allow the aircrew to receive flight and targeting information regardless of line of sight, which should increase the aircrew's situation awareness and mission effectiveness. Current technology requires that a pilot wearing a Helmet-Mounted Display/Tracker be connected to the aircraft with a cable. The design of this cable is complex and costly, and its use can decrease system reliability. Most of the problems associated with the cable can be alleviated by using wireless transmission for all signals. This would significantly reduce or eliminate the requirements for the interconnect cable/connector, reducing system complexity and cost and enhancing system safety. A number of wireless communication technologies are discussed in this paper, along with the rationale for selecting one particular technology for this application. The problems with this implementation and the direction of future effort are outlined.