Night Vision Imaging Systems technology is advancing at a rapid pace. These advances can be broadly divided into two
distinct categories: performance and data management. There is an encouraging trend towards higher sensitivity, better
resolution, and lower power consumption. These improvements, coupled with the shift from analog to digital data
output, promise to deliver a powerful night vision device. Given a digital system, the data can be managed to enhance
the pilot’s view (image processing), overlay data from multiple sensors (image fusion), and send data to remote locations
for analysis (image sharing).
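As an illustration of the image-fusion step mentioned above, the simplest approach is a pixel-wise weighted blend of two co-registered sensor frames. The sketch below is a minimal NumPy example under that assumption; the frame values and the 70/30 weighting are hypothetical, and it is not the fusion method of any particular fielded system.

```python
import numpy as np

def fuse_frames(frame_a, frame_b, weight_a=0.5):
    """Blend two co-registered sensor frames (e.g., NIR and SWIR) into one
    image by pixel-wise weighted averaging -- the simplest form of image
    fusion. Frames are float arrays scaled to [0, 1]."""
    if frame_a.shape != frame_b.shape:
        raise ValueError("frames must be co-registered and equal in size")
    return weight_a * frame_a + (1.0 - weight_a) * frame_b

# Example: fuse two synthetic 4x4 frames with a 70/30 weighting.
nir = np.full((4, 4), 0.8)
swir = np.full((4, 4), 0.2)
fused = fuse_frames(nir, swir, weight_a=0.7)
```

Real fusion pipelines weight per pixel (e.g., by local contrast) rather than globally, but the data flow is the same.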
The US Air Force Research Laboratory (AFRL) has an active program to introduce a helmet mounted digital imaging
system that extends the detection range from the near infrared (NIR) band to the short-wave infrared (SWIR) band.
Aside from the digital output, the motivation to develop a SWIR imaging system includes the desire to exploit the
SWIR ambient night-glow spectrum, see through some levels of fog and haze, and use a robust sensor technology
suitable for 24-hour-per-day imaging.
Integrating this advanced SWIR imaging system into a cockpit presents some human factors issues. Light emitted from
illuminated instruments may hinder the performance of the imaging system, reducing the pilot’s ability to detect low-visibility
objects at night. The transmission of light through cockpit transparencies and through the atmosphere may also
impact performance. In this paper we propose a model that establishes cockpit lighting SWIR radiance limits, much as
MIL-STD-3009 specifies NVIS radiance limits for NVGs. This model is the culmination of a two-year program
sponsored by AFRL.
This document presents the study and testing carried out in developing an innovative algorithm designed to create
a panoramic representation of the scene scanned by observation systems operating with passive sensors.
The purpose of the algorithm is to represent 360° of scene using staring sensors mounted on stabilized or semi-stabilized platforms, with no constraints on the video output in terms of either transmission format or frame rate. The algorithm runs in real time and requires neither a step-and-stare technique nor special scene-scanning systems. Its architecture imposes a very low computational cost on the electronics contained in a Multi-Functional Display (MFD) used in defense applications. The algorithm has been implemented and tested on the JANUS NAVAL system, where the results were very satisfactory. A patent is currently pending.
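The patented algorithm itself is not disclosed in the abstract, but the general idea of building a 360° strip from a staring sensor on a rotating platform can be sketched as azimuth-indexed pasting: each frame lands in the panorama at the column given by its boresight azimuth. The example below is purely illustrative (array sizes and the one-degree-per-column mapping are assumptions) and omits the blending and stabilization a real system needs.

```python
import numpy as np

def paste_frame(panorama, frame, azimuth_deg):
    """Map a staring-sensor frame into a 360-degree panorama strip using the
    platform's azimuth reading. The frame is pasted at the column offset
    corresponding to its boresight azimuth; no step-and-stare scan pattern
    is required, only a frame plus its pointing angle."""
    pano_h, pano_w = panorama.shape
    frame_h, frame_w = frame.shape
    # Column where the frame's left edge lands (here: 1 column per degree).
    col = int((azimuth_deg % 360.0) / 360.0 * pano_w)
    for dx in range(frame_w):
        # Wrap columns past the right edge back to column 0 (the 0-deg seam).
        panorama[:frame_h, (col + dx) % pano_w] = frame[:, dx]
    return panorama

pano = np.zeros((8, 360))                 # 360-column panorama strip
frame = np.ones((8, 30))                  # one 30-deg-wide sensor frame
paste_frame(pano, frame, azimuth_deg=350.0)  # wraps across the 0-deg seam
```

Per-frame cost is a bounded memory copy, which is consistent with the low computational budget the abstract claims for display-resident electronics.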
In this paper, we introduce a user interface called the “Threat Chip Display” (TCD) for rapid human-in-the-loop
analysis and detection of “threats” in high-bandwidth imagery and video from a list of “Items of Interest” (IOI), which
includes objects, targets and events that the human is interested in detecting and identifying. Typically, some front-end
algorithm (e.g., computer vision, a cognitive algorithm, EEG RSVP-based detection, radar detection) has been applied to
the video and has pre-processed and identified a potential list of IOI. The goal of the TCD is to facilitate rapid analysis
and triaging of this list of IOI to detect and confirm actual threats. The layout of the TCD is designed for ease of use, fast
triage of IOI, and a low bandwidth requirement. Additionally, the very low mental demand allows the system to be operated
for extended periods of time.
Recent years have seen a rise in sophisticated navigational positioning techniques. Starting from classical GPS,
differential GPS, ground-based augmentation, and raw data submission have opened possibilities for high-precision
lateral positioning far beyond what was thinkable before. This yields new perspectives for technologies like
ACAS/TCAS, by enabling last-minute lateral avoidance as a supplement to the established vertical avoidance.
Working together with Ohio University’s Avionics Department, DLR has developed and tested a set of displays for
situational awareness and lateral last-minute avoidance in a collision situation, implementing some state-of-the-art ideas
in collision avoidance. The displays include the possibility of foreseeing the hazard zone of a possible intruder and thus
avoiding that zone early. The displays were integrated into Ohio University’s experimental airplane, and a flight experiment
was conducted as a first evaluation of their applicability. The tests were carried out in fall 2012.
We will present the principal architecture of the displays and detail their integration into the flight carrier.
Furthermore, we will give first results on the displays’ performance.
LCOS (Liquid Crystal on Silicon) is a reflective microdisplay technology based on a single crystal silicon pixel
controller backplane which drives a liquid crystal layer. Using standard CMOS processes, microdisplays with
extremely small pixels, high fill factor (pixel aperture ratio) and low fabrication costs are created. Recent advances
in integrated circuit design and liquid crystal materials have increased the application of LCOS to displays and other
optical functions. Pixel pitch below 3 μm, resolution of 8K x 4K, and sequential contrast ratios of 100K:1 have been
achieved. These devices can modulate light spatially in amplitude or phase, so they act as an active dynamic optical
element. Liquid crystal materials can be chosen to modulate illumination sources from the UV through far IR. The
new LCOS designs have reduced power consumption to make portable displays and viewing elements more viable.
Also innovative optical system elements including image and illumination waveguides and laser illuminators have
been combined into LCOS based display systems for HMD, HUD, projector, and image analysis/surveillance direct
view monitor applications. Dynamic displays utilizing the fine pixel pitch and phase mode operation of LCOS are
advancing the development of true holographic displays. The paper will review these LCOS technology advances,
the display applications, and the related system implementations.
Precision Guided Firearms (PGFs) employ target tracking, a Heads-Up Display, and advanced fire control technology to
improve shooting precision at long range by eliminating the most common sources of shooter error, including aiming error,
trigger jerk, and shot setup miscalculation. Regardless of skill level or experience, PGFs significantly increase first-shot
success probability when compared to traditional technology, even at extreme ranges of 1,200 yards or more. More than
just a scope, PGFs are fully integrated systems based on standard caliber bolt action or semi-automatic rifles with a
Networked Tracking Scope, Guided Trigger and precision conventional ammunition. Onboard wireless technology
allows PGFs to connect with local and wide area networks to deliver voice, video and data to mobile devices and various
communication networks. These technologies allow shooters to be more accurate, engage multiple targets at unknown
ranges quickly, track and engage moving targets, and communicate via command and control networks.
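To give a sense of why fire control matters at the ranges quoted above, even the simplest drag-free model shows bullet drop growing with the square of the time of flight. The example below is a back-of-the-envelope sketch only: the 850 m/s muzzle velocity is an assumed figure, and a real PGF solution must also account for drag, wind, spin drift, and atmospherics.

```python
G = 9.81  # gravitational acceleration, m/s^2

def gravity_drop(range_m, muzzle_velocity_mps):
    """Flat-fire, drag-free estimate of bullet drop: the projectile falls
    0.5 * g * t^2 during its time of flight t = range / velocity. This is
    a lower bound that ignores drag, wind, and spin drift; it only shows
    why long-range holdover grows quickly with distance."""
    t = range_m / muzzle_velocity_mps
    return 0.5 * G * t * t

# 1,200 yards is roughly 1097 m; assume an 850 m/s muzzle velocity.
drop = gravity_drop(1097.0, 850.0)
print(f"drag-free drop at 1097 m: {drop:.1f} m")
```

Several meters of drop from gravity alone, before drag roughly doubles the time of flight, is why an automated firing solution outperforms manual holdover estimation at these distances.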
Kopin’s recently introduced low-power “Jewel Module” family of plug-and-play integrated AMLCD microdisplay
modules comprises fully tested, off-the-shelf assemblies that can be easily integrated into customer products without the need
for expensive application-specific development. The “Jewel Module” is the culmination of many years of technology
advancement that has reduced the size and power of all the elements of the display system: microdisplay, LED
backlight, display driver ASIC, video FPGA, heater and display controller. This paper presents the performance
characteristics of both current and planned modules with display resolutions from 640x480 to 1280x1024, as well as a
development roadmap. Applications of the “Ruby Module” with an SVGA microdisplay are described with examples of its
integration into display system products.
The application of optical waveguides to Head Up Displays (HUD) is an enabling technology which solves the critical
issues of volume reduction (including cockpit intrusion) and mass reduction in an affordable product which retains the
high performance optical capabilities associated with today’s generation of digital-display-based HUDs. Improved
operability and pilot comfort are achieved regardless of the installation by virtue of the intrinsic properties of optical
waveguides, and this has enabled BAE Systems Electronic Systems to develop two distinct product streams for
glareshield and overhead HUD installations respectively.
This paper addresses the design drivers behind the development of the next generation of Head Up Displays and their
compatibility with evolving cockpit architectures and structures. The implementation of large scale optical waveguide
combiners capable of matching and exceeding the display performances normally only associated with current digital
display sourced HUDs has enabled BAE Systems Electronic Systems to solve the volume and installation challenges of
the latest military and civil cockpits with its LiteHUD® technology.
Glareshield mounted waveguide based HUDs are compatible with the trend towards the addition of Large Area Displays
(LAD) in place of the traditional multiple Head Down Displays (HDD) within military fast jet cockpits. They use an
“indirect view” variant of the display which allows the amalgamation of high resolution digital display devices with the
inherently small volume and low mass of the waveguide optics. This is then viewed using the more traditional
technology of a conventional HUD combiner. This successful combination of technologies has resulted in the LPHUD
product, which is specifically designed by BAE Systems Electronic Systems to provide an ultra-low-profile HUD that
can be installed behind a LAD while still providing a level of performance at least equivalent to that of a conventional
large-volume glareshield-mounted HUD.
In many current Business Jet and Air Transport cockpits overhead mounted HUDs employ a conventional optical
combiner to relay the display from a separate projector to the pilot’s eyes. In BAE Systems Electronic Systems’ QHUD™
configuration this combiner is replaced by the waveguide, and the bulky, intrusive overhead projector is completely
eliminated. The result is a significant reduction in equipment volume and mass and much greater head clearance,
combined with a substantially larger Head Motion Box. This latter feature is a fundamental outcome of waveguide optical
solutions, which removes the restrictions on pilot eye positioning associated with current conventional systems.
LiteHUD®, developed by BAE Systems Electronic Systems, achieves optical performance equivalent to in-service HUDs
at lower cost, mass and volume.
Throughout the development of the automotive industry, activities supporting driving have been a subject of
analysis and experimentation, always seeking new ways to achieve greater safety for the driver and passengers. With the
purpose of contributing to this topic, this paper summarizes, from past research
experiences, the use of Head-Up Display systems applied to the automobile industry, covering two main points of
discussion: the first, a technical point of view, in which the main principles of optical design associated with a
moderate-cost experimental setup are brought out; and the second, an operational approach in which an applied
driving graphical interface is presented. Up to now, the results suggest that the experimental setup discussed here could
be adapted to any automobile, but further research and investment are needed.
A host of helmet- and head-mounted displays is flooding the marketplace, providing what is
essentially a mobile computer display. What sets aviators’ HMDs apart is that they provide the user with accurate
conformal information embedded in the pilot’s real-world view (a see-through display), where the information presented is
intuitive and easy to use because it overlays the real world (a mix of sensor imagery, symbolic information and synthetic
imagery) and enables them to stay head up, eyes out, improving their effectiveness and reducing workload.
Such systems are an enabling technology in the provision of enhanced Situation Awareness (SA) and reduced user
workload in high-intensity situations. Safety is key, so the addition of these HMD functions cannot detract from the
aircrew protection functions of conventional aircrew helmets, which also include life support and audio functions.
These capabilities are finding much wider application in new types of compact man-mounted audio/visual products
enabled by the emergence of new families of microdisplays, novel optical concepts and ultra-compact low-power
processing solutions. This paper attempts to capture the key drivers and needs for future head-mounted systems.