This PDF file contains the front matter associated with SPIE Proceedings Volume 9471, including the Title Page, Copyright information, Table of Contents, Introduction (if any), and Conference Committee listing.
Digital Night Vision sensor technology has the potential to provide significant new night vision capabilities for military aviators. Before new capabilities can be fielded, however, the combined sensor-processor-display chain must achieve a level of night vision performance on par with current-generation photomultiplier tube (PMT)-based night vision goggles across the entire range of lighting conditions. This paper provides an overview of Rockwell Collins's design of an ISIE-11-based Digital Night Vision Goggle (DNVG) intended to eventually replace traditional PMT goggles in a variety of military aviation and infantry applications. It also reports on an initial series of evaluations performed by Boeing aircrew in laboratory and flight environments. Laboratory lighting levels ranged from "overcast starlight" to "full moon", and airborne evaluations in a light aircraft were conducted under "starlight" and "half-moon" conditions at a realistic tactical altitude. Each evaluation provided a direct comparison between a modern PMT NVG and the DNVG prototype. Inputs from the flight evaluation were subsequently implemented in the DNVG image processing software.
Degraded visual environments (DVE) create dangerous conditions for aircraft pilots due to loss of situational awareness and/or ground reference, which can result in accidents during navigation or landing. Imaging in millimeter wave spectral bands offers the ability to maintain the pilot's situational awareness despite DVE with a "see-through" imaging modality. Millimeter waves exhibit low atmospheric attenuation as well as low scattering loss from airborne particulates, e.g., blowing sand, dust, fog, and other visual obscurants. As such, Phase Sensitive Innovations (PSI) has developed a passive, real-time mmW imager to mitigate brownout dangers for rotorcraft. The imager consists of a distributed aperture array with conversion of detected mmW signals to optical frequencies for processing and image formation. Recently we performed operationally representative flight testing of our sensor while imaging various natural and manmade objects. Here we present imagery collected during these tests; it confirms the performance of the sensor technology and illustrates phenomenology encountered in the mmW spectrum.
The paper discusses recent results of flight tests performed with the Airbus Defence and Space ladar system at Yuma Proving Ground. The ladar under test was the SferiSense® system, which is in operational use as an in-flight obstacle warning and avoidance system on the NH90 transport helicopter. Only minor modifications were made to the sensor firmware to optimize its performance in brownout. In addition, a new filtering algorithm, designed to segment dust artefacts out of the collected 3D data in real time, was employed. The results proved that this ladar sensor is capable of detecting obstacles through brownout dust clouds extending up to 300 meters in depth from the landing helicopter.
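The abstract does not detail the dust-segmentation algorithm itself. As a rough, purely illustrative sketch of one way such real-time filtering can be approached, the toy filter below keeps only returns that persist across consecutive scans, on the assumption that diffuse dust rarely re-occupies the same voxel while hard obstacles do; all names and thresholds are hypothetical, not taken from SferiSense®.

```cpp
// Toy persistence filter: keep ladar returns whose voxel has been occupied
// in several consecutive scans (hard targets persist, dust flickers).
// Hypothetical sketch only; not the SferiSense algorithm.
#include <cstdint>
#include <unordered_map>
#include <vector>

struct Point { float x, y, z; };

class DustFilter {
public:
    DustFilter(float voxelSizeMeters, int minConsecutiveScans)
        : voxel_(voxelSizeMeters), minScans_(minConsecutiveScans) {}

    // Feed one scan; return only points whose voxel has a long enough streak.
    std::vector<Point> filter(const std::vector<Point>& scan) {
        ++frame_;
        for (const Point& p : scan) {
            History& h = cells_[key(p)];
            if (h.lastFrame != frame_) {  // count each voxel once per scan
                h.streak = (h.lastFrame == frame_ - 1) ? h.streak + 1 : 1;
                h.lastFrame = frame_;
            }
        }
        std::vector<Point> kept;
        for (const Point& p : scan)
            if (cells_[key(p)].streak >= minScans_) kept.push_back(p);
        return kept;  // a production filter would also age out stale cells
    }

private:
    struct History { long lastFrame = -1; int streak = 0; };

    // Quantize a point into a 64-bit voxel key (21 bits per axis).
    uint64_t key(const Point& p) const {
        auto q = [this](float v) {
            return static_cast<uint64_t>(static_cast<int64_t>(v / voxel_)) & 0x1FFFFFu;
        };
        return (q(p.x) << 42) | (q(p.y) << 21) | q(p.z);
    }

    float voxel_;
    int minScans_;
    long frame_ = 0;
    std::unordered_map<uint64_t, History> cells_;
};
```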
The paper presents results and findings of flight tests of the Airbus Defence and Space DVE system SFERION performed at Yuma Proving Ground. During the flight tests, ladar information was fused with a priori database knowledge in real time, and 3D conformal symbology was generated for display on an HMD. The test flights included low-level flights as well as numerous brownout landings.
An effective Degraded Visual Environment (DVE) solution integrates and displays data from a number of complementary sources. The visualization backbone is a Synthetic Vision System (SVS) using an up-to-date synthetic terrain database with high-resolution elevation data for the immediate DVE area, enhanced by recent high-resolution photo imagery of the area draped over the 3D terrain. Effectiveness is further enhanced by displaying 3D structure and vegetation models derived from on-board sources such as cultural feature and obstacle databases, or in real time from off-board sources such as data-linked traffic, reconnaissance, or forward-observer reports. The full solution incorporates real-time sensor data processed to identify vertical obstacles (such as towers), horizontal obstacles (wires), and three-dimensional obstacles (buildings), positioned on the synthetic terrain in their detected positions. The sensors may be on board the aircraft, but such interpreted data may also be relayed from a remote source that is viewing the DVE area. Actual sensor imagery can be merged with the synthetic view so that the sensor imagery and underlying SVS terrain are viewed together. Lastly, how the fused data is presented affects the aircrew's ability to intuitively grasp the situation in the DVE area. An external wingman view shows the situation all around the aircraft rather than just in front. When the SVS is integrated with a helmet-mounted display system, it is possible to provide virtual "x-ray" vision by providing the fused synthetic view even when the actual view is obscured by the airframe.
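As a purely illustrative sketch of the kind of record such a fusion pipeline might maintain for each detected obstacle placed on the synthetic terrain (hypothetical types and fields, not drawn from any specific SVS product):

```cpp
// Hypothetical data model for obstacles fused onto the synthetic terrain.
enum class ObstacleClass { Vertical /* towers */, Horizontal /* wires */, Volumetric /* buildings */ };
enum class Source { OnboardSensor, Datalink, Database };

struct Obstacle {
    ObstacleClass cls;       // which rendering style / symbol to use
    Source source;           // provenance, e.g. for trust weighting
    double lat, lon;         // detected geodetic position
    double elevMeters;       // base elevation on the terrain
    double heightMeters;     // extent above terrain
    double confidence;       // detection confidence, 0..1
};
```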
Modern Enhanced and Synthetic Vision Systems (ESVS) usually incorporate complex 3D displays, for example, terrain visualizations with color-coded altitude, obstacle representations that change their level of detail based on distance, semi-transparent overlays, dynamic labels, etc. All of these elements can be conveniently implemented using a modern scene graph implementation. OpenSceneGraph offers such a data structure. Furthermore, OpenSceneGraph includes broad support for industry-standard file formats, so 3D data and models from other applications can be used. OpenSceneGraph has a large user community and is driven by open source development. Thus a selection of visualization techniques is available, and solutions for common problems can often be found in the community's discussion groups. On the other hand, documentation is sometimes outdated or missing. We investigate which ESVS applications can be realized using OpenSceneGraph and on which platforms this is possible. Furthermore, we take a look at technical and licensing limitations.
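As a minimal sketch of how two of the ESVS elements named above map onto the OpenSceneGraph API, the following shows a distance-dependent level-of-detail obstacle and a semi-transparent, dynamically labeled overlay. The helper models are hypothetical stand-ins; distances and sizes are assumed values.

```cpp
#include <osg/Geode>
#include <osg/LOD>
#include <osg/Shape>
#include <osg/ShapeDrawable>
#include <osg/StateSet>
#include <osgText/Text>

// Hypothetical stand-ins for real obstacle models.
osg::ref_ptr<osg::Node> makeDetailedTowerModel() {
    osg::ref_ptr<osg::Geode> g = new osg::Geode;
    g->addDrawable(new osg::ShapeDrawable(
        new osg::Cylinder(osg::Vec3(0, 0, 50), 2.0f, 100.0f)));
    return g;
}
osg::ref_ptr<osg::Node> makeBoxPlaceholder() {
    osg::ref_ptr<osg::Geode> g = new osg::Geode;
    g->addDrawable(new osg::ShapeDrawable(
        new osg::Box(osg::Vec3(0, 0, 50), 4.0f, 4.0f, 100.0f)));
    return g;
}

// Distance-dependent level of detail: detailed model up close,
// simple box beyond an assumed 2 km switch distance.
osg::ref_ptr<osg::Node> buildObstacle() {
    osg::ref_ptr<osg::LOD> lod = new osg::LOD;
    lod->addChild(makeDetailedTowerModel().get(), 0.0f, 2000.0f);
    lod->addChild(makeBoxPlaceholder().get(), 2000.0f, 20000.0f);
    return lod;
}

// Semi-transparent overlay with a dynamic text label.
osg::ref_ptr<osg::Node> buildOverlay(const std::string& text) {
    osg::ref_ptr<osg::Geode> geode = new osg::Geode;
    osg::ref_ptr<osgText::Text> label = new osgText::Text;
    label->setText(text);            // update at runtime for dynamic labels
    label->setCharacterSize(24.0f);
    geode->addDrawable(label.get());
    osg::StateSet* ss = geode->getOrCreateStateSet();
    ss->setMode(GL_BLEND, osg::StateAttribute::ON);  // enable alpha blending
    ss->setRenderingHint(osg::StateSet::TRANSPARENT_BIN);
    return geode;
}
```

Placing the overlay in the transparent render bin lets OpenSceneGraph sort it correctly against opaque scene geometry, while osg::LOD handles the distance-based switching without any per-frame application code.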
This paper describes the results of HMI development and ground tests related to the combination of distributed-aperture imagery with lidar-based fused 3D symbology. The combined system was evaluated in a simulator environment and on a vehicle-based ground demonstrator for its capability to significantly mitigate DVE conditions, especially night level 5.
Synthetic vision systems (SVS) are becoming increasingly widespread in the avionics domain. Several studies demonstrate enhanced situational awareness when using synthetic vision. Since the introduction of synthetic vision, both the primary flight display (PFD) and the navigation display (ND) have evolved steadily. The main improvements of the ND comprise the representation of enhanced ground proximity warning system (EGPWS) terrain coloring, weather radar, and TCAS information. Synthetic vision seems to offer high potential to further enhance cockpit display systems. In particular, given the current trend of presenting a 3D perspective view in the SVS-PFD while leaving the navigational content and methods of interaction unchanged, the question arises whether, and how, the gap between the two displays might develop into a serious problem. This issue becomes important in relation to the transition between, and combination of, strategic and tactical flight guidance. Hence, the pros and cons of 2D and 3D views in general, as well as the gap between the egocentric perspective 3D view of the PFD and the exocentric 2D top and side views of the ND, are discussed. Furthermore, a concept for the integration of a 3D perspective view, i.e., a bird's-eye view, into the synthetic vision ND is presented. The combination of 2D and 3D views in the ND enables a better correlation of the ND and the PFD. Additionally, it supports the building of the pilot's mental model. The authors believe it will improve situational and spatial awareness, and it may further raise the safety margin when operating in mountainous areas.
Helicopter operations require well-controlled, minimal lateral drift shortly before ground contact. Any lateral speed exceeding this small threshold can create a dangerous moment about the roll axis, which may result in a complete roll-over of the helicopter. As long as pilots can observe visual cues from the ground, they are able to control the helicopter's drift easily. However, when visibility is reduced or obscured, e.g., at night or in fog or dust, this controllability diminishes. Therefore, helicopter operators could benefit from some type of "drift indication" that mitigates the influence of a degraded visual environment.
With continuous technology advancement, helmet-mounted displays (HMD) will soon become widespread. At present, HMDs are still expensive and mostly reserved for military operations. The symbol sets fielded are designed for well-trained personnel and special missions. Investigating some of these symbol sets revealed that their lateral drift indication does not live up to its promise. With practice, these symbol sets assist well during the approach but lack proper cues once the helicopter hovers. Present developments also focus on three-dimensional symbol sets that are conformal with the environment. All of them present a virtual landing pad. These types of see-through synthetic vision displays allow several new methods of information visualization.
Humans generally derive ego motion from the perceived optical flow of the environment. To enhance this perception, a motion pattern that amplifies the measured own-ship movement was implemented in a conformal HMD symbol set. The paper presents results from an experimental study with 18 pilots from civil and military operators. In this study, the forward landing zone border was replaced by an animated dashed line indicating the amplified ego motion.
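The study's implementation is not specified beyond this description. As a minimal sketch of the underlying idea, the animated dash pattern can simply be scrolled at a gain-amplified multiple of the measured drift, exaggerating the optical flow the pilot perceives; the gain value and interface below are assumptions, not the study's actual parameters.

```cpp
#include <cmath>

// Hypothetical drift cue: scroll a dashed-line pattern at an amplified
// multiple of the measured lateral drift to exaggerate perceived motion.
struct DriftCue {
    float gain = 5.0f;   // amplification factor (assumed, not from the study)
    float phase = 0.0f;  // current offset of the dash pattern [m]

    // vLateral: measured lateral drift [m/s]; dt: frame time [s].
    void update(float vLateral, float dt) {
        const float patternLength = 2.0f;         // dash period [m] (assumed)
        phase += gain * vLateral * dt;            // pattern moves faster than the ship
        phase = std::fmod(phase, patternLength);  // wrap within one dash period
        if (phase < 0.0f) phase += patternLength;
    }
};
```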
The availability of new technologies for helmet- and head-mounted displays facilitates the design of innovative cockpit layouts, such as a completely virtual flight deck. After the introduction of the so-called "glass cockpit", in which formerly mechanical instruments were converted to digital renderings on large panel screens, the virtual flight deck could be the logical next step into the future. Obviously, such a concept saves the installation cost of conventional display hardware. Furthermore, and probably of greater importance, stressful and time-consuming accommodation changes of the pilots' eyes between outside and inside views can be avoided.
In recent months, we have developed a concept for virtual cockpit instrumentation. Our implementation is based on the JedEye™, a monochrome green see-through HMD that offers a resolution exceeding HD-TV, good enough to show information as detailed as that on currently installed head-down instruments. Our approach augments our latest "3D helicopter landing symbol format" with basic virtual instruments (PFD, ND, knee-board) in the near field of the cockpit environment, in "no-window" areas. In addition, we have implemented a "drag and drop" mechanism that enables pilots to arrange the instrumentation according to their personal preferences. Tests in our Generic Cockpit Simulator (GECO) are currently being conducted. Initial pilot feedback shows that our concept offers great potential for introduction into the future flight deck.
The paper discusses the specifics of high-resolution 3D sensor systems employed in helicopter DVE support systems and the consequences for the resulting HMI. 3D sensors have a number of characteristics that make them a cornerstone of helicopter pilot support or pilotage systems intended for use in DVE. Retrieving depth information offers specific advantages over 2D imagers. On the other hand, certain technology- and physics-inherent characteristics demand a more elaborate visualization procedure than 2D image visualization. The goal of all displayed information must be to reduce pilot workload in DVE operations. Displaying the processed information on an HMD as 3D conformal data therefore requires thorough HMI consideration.
The Navy and Marine Corps will increasingly need to operate unmanned air vehicles from ships at sea. Fused multi-sensor systems are desirable to ensure these operations are highly reliable under the most demanding at-sea conditions, particularly in degraded visual environments. The US Navy Sea-Based Automated Launch & Recovery System (SALRS) program aims to enable automated and semi-automated launch and recovery of sea-based, manned and unmanned, fixed- and rotary-wing naval aircraft, and to utilize automated or pilot-augmented flight mechanics for carefree shipboard operations. This paper describes the goals and current results of SALRS Phase 1, which aims to understand the capabilities and limitations of various sensor types through sensor characterization, modeling, and simulation, and to assess how the sensor models can be used for aircraft navigation to provide sufficient accuracy, integrity, continuity, and availability across all anticipated maritime conditions.
Effective reconnaissance, surveillance, and situational awareness using dual-band sensor systems require the extraction, enhancement, and fusion of salient features, with the processed video presented to the user in an ergonomic and interpretable manner. HALO™ is designed to meet these requirements and provides an affordable, real-time, low-latency image fusion solution on a low size, weight and power (SWAP) platform. The system has been progressively refined through field trials to increase its operating envelope and robustness. The result is a video processor that improves detection, recognition, and identification (DRI) performance whilst lowering operator fatigue and reaction times in complex and highly dynamic situations. This paper compares the performance of HALO™, both qualitatively and quantitatively, with conventional blended fusion for operation in degraded visual environments (DVEs), such as those experienced during ground- and air-based operations. Although image blending provides a simple fusion solution, which explains its common adoption, the results presented demonstrate that its performance is poor compared with the HALO™ fusion scheme in DVE scenarios.
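HALO™'s fusion scheme itself is proprietary and not described here. To make the contrast with blending concrete, the toy sketch below compares a plain alpha blend, which averages away band-unique contrast, with a simple per-pixel selection rule that keeps whichever band has more local contrast, a crude stand-in for saliency-driven fusion; rows are treated as 1D signals for brevity.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Alpha blend: averages the two bands, which also averages away the
// contrast that only one band carries.
std::vector<float> blend(const std::vector<float>& a,
                         const std::vector<float>& b, float alpha) {
    std::vector<float> out(a.size());
    for (size_t i = 0; i < a.size(); ++i)
        out[i] = alpha * a[i] + (1.0f - alpha) * b[i];
    return out;
}

// Selective fusion: per pixel, keep the band with more local contrast,
// approximated here as deviation from the mean over a small window.
// This is an illustrative stand-in, not the HALO algorithm.
std::vector<float> selectByContrast(const std::vector<float>& a,
                                    const std::vector<float>& b, int win) {
    auto contrast = [win](const std::vector<float>& img, size_t i) {
        size_t lo = i > static_cast<size_t>(win) ? i - win : 0;
        size_t hi = std::min(img.size() - 1, i + static_cast<size_t>(win));
        float mean = 0.0f;
        for (size_t j = lo; j <= hi; ++j) mean += img[j];
        mean /= static_cast<float>(hi - lo + 1);
        return std::fabs(img[i] - mean);
    };
    std::vector<float> out(a.size());
    for (size_t i = 0; i < a.size(); ++i)
        out[i] = contrast(a, i) >= contrast(b, i) ? a[i] : b[i];
    return out;
}
```

Even this naive selection rule preserves a feature visible in only one band at full contrast, whereas a 50/50 blend halves it, which is one intuition behind the performance gap the paper reports.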