This paper describes the implementation of displaying real-time processed LiDAR 3D data in a DVE pilot assistance system. The goal is to present the pilot with a comprehensive image of the surrounding world, free of misleading or cluttering information. 3D data that can be attributed, i.e. classified, to terrain or to predefined obstacle classes is depicted differently from data belonging to elevated objects that could not be classified. Display techniques may differ between head-down and head-up displays to avoid cluttering the outside view in the latter case. While terrain is shown as shaded surfaces with grid structures on the head-down display, or as grid structures alone on the head-up display, classified obstacles are typically displayed with obstacle symbols only. Data from objects elevated above ground are displayed as shaded 3D points in space. These displayed 3D points are accumulated over a certain time frame, which on the one hand allows a cohesive structure to be displayed and on the other hand allows moving objects to be displayed correctly. In addition, color coding or texturing can be applied based on known terrain features such as land use.
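The mapping from classification result to display primitive, together with the time-windowed accumulation of unclassified elevated points, can be sketched as follows. This is a minimal illustration only; the class labels, the display primitive names and the two-second window are assumptions made for the example and are not taken from the actual system.

```python
from collections import deque
from dataclasses import dataclass
from enum import Enum, auto
import time

class PointClass(Enum):
    TERRAIN = auto()                # classified as ground surface
    KNOWN_OBSTACLE = auto()         # matched a predefined obstacle class
    ELEVATED_UNCLASSIFIED = auto()  # above ground, no class assigned

@dataclass
class LadarPoint:
    x: float
    y: float
    z: float
    cls: PointClass
    t: float  # acquisition time in seconds

def select_display(cls, head_up):
    """Choose a rendering primitive per class; the head-up view stays sparser
    to avoid cluttering the pilot's outside view."""
    if cls is PointClass.TERRAIN:
        return "grid" if head_up else "shaded_surface_with_grid"
    if cls is PointClass.KNOWN_OBSTACLE:
        return "obstacle_symbol"
    return "shaded_3d_point"

class PointAccumulator:
    """Accumulate elevated, unclassified points over a fixed time window so that
    static structures appear cohesive while stale returns from moving objects expire.
    Points are assumed to be added in acquisition-time order."""
    def __init__(self, window_s=2.0):  # window length is an illustrative assumption
        self.window_s = window_s
        self.buffer = deque()

    def add(self, points):
        self.buffer.extend(points)

    def current(self, now=None):
        now = time.monotonic() if now is None else now
        while self.buffer and now - self.buffer[0].t > self.window_s:
            self.buffer.popleft()
        return list(self.buffer)
```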
This paper compares the capabilities achievable with a purely database-driven DVE system (NIAG class 4 system) with those of a database system fused with sensor information (NIAG class 2 system). Both systems present the same 3D conformal symbology. To achieve a terrain-conformal representation with operational navigation systems and databases, specific compensation techniques are required; this applies especially to the purely database-driven system. The sensor-database fusion system, on the other hand, relies mainly on relative accuracies, which simplifies the required compensation techniques at the cost of additional sensors and fusion algorithms. Both system configurations were flight-tested on a test helicopter. The test results, specifics and basic limitations will be discussed and compared.
The paper presents results and findings from flight tests of the Airbus Defence and Space DVE system SFERION performed at Yuma Proving Ground. During the flight tests, ladar information was fused with a priori database knowledge in real time, and 3D conformal symbology was generated for display on an HMD. The test flights included low-level flights as well as numerous brownout landings.
The paper discusses specific properties of high-resolution 3D sensor systems employed in helicopter DVE support systems and the consequences for the resulting HMI. 3D sensors have a number of properties that make them a cornerstone of helicopter pilot support or pilotage systems intended for use in DVE. Retrieving depth information gives specific advantages over 2D imagers. On the other hand, certain specifics inherent to the technology and physics require a more elaborate visualization procedure than 2D image visualization. The goal of all displayed information has to be to reduce the pilot's workload in DVE operations. Therefore, displaying the processed information on an HMD as 3D conformal data in particular requires thorough HMI considerations.
Low-level helicopter operations in Degraded Visual Environment (DVE) are still a major challenge and bear the risk of potentially fatal accidents. DVE generally encompasses all degradations of the pilot's visual perception, ranging from night conditions via rain and snowfall to fog, and possibly even blinding sunlight or unstructured outside scenery. Each of these conditions reduces the pilot's ability to perceive visual cues in the outside world, degrading performance and ultimately increasing the risk of mission failure and accidents such as Controlled Flight Into Terrain (CFIT).

This paper reports on a pilot assistance system aiming to give back the essential visual cues to the pilot by displaying 3D conformal cues and symbols on a head-tracked Helmet Mounted Display (HMD), combined with a synthetic view on a head-down Multi-Function Display (MFD). The basis for the presented solution is a fusion of processed and classified high-resolution ladar data with database information, with the potential to also include other sensor data such as forward-looking or 360° radar data. Each flight phase and each flight envelope requires different symbology sets and different possibilities for the pilot to select specific support functions. Several functionalities have been implemented and tested in a simulator as well as in flight. The symbology ranges from obstacle warning symbology via terrain enhancements through grids or ridge lines to different waypoint symbols supporting navigation. While some adaptations can be automated, it emerged as essential that symbology characteristics and completeness can be selected by the pilot to match the relevant flight envelope and outside visual conditions, as sketched below.
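One way to make this selectability concrete is a small configuration structure keyed by flight phase, with pilot overrides applied on top. The sketch below is purely illustrative; the phase names, symbology elements and defaults are assumptions for the example, not the fielded configuration.

```python
from dataclasses import dataclass, replace
from enum import Enum, auto

class FlightPhase(Enum):
    CRUISE = auto()
    LOW_LEVEL = auto()
    APPROACH_LANDING = auto()  # e.g. brownout landings

@dataclass(frozen=True)
class SymbologySet:
    obstacle_warnings: bool = True
    terrain_grid: bool = False
    ridge_lines: bool = False
    waypoint_symbols: bool = True

# Default selection per flight phase; the pilot can still toggle individual
# elements to match the outside visual conditions.
DEFAULTS = {
    FlightPhase.CRUISE: SymbologySet(),
    FlightPhase.LOW_LEVEL: SymbologySet(terrain_grid=True, ridge_lines=True),
    FlightPhase.APPROACH_LANDING: SymbologySet(terrain_grid=True),
}

def active_symbology(phase, pilot_overrides=None):
    """Start from the phase default and apply the pilot's selections on top."""
    return replace(DEFAULTS[phase], **(pilot_overrides or {}))
```

A call such as active_symbology(FlightPhase.LOW_LEVEL, {"obstacle_warnings": False}) would then yield the low-level defaults with obstacle warnings suppressed, illustrating how automated phase-dependent defaults and manual pilot selection can coexist.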