Since their introduction by Kohonen, Self-Organizing Maps (SOMs) have been used in various forms for purposes
of surface reconstruction. They offer robust and fast approximations of manifold data from unstructured input
points while being relatively easy to implement. On the other hand, SOMs have certain disadvantages when
used in a setup where sparse, reliable and spatially unbounded data occur. For example, airborne lidar sensors
generate a continuous stream of point data while flying above terrain. We introduce modifications of the SOM's
data structure to adapt it to unbounded data. Furthermore, we introduce a new variation of the learning rule
called rapid learning, which is suited to sparse but reliable data. We demonstrate examples in which the
surroundings of an aircraft can be reconstructed in almost real time.
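To make the idea concrete, the following is a minimal sketch of one standard SOM learning step on a regular grid of nodes; the modified data structure for unbounded data and the rapid-learning rule themselves are not reproduced here, and all names and parameters are illustrative assumptions.

```python
import numpy as np

def som_update(weights, x, alpha, sigma):
    """One standard SOM learning step on a regular 2-D grid of nodes.

    weights: (H, W, 3) array of node positions in 3-D space
    x:       (3,) input point (e.g. one lidar return)
    alpha:   learning rate
    sigma:   neighbourhood radius in grid units
    """
    h, w, _ = weights.shape
    # best-matching unit (BMU): the node closest to the input point
    d = np.linalg.norm(weights - x, axis=2)
    bmu = np.unravel_index(np.argmin(d), (h, w))
    # Gaussian neighbourhood around the BMU, measured in grid coordinates
    gy, gx = np.mgrid[0:h, 0:w]
    grid_dist2 = (gy - bmu[0]) ** 2 + (gx - bmu[1]) ** 2
    nbh = np.exp(-grid_dist2 / (2.0 * sigma ** 2))
    # pull the nodes towards the input, weighted by the neighbourhood
    weights += alpha * nbh[..., None] * (x - weights)
    return weights
```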
Radar simulation involves the computation of a radar response based on the terrain's normalized radar cross
section (RCS). In the past, different models have been proposed for the normalized RCS. While accurate in most cases, they lack intuitive handling. We present a novel approach for computing the mean
normalized radar cross section for use in millimeter wave radar simulations based on Phong lighting. This allows
us to model radar power return in an intuitive way using categories of diffuse and specular reflections. The
model is computationally more efficient than previous approaches while using only a few parameters. Furthermore,
we give example setups for different types of terrain. We show that our technique can accurately model data
output from other approaches as well as real world data.
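As an illustration of the diffuse/specular idea, the sketch below evaluates a Phong-style split of the normalized RCS for a monostatic geometry; the functional form, the parameter names and any coefficient values are assumptions made for illustration, not the paper's calibrated model.

```python
import numpy as np

def sigma0_phong(incidence_deg, kd, ks, n):
    """Hypothetical Phong-style model of the mean normalized RCS (sigma-0).

    incidence_deg: incidence angle measured from the surface normal (degrees)
    kd, ks:        diffuse and specular reflection coefficients (assumed)
    n:             specular exponent controlling the width of the lobe (assumed)
    Monostatic geometry is assumed, i.e. the receiver sits in the incidence
    direction, so the specular lobe is centred on normal incidence.
    """
    theta = np.radians(incidence_deg)
    diffuse = kd * np.cos(theta)        # Lambertian-like diffuse term
    specular = ks * np.cos(theta) ** n  # specular lobe around vertical incidence
    return diffuse + specular
```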
The feasibility of an EVS head-down procedure is examined that may provide the same operational benefits under low
visibility as the FAA rule on Enhanced Flight Visibility, which requires the use of a head-up display (HUD). The main
element of the described EVS head-down procedure is the crew procedure within the cockpit for flying the approach. The
task sharing between Pilot-Flying and Pilot-Not-Flying is arranged such that multiple head-up/head-down transitions are
avoided. The Pilot-Flying uses the head-down display to acquire the necessary visual cues in the EVS
image. The Pilot-Not-Flying monitors the instruments and looks for the outside visual cues.
This paper reports on simulation activities that complete a series of simulation and validation activities carried out
within the framework of the European project OPTIMAL. The results support the trend already observed in some preliminary
investigations. They suggest that pilots can fly an EVS approach using the proposed EVS head-down display with the
same kind of performance (accuracy) as they do with the HUD. There seems to be no loss of situation awareness. Furthermore, there is no significant trend indicating that the use of the EVS head-down display leads to higher workload than the EVS HUD approach. In conclusion,
the EVS head-down procedure may also be a feasible option for obtaining extra operational credit under low-visibility conditions.
To improve the situation awareness of an aircrew in poor visibility, different approaches have emerged over the past
couple of years. Enhanced vision systems (EVS, based upon sensor images) are one of them. They improve the situation
awareness of the crew, but at the same time introduce certain operational deficits. EVS present sensor data which might
be difficult to interpret, especially if the sensor used is a radar sensor. In particular, an unresolved problem of fast
scanning forward looking radar systems in the millimeter waveband is the inability to measure the elevation of a target.
In order to circumvent this problem, an effort was made to reconstruct the missing elevation from a series of images. This
could be described as a "stereo radar" approach and is similar to reconstruction from photographs (angle-angle
images) taken from different viewpoints to rebuild depth information. Two radar images (range-angle images) with
different bank angles can be used to reconstruct the elevation of targets.
This paper presents the fundamental idea and the methods of the reconstruction. Furthermore, experiences with real data
from EADS's "HiVision" MMCW radar are discussed. Two different approaches are investigated: First, a fusion of
images with variable bank angles is calculated for different elevation layers, and image processing reveals identical
objects in these layers. Those objects are compared with respect to contrast and dimension to extract their elevation. The
second approach compares short fusion pairs from two different flights with different, nearly constant bank angles.
Accumulating those pairs with different offsets yields the exact elevation.
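A small sketch of the underlying stereo-radar geometry may help: under a small-angle approximation, two azimuth measurements of the same target taken at different bank angles form a 2x2 linear system whose solution contains the target's lateral offset and height. The linearization, the sign conventions and the function names are assumptions made for this illustration, not the processing described above.

```python
import numpy as np

def lateral_offset_and_height(r, az1_rad, bank1_rad, az2_rad, bank2_rad):
    """Small-angle sketch of elevation recovery from two range/azimuth images
    taken at different bank angles (hypothetical simplification).

    For one consistent sign convention, a target at forward range r with
    lateral offset y and height z (both small compared to r) appears at
    azimuth a ~= (y*cos(phi) + z*sin(phi)) / r when the antenna scan plane
    is rolled by the bank angle phi.  Two measurements with different phi
    give a linear system for (y, z); the elevation angle is roughly z / r.
    """
    A = np.array([[np.cos(bank1_rad), np.sin(bank1_rad)],
                  [np.cos(bank2_rad), np.sin(bank2_rad)]])
    b = np.array([az1_rad * r, az2_rad * r])
    y, z = np.linalg.solve(A, b)   # singular if the two bank angles are equal
    return y, z
```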
Extending previous work by Doehler and Bollmeyer, we describe a new implementation of an imaging radar
simulator. Our approach is based on modern computer graphics hardware, making heavy use of recent
technologies such as vertex and fragment shaders. Furthermore, to allow for a nearly realistic image, we generate
radar shadows by implementing shadow-map techniques on the programmable graphics hardware. The particular
implementation is tailored to imitate millimeter wave (MMW) radar but could easily be extended to other types of
radar systems.
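For readers unfamiliar with shadow mapping, the following minimal CPU sketch shows the per-sample test that the fragment shader evaluates; in the implementation described above both the depth rendering and this comparison run on the graphics hardware, and the names and bias value here are assumptions.

```python
import numpy as np

def in_radar_shadow(depth_map, px, py, point_range, bias=0.5):
    """CPU sketch of the shadow-map test performed per terrain sample.

    depth_map:   2-D array with the closest range per beam direction,
                 rendered in a first pass from the sensor's point of view
    px, py:      pixel coordinates of the terrain point projected into
                 that depth map
    point_range: distance of the terrain point from the sensor
    bias:        small offset to avoid self-shadowing artefacts
    A point lies in radar shadow if something closer to the sensor already
    occupies its line of sight.
    """
    return point_range > depth_map[py, px] + bias
```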
Within its research project ADVISE-PRO (Advanced visual system for situation awareness enhancement − prototype,
2003 - 2006), which is presented in this contribution, DLR has combined elements of Enhanced Vision and Synthetic
Vision into one integrated system to allow all low-visibility operations independently of the ground infrastructure.
The core element of this system is the adequate fusion of all information that is available on-board. This fusion process is
organized in a hierarchical manner. The most important subsystems are a) the sensor-based navigation, which determines
the aircraft's position relative to the runway by automatically analyzing sensor data (MMW, IR, radar altimeter) without
using either (D)GPS or precise knowledge about the airport geometry, b) an integrity monitoring of navigation data
and terrain data which verifies on-board navigation data ((D)GPS + INS) with sensor data (MMW-Radar, IR-Sensor,
Radar altimeter) and airport / terrain databases, c) an obstacle detection system and finally d) a consistent description of
the situation and a corresponding HMI for the pilot.
Enhanced Vision Systems (EVS) are currently being developed with the goal of alleviating restrictions in airspace and airport capacity under low-visibility conditions. Existing EVS are based on IR sensors, although the penetration of bad weather (dense fog and light rain) by MMW radar is remarkably better than in the infrared spectrum. However, the quality of MMW radar images is rather poor compared to IR images. The analysis of radar images can be simplified dramatically, though, when simple passive radar retro-reflectors are used to mark the runway. This presentation is the third in a series of studies investigating the use of such simple landing aids. In the first study the feasibility of the radar PAPI concept was determined; the second one provided first promising human performance results in a low-fidelity simulation. The present study examined pilot performance, workload, situation awareness, and crew coordination issues in a high-fidelity simulation of 'Radar-PAPI' visual aids supporting a precision straight-in landing in low visibility (CAT-II). Simulation scenarios were completed in a fixed-base cockpit simulator involving six two-pilot flight-deck crews. Pilots could derive visual cues to correct lateral glide-path deviations from 13 pairs of runway-marking corner reflectors. Vertical deviations were indicated by a set of six diplane reflectors using intensity coding to provide the PAPI categories needed for the correction of vertical deviations.
The study compared three display formats and associated crew coordination issues: (1) PF views a head-down B-scope display and switches to visual landing upon PNF's call-out that runway is in sight; (2) PF views a head-down C-scope display and switches to visual landing upon PNF's call-out that runway is in sight; (3) PF views through a head-up display (HUD) that displays primary flight guidance information and receives vertical and lateral guidance from PNF who views a head-down B-scope. PNF guidance is terminated upon PF's call-out that runway is in sight.
This contribution summarizes DLR's recent development of a considerably robust and reliable method to estimate the relative position of an aircraft with respect to a runway based on camera images only (TV, infrared or even PMMW radar). The special advantage of the proposed method is that neither a calibrated camera (in terms of focal length and mounting angles relative to the aircraft) nor any knowledge of specific points of the runway (3-D world coordinates and their 2-D identification within the image) is required. The only reference to the 3-D world that has to be known is the width of the runway stripe. The proposed algorithm computes both the relative height of the aircraft above the runway stripe and the lateral deviation from the centre line of the runway. Additionally, several image analysis procedures are presented which allow the runway stripe to be detected either by grouping the asphalt/grass border lines or by analyzing the alignment structure of runway lights. The proposed image analysis method fulfills real-time requirements and has been tested with several image sequences acquired with different types of IR cameras.
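To illustrate why the runway width alone can suffice as a world reference, the following much simplified sketch assumes level flight, zero roll and a known horizon row; under a pinhole model the focal length then cancels out of both the height and the lateral deviation. The proposed method goes beyond this illustration, since it also works without the mounting-angle knowledge assumed here; the function and variable names are hypothetical.

```python
def height_and_offset(u_left, u_right, v_below_horizon, runway_width):
    """Simplified pinhole illustration (level flight, zero roll, horizon known).

    u_left, u_right:  image columns of the two runway edges in one image row,
                      measured from the principal point
    v_below_horizon:  distance of that row below the horizon row (pixels)
    runway_width:     true width of the runway stripe (metres)
    Returns (height above the runway, lateral offset from the centre line).
    The focal length cancels out of both expressions.
    """
    edge_span = u_right - u_left                       # apparent runway width (pixels)
    height = runway_width * v_below_horizon / edge_span
    offset = -(u_left + u_right) / 2.0 * runway_width / edge_span
    return height, offset
```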
Up to now most Enhanced Vision Systems have been based on IR sensors. Although the penetration of bad weather (dense fog and light rain) by MMW radar is remarkably better than in the infrared spectrum, MMW sensors still have the disadvantage that radar data are often difficult to interpret. Therefore, it is not always possible for the pilot to obtain a reliable detection of runway structures within the radar images. However, prior field tests have shown that the installation of two different types of radar retro-reflectors along the runway can ease the image analysis task significantly and can provide the visual cues necessary to perform precision straight-in landings. A set of corner reflectors has proven suitable to mark the runway edges needed to adjust for lateral deviations, and a set of diplane reflectors provided cues to maintain a 3-degree glide-path descent.
The present study obtains first objective human performance data to examine the question of how efficiently pilots can utilize these visual cues. The study tested seven VFR-rated and seven IFR-rated pilots and used a low-fidelity human-in-the-loop visual tracking task to simulate a straight-in landing. Pilots were required to detect the lateral and vertical tracking error based on the intensity-coded visual cues provided by the simulated radar images. The study compares two display conditions derived from different spatial arrangements of the diplane reflectors that signal the glide-path angles. The first, the so-called "Radar-PAPI", was a horizontal row arrangement of four diplanes, and the second, the "Radar-VASI", was a two-over-two arrangement of four diplanes. A third condition simulated the existing visual color-coded PAPI landing aid and served as a baseline reference. Performance evaluation was based on the calculation of the root-mean-square error for both axes and on subjective preference statements of the pilots.
DLR has set up a number of projects to increase flight safety and the economics of aviation. Within these activities one field of interest is the development and validation of systems for pilot assistance in order to increase the situation awareness of the aircrew. All flight phases ('gate-to-gate') are taken into account, but since approaches, landing and taxiing are the most critical tasks in the field of civil aviation, special emphasis is given to these operations. As presented in previous contributions within SPIE's Enhanced and Synthetic Vision Conferences, DLR's Institute of Flight Guidance has developed an Enhanced Vision System (EVS) as a tool assisting especially approach and landing by improving the aircrew's situational awareness. The combination of forward-looking imaging sensors (such as EADS's HiVision millimeter wave radar), terrain data stored in on-board databases plus information transmitted from ground or other aircraft via data link is used to help pilots handle these phases of flight, especially under adverse weather conditions. A second pilot assistance module being developed at DLR is the Taxi And Ramp Management And Control - Airborne System (TARMAC-AS), which is part of an Advanced Surface Movement Guidance and Control System (ASMGCS). By means of on-board terrain databases and navigation data a map display is generated, which helps the pilot perform taxi operations. In addition to the pure map function, taxi instructions and other traffic can be displayed, as the aircraft is connected to the TARMAC planning and TARMAC communication, navigation and surveillance modules on the ground via data link. Recent experiments with airline pilots have shown that the capabilities of taxi assistance can be extended significantly by integrating EVS and TARMAC-AS functionalities. In particular, the extended obstacle detection and warning provided by the Enhanced Vision System increases the safety of ground operations. The presented paper gives an overview of these two assistance systems and discusses possible concepts and the potential of an integrated system with respect to taxi guidance operations.
Typically, Enhanced Vision (EV) systems consist of two main parts, sensor vision and synthetic vision. Synthetic vision usually generates a virtual out-the-window view using databases and accurate navigation data, e.g. provided by differential GPS (DGPS). The reliability of the synthetic vision depends highly on both the accuracy of the database used and the integrity of the navigation data. But especially in GPS-based systems, the integrity of the navigation cannot be guaranteed. Furthermore, only objects that are stored in the database can be displayed to the pilot. Consequently, unexpected obstacles are invisible, and this might cause severe problems. Therefore, additional information has to be extracted from sensor data to overcome these problems. In particular, the sensor data analysis has to identify obstacles and has to monitor the integrity of databases and navigation. Furthermore, if a lack of integrity arises, navigation data, e.g. the relative position of runway and aircraft, has to be extracted directly from the sensor data. The main contribution of this paper concerns the realization of these three sensor data analysis tasks within our EV system, which uses the HiVision 35 GHz MMW radar of EADS, Ulm, as the primary EV sensor. For the integrity monitoring, objects extracted from radar images are registered with both database objects and objects (e.g. other aircraft) transmitted via data link. This results in a classification into known and unknown radar image objects and, consequently, in a validation of the integrity of database and navigation. Furthermore, special runway structures are searched for in the radar image where they should appear. The outcome of this runway check contributes to the integrity analysis, too. Concurrently with this investigation, a radar-image-based navigation is performed, using neither precision navigation nor detailed database information, to determine the aircraft's position relative to the runway. The performance of our approach is demonstrated with real data acquired during extensive flight tests to several airports in Northern Germany.
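A minimal sketch of the registration step may clarify the known/unknown classification; the nearest-neighbour matching, the distance threshold and all names are assumptions for illustration, not the system's actual implementation.

```python
import numpy as np

def classify_radar_objects(radar_xy, known_xy, max_dist=15.0):
    """Match radar image objects against known objects in common ground coordinates.

    radar_xy: (N, 2) ground positions of objects extracted from the radar image
    known_xy: (M, 2) ground positions of database objects and data-link traffic,
              already transformed into the same coordinate frame
    Objects without a close known counterpart are flagged as unknown; a large
    fraction of unknown objects would hint at a database or navigation problem.
    """
    known, unknown = [], []
    for p in radar_xy:
        d = np.linalg.norm(known_xy - p, axis=1) if len(known_xy) else np.array([np.inf])
        (known if d.min() <= max_dist else unknown).append(p)
    return known, unknown
```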
The acquisition of navigation data is an important upgrade of enhanced vision (EV) systems. For example, the position of an aircraft relative to the runway during landing approaches has to be derived directly from the data of the EV sensors if no ILS or GPS navigation information is available. Due to its weather independence, MMW radar plays an important role among possible EV sensors. Generally, information about the altitude of the aircraft relative to a target ahead (the runway) is not available within radar data. A common approach to overcome this so-called vertical position problem is the use of the Flat Earth Assumption, i.e. the altitude above the runway is assumed to be the same as the actual altitude of the aircraft measured by the radar altimeter. Another approach known from the literature is to combine different radar images from different positions, similar to stereo and structure-from-motion approaches in computer vision. In this paper we present a detailed investigation of the latter approach. We examine the correspondence problem with regard to the special geometry of radar sensors as well as the principal methodology to estimate 3D information from different range-angle measurements. The main part of the contribution deals with the question of accuracy: What accuracy can be obtained? What are the influences of factors like vertical beam width, range and angular resolution of the sensor, the relative transformation between different sensor locations, etc.? Based on this investigation, we introduce a new approach for vertical positioning. As a special benefit, this method delivers a measure of validity which allows the judgement of the estimate of the relative vertical position from sensor to target. The performance of our approach is demonstrated with both simulated data and real data acquired during flight tests.
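As a simple illustration of how several range measurements constrain the vertical position, the sketch below turns each measurement into one height hypothesis and uses the spread of the hypotheses as a crude validity indicator; it is a simplified stand-in under assumed inputs, not the estimator or the validity measure developed in the paper.

```python
import numpy as np

def target_height_estimate(sensor_pos, target_xy, slant_ranges):
    """Estimate the height of a target from several slant-range measurements.

    sensor_pos:   (N, 3) aircraft positions (x, y, altitude) from navigation
    target_xy:    (2,) ground position of the target
    slant_ranges: (N,) measured ranges to the target, one per radar image
    Each measurement yields one height hypothesis; their spread serves as a
    simple indicator of how trustworthy the estimate is.
    """
    horiz = np.linalg.norm(sensor_pos[:, :2] - target_xy, axis=1)
    drop = np.sqrt(np.maximum(slant_ranges ** 2 - horiz ** 2, 0.0))
    heights = sensor_pos[:, 2] - drop      # target assumed below the aircraft
    return heights.mean(), heights.std()   # estimate and spread (validity proxy)
```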
Comprehensive situation awareness is very important for aircrews to handle complex situations like landing approaches or taxiing, especially under adverse weather conditions. Thus, DLR's Institute of Flight Guidance is developing an Enhanced Vision System that uses different forward-looking imaging sensors to gain the information needed for executing given tasks. Furthermore, terrain models, if available, can be used to control as well as to support the sensor data processing. Up to now, the most promising sensor, due to its lowest weather dependency compared to other imaging sensors, seems to be a 35 GHz MMW radar from DASA, Ulm, which provides range data at a frame rate of about 16 Hz. In previous contributions first experimental results of our radar data processing have been presented. In this paper we deal with radar data processing in more detail, focusing on the automatic extraction of features relevant for landing approaches and taxiing maneuvers. In the first part of this contribution we describe a calibration of the MMW radar, which is necessary to determine the exact relationship between raw sensor data (pixels) and world coordinates. Furthermore, the calibration gives us an idea of how accurately features can be located in the world. The second part of this paper is about our approach for automatically extracting features relevant for landing and taxiing. Improvements of spatial resolution as well as noise reduction are achieved with a multi-frame approach. The correspondence of features in different frames is found with the aid of navigation sensors like INS or GPS, but can also be established by tracking methods. To demonstrate the performance of our approach we applied the extraction method to simulated data as well as to real data. The real data have been acquired using a test van and a test aircraft, both equipped with a prototype of the imaging MMW radar from DASA, Ulm.
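The multi-frame idea can be sketched as follows: each frame's returns are projected into a common ground grid using the navigation pose and averaged there, which reduces noise and sharpens features. The flat 2-D geometry, the nearest-cell accumulation and all names are assumptions for illustration, not the paper's processing chain.

```python
import numpy as np

def accumulate_frames(frames, poses, grid_origin, cell_size, grid_shape):
    """Average radar returns from several frames in a common ground grid.

    frames: list of (N_i, 3) arrays with (range, azimuth, intensity) per return
    poses:  list of (x, y, heading) navigation states, one per frame
    """
    acc = np.zeros(grid_shape)
    cnt = np.zeros(grid_shape)
    for returns, (x0, y0, hdg) in zip(frames, poses):
        rng, az, val = returns[:, 0], returns[:, 1], returns[:, 2]
        # polar return -> world ground coordinates via the navigation pose
        wx = x0 + rng * np.cos(hdg + az)
        wy = y0 + rng * np.sin(hdg + az)
        ix = ((wx - grid_origin[0]) / cell_size).astype(int)
        iy = ((wy - grid_origin[1]) / cell_size).astype(int)
        ok = (ix >= 0) & (ix < grid_shape[0]) & (iy >= 0) & (iy < grid_shape[1])
        np.add.at(acc, (ix[ok], iy[ok]), val[ok])
        np.add.at(cnt, (ix[ok], iy[ok]), 1)
    return acc / np.maximum(cnt, 1)            # mean intensity per ground cell
```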