Future cockpit and aviation applications require high-quality airport databases. Accuracy, resolution, integrity, completeness, traceability, and timeliness are key requirements. For most aviation applications, attributed vector databases are needed; the geometry is based on points, lines, and closed polygons. To document the needs of the aviation industry, RTCA and EUROCAE developed the DO-272/ED-99 document in a joint committee. It states industry needs for data features, attributes, coding, and capture rules for Airport Mapping Databases (AMDB).
This paper describes the technical approach Jeppesen has taken to generate a world-wide set of three hundred AMDB airports. All AMDB airports are DO-200A/ED-76 and DO-272/ED-99 compliant. Jeppesen airports have a 5 m (CE90) accuracy and a 10^-3 integrity level. World-wide, all AMDB data are delivered in WGS84 coordinates. Jeppesen continually updates the databases.
Eight 757 commercial airline captains flew 22 approaches using the Reno Sparks 16R Visual Arrival under simulated Category I conditions. Approaches were flown using a head-down synthetic vision display to evaluate four tunnel ("minimal", "box", "dynamic pathway", "dynamic crow's feet") and three guidance ("ball", "tadpole", "follow-me aircraft") concepts and compare their efficacy to a baseline condition (i.e., no tunnel, ball guidance). The results showed that the tunnel concepts significantly improved pilot performance and situation awareness and lowered workload compared to the baseline condition. The dynamic crow's feet tunnel and follow-me aircraft guidance concepts were found to be the best candidates for future synthetic vision head-down displays. These results are discussed with implications for synthetic vision display design and future research.
Up to now, most Enhanced Vision Systems have been based on IR sensors. Although the penetration of bad weather (dense fog and light rain) by MMW radar is remarkably better than in the infrared spectrum, MMW sensors still have the disadvantage that radar data are often difficult to interpret. Therefore, it is not always possible for the pilot to obtain a reliable detection of runway structures within the radar images. However, prior field tests have shown that the installation of two different types of radar retro-reflectors along the runway can ease the image analysis task significantly and can provide the visual cues necessary to perform precision straight-in landings. A set of corner reflectors has proven suitable to mark the runway edges needed to adjust for lateral deviations, and a set of diplane reflectors provided cues to maintain a 3-degree glide-path descent.
The present study obtains the first objective human performance data to examine the question of how efficiently pilots utilize these visual cues. The study tested seven VFR-rated and seven IFR-rated pilots and used a low-fidelity human-in-the-loop visual tracking task to simulate a straight-in landing. Pilots were required to detect the lateral and vertical tracking error based on the intensity-coded visual cues provided by the simulated radar images. The study compares two display conditions derived from different spatial arrangements of the diplane reflectors that signal the glide-path angles. The first, the so-called "Radar-PAPI", was a horizontal row arrangement of four diplanes, and the second, the "Radar-VASI", was a two-over-two arrangement of four diplanes. A third condition simulated the existing color-coded visual PAPI landing aid and served as a baseline reference. Performance evaluation was based on the root-mean-square error for both axes and on the pilots' subjective preference statements.
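For concreteness, the per-axis performance metric reduces to a root-mean-square over the recorded tracking-error samples. The sketch below uses illustrative names and values, not data from the study:

```python
import math

def rms_error(errors):
    """Root-mean-square of the tracking-error samples recorded on one axis."""
    return math.sqrt(sum(e * e for e in errors) / len(errors))

# one RMSE per axis, e.g. lateral and vertical deviation samples
lateral_rmse = rms_error([3.0, -4.0])   # sqrt((9 + 16) / 2)
```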
Generating crowded, realistic scenes that include thousands of 3D objects for low-altitude flight is not a widely covered topic. In this study we generated immersive, dynamic, realistic scenes for low-altitude flight using a low-cost PC configuration, and obtained encouraging results. Some out-the-window scene features offered by high-end image generators were transported into our implementation. Hundreds of moving human beings, vehicles with realistic motion, and very dense forests are rendered at interactive frame rates, resulting in a low-cost/high-performance application. A limited number of pilots reviewed the scenes generated by our application and were satisfied.
We present a method to give single-band intensified night-vision imagery a natural daytime color appearance. For input, the method requires a true-color RGB source image and a grayscale night-vision target image. The source and target images are both transformed into a perceptually decorrelated color space. In this color space, a best-matching source pixel is determined for each pixel of the target image. The matching criterion uses the first-order statistics of the luminance distribution in a small window around the source and target pixels. Once a best-matching source pixel is found, its chromaticity values are assigned (transferred) to the target pixel, while the original luminance value of the target pixel is retained. The only requirement of the method is that the compositions of the source and target scenes resemble each other.
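The matching scheme above can be sketched in a few lines of NumPy. This is a simplified illustration, not the authors' implementation: a generic luminance-plus-two-chroma layout stands in for the perceptually decorrelated color space, the "first-order statistics" are a per-pixel windowed mean and standard deviation, and the best match is found by brute force over all source pixels (a real system would need an indexed search):

```python
import numpy as np

def local_stats(lum, win=3):
    """Per-pixel mean and std of luminance in a win x win window."""
    pad = win // 2
    padded = np.pad(lum, pad, mode="edge")
    # stack all shifted views to form the window around each pixel
    stack = np.stack([padded[i:i + lum.shape[0], j:j + lum.shape[1]]
                      for i in range(win) for j in range(win)])
    return stack.mean(axis=0), stack.std(axis=0)

def colorize(target_lum, src_lum, src_chroma, win=3):
    """Give each target pixel the chroma of the best-matching source pixel,
    keeping the target pixel's own luminance."""
    mu_t, sd_t = local_stats(target_lum, win)
    mu_s, sd_s = local_stats(src_lum, win)
    cand = np.stack([mu_s.ravel(), sd_s.ravel()], axis=1)    # (N, 2) source stats
    query = np.stack([mu_t.ravel(), sd_t.ravel()], axis=1)   # (M, 2) target stats
    # brute-force nearest neighbour in (mean, std) space
    d = np.abs(query[:, None, :] - cand[None, :, :]).sum(axis=2)
    best = d.argmin(axis=1)
    chroma = src_chroma.reshape(-1, src_chroma.shape[-1])[best]
    out = np.concatenate([target_lum.reshape(-1, 1), chroma], axis=1)
    return out.reshape(target_lum.shape + (3,))
```

Channel 0 of the result is the untouched target luminance; channels 1 and 2 are the transferred chromaticities.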
This article describes the human factors analysis from flight trials performed in Reno, NV. Flight trials were conducted with a Cheyenne 1 from Marinvent. Thirteen pilots flew the Cheyenne in seventy-two approaches to the Reno airfield. All pilots flew completely randomized settings. Three display configurations were evaluated: Electronic Flight Information System (EFIS); EFIS and 2D moving map; and 3D SVS Primary Flight Display (PFD) and 2D moving map. They included normal/abnormal procedure evaluation for: steep turns and reversals, unusual attitude recovery, radar vector guidance towards terrain, non-precision approaches, en-route alternate for non-IFR-rated pilots encountering IMC, and taxiing on complex taxi routes.
The flight trial goal was to evaluate the objective performance of pilots for the different display configurations. As dependent variables, positional and time data were measured. Analysis was performed by an ANOVA test. In parallel, all pilots answered subjective NASA Task Load Index, Cooper-Harper, Situation Awareness Rating Technique (SART), and questionnaires.
The results show that pilots flying the 2D/3D SVS configurations perform no worse than pilots with conventional systems. In addition, pilots flying the 3D SVS have significantly better terrain awareness, more stable 180° turns, and enhanced positional awareness while taxiing on the ground. Finally, even non-IFR-rated pilots are able to fly non-precision approaches under IMC with a 3D SVS.
NASA's Synthetic Vision Systems (SVS) project is developing technologies with practical applications that will eliminate low visibility conditions as a causal factor in civil aircraft accidents while replicating the operational benefits of clear day flight operations, regardless of the actual outside visibility condition. A major thrust of the SVS project involves the development/demonstration of affordable, certifiable display configurations that provide intuitive out-the-window terrain and obstacle information with advanced pathway guidance for transport aircraft. This experiment evaluated the influence of different tunnel and guidance concepts upon pilot situation awareness (SA), mental workload, and flight path tracking performance for Synthetic Vision display concepts using a Head-Up Display (HUD). Two tunnel formats (dynamic, minimal) were evaluated against a baseline condition (no tunnel) during simulated IMC approaches to Reno-Tahoe International Airport. Two guidance cues (tadpole, follow-me aircraft) were also evaluated to assess their influence on the tunnel formats. Results indicated that the presence of a tunnel on an SVS HUD had no effect on flight path performance but that it did have significant effects on pilot SA and mental workload. The dynamic tunnel concept with the follow-me aircraft guidance symbol produced the lowest workload and provided the highest SA among the tunnel concepts evaluated.
Helicopters often strike obstacles such as power lines. We are developing an obstacle detection and warning system for civil helicopters to reduce such collisions. A color camera, an infrared (IR) camera, and a millimeter-wave (MMW) radar are employed as its sensors. This paper describes image and data fusion of color and infrared images with the millimeter-wave information. An outline of the obstacle detection and warning system is described first. Then we propose a newly developed on-board system based on a fast A/D converter. A new algorithm is also proposed to identify the nearest target from the radar signal when other, more distant large-RCS obstacles are present. As a result, the system achieves 30 cycles per second of IR and color image acquisition, radar data processing, distance calculation, data fusion, and display. Finally, we outline the flight experiments planned for this year.
SIMTERM, a PC-based image generator developed for training operators of non-airborne military thermal imaging systems, is presented in this paper. SIMTERM allows its users to generate images closely resembling the thermal images of many military-type targets in different scenarios, as obtained with the simulated thermal camera. High fidelity of simulation was achieved through the use of measurable parameters of the thermal camera as input data. Two modified versions of this computer simulator, developed for designers and test teams, are presented as well.
The results of this experiment show that an aircraft primary flight display (PFD) with a flight path superimposed on a synthetic vision system (SVS) terrain image demonstrates a viable means for a pilot to confidently and consistently control an aircraft while flying highly accurate precision approaches to a 200 foot decision height (DH). The pathway, depicted as a Highway-In-The-Sky (HITS) in the display, provides a predictive method, as opposed to the reactive method associated with conventional needle and dial instruments, for controlling an aircraft. The intuitive nature of the HITS/SVS architecture provides greater situational awareness, less pilot workload, and improved accuracy during instrument flying compared to the conventional instrument landing system (ILS) round dials and needles.
Enhanced Vision Systems (EVS) are currently being developed with the goal of alleviating restrictions in airspace and airport capacity in low-visibility conditions. Existing EVS are based on IR sensors, although the penetration of bad weather (dense fog and light rain) by MMW radar is remarkably better than in the infrared spectrum. However, the quality of MMW radar images is rather poor compared to IR images. The analysis of radar images can be simplified dramatically when simple passive radar retro-reflectors are used to mark the runway. This presentation is the third in a series of studies investigating the use of such simple landing aids. The first study determined the feasibility of the radar-PAPI concept; the second provided first promising human performance results in a low-fidelity simulation. The present study examined pilot performance, workload, situation awareness, and crew coordination issues in a high-fidelity simulation of 'Radar-PAPI' visual aids supporting a precision straight-in landing in low visibility (CAT II). Simulation scenarios were completed in a fixed-base cockpit simulator involving six two-pilot flight-deck crews. Pilots could derive visual cues to correct lateral deviations from 13 pairs of runway-marking corner reflectors. Vertical deviations were indicated by a set of six diplane reflectors using intensity coding to provide the PAPI categories needed for the correction of vertical deviations.
The study compared three display formats and the associated crew coordination issues: (1) the PF views a head-down B-scope display and switches to a visual landing upon the PNF's call-out that the runway is in sight; (2) the PF views a head-down C-scope display and switches to a visual landing upon the PNF's call-out that the runway is in sight; (3) the PF views a head-up display (HUD) presenting primary flight guidance information and receives vertical and lateral guidance from the PNF, who views a head-down B-scope. PNF guidance is terminated upon the PF's call-out that the runway is in sight.
NASA's Aviation Safety Program, Synthetic Vision Systems Project is conducting research in advanced flight deck concepts, such as Synthetic/Enhanced Vision Systems (S/EVS), for commercial and business aircraft. An emerging thrust in this activity is the development of spatially-integrated, large field-of-regard information display systems. Head-worn or helmet-mounted display systems are being proposed as one method in which to meet this objective. System delays or latencies inherent to spatially-integrated, head-worn displays critically influence the display utility, usability, and acceptability. Research results from three different, yet similar technical areas - flight control, flight simulation, and virtual reality - are collectively assembled in this paper to create a global perspective of delay or latency effects in head-worn or helmet-mounted display systems. Consistent definitions and measurement techniques are proposed herein for universal application and latency requirements for Head-Worn Display S/EVS applications are drafted. Future research areas are defined.
To enable safe use of Synthetic Vision Systems at low altitudes, real-time range-to-terrain measurements may be required to ensure the integrity of terrain models stored in the system. This paper reviews and extends previous work describing the application of x-band radar to terrain model integrity monitoring. A method of terrain feature extraction and a transformation of the features to a common reference domain are proposed. Expected error distributions for the extracted features are required to establish appropriate thresholds whereby a consistency-checking function can trigger an alert. A calibration-based approach is presented that can be used to obtain these distributions. To verify the approach, NASA's DC-8 airborne science platform was used to collect data from two mapping sensors. An Airborne Laser Terrain Mapping (ALTM) sensor was installed in the cargo bay of the DC-8. After processing, the ALTM produced a reference terrain model with a vertical accuracy of less than one meter. Also installed was a commercial-off-the-shelf x-band radar in the nose radome of the DC-8. Although primarily designed to measure precipitation, the radar also provides estimates of terrain reflectivity at low altitudes. Using the ALTM data as the reference, errors in features extracted from the radar are estimated. A method to estimate errors in features extracted from the terrain model is also presented.
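As a toy illustration of the consistency-checking function described above (not the paper's implementation), the disparity between a radar-derived feature and the corresponding model-derived feature can be thresholded using the calibrated error distributions. The function name, the Gaussian-and-independent error assumption, and the default false-alert probability are all ours:

```python
from statistics import NormalDist

def integrity_alert(radar_elev, model_elev, sigma_radar, sigma_model, p_fa=1e-5):
    """Consistency check between a terrain feature measured by the x-band radar
    and the same feature predicted from the stored terrain model. Errors in
    both sources are assumed independent and Gaussian, with standard
    deviations obtained from calibration; the threshold is the two-sided
    Gaussian bound that keeps the false-alert probability below p_fa."""
    sigma = (sigma_radar ** 2 + sigma_model ** 2) ** 0.5   # combined error std
    k = NormalDist().inv_cdf(1.0 - p_fa / 2.0)             # threshold multiplier
    return abs(radar_elev - model_elev) > k * sigma
```

With 1 m standard deviations in both sources, the default setting alerts only when the disparity exceeds roughly six metres; tightening p_fa raises the threshold and lowers the false-alert rate at the cost of missed detections.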
This study supplements prior and concurrent field trials testing the operational benefit of an Advanced Surface Movement Guidance and Control System (A-SMGCS). A-SMGCS comprises a range of new technologies for both the flight deck and the air traffic control tower, enabling more efficient and safe airport surface movement. These technologies are expected to significantly increase throughput at presently highly congested major airports without compromising safety. One flight-deck A-SMGCS module is the onboard guidance system TARMAC-AS. This module consists of controller-pilot data link (DL) communication and an electronic moving map (EMM), which also displays airport surface traffic information to the flight crew. TARMAC-AS was evaluated in an investigation involving twenty commercial pilots who performed a series of approach, landing, and taxiing simulation trials in a fixed-base cockpit simulator. Evaluation was based on subjective questionnaires, the effectiveness of taxi operations, and visual scanning strategies derived from eye-point-of-gaze measurements. The results support the notion that EMM + DL improve awareness of the global airport surface situation, particularly under conditions of low visibility, enabling more efficient and timely surface movements and avoidance of conflicting traffic. A potential negative impact of increased head-down time was not substantiated.
Commercial airplanes have now become weapons of mass destruction to be used in asymmetric warfare against the United States. There is a clear need for enhanced situational awareness within the passenger cabin of airplanes. If the crew suspected that the security of an aircraft had been compromised, it would be critical for a crew member to be able to clearly and rapidly see what is occurring inside the passenger cabin without having to open the cockpit door. In case of an emergency, it would also be extremely valuable for ground personnel and responding aircraft to be able to visually monitor what is happening inside the cabin.
This contribution summarizes DLR's recent development of a considerably robust and reliable method to estimate the relative position of an aircraft with respect to a runway based on camera images alone (TV, infrared, or even PMMW radar). The special advantage of the proposed method is that neither a calibrated camera (in terms of focal length and mounting angles relative to the aircraft) nor any knowledge of special points of the runway (3-D world coordinates and their 2-D identification within the image) is required. The only reference to the 3-D world that has to be known is the width of the runway stripe. The proposed algorithm computes the relative height of the aircraft above the runway stripe as well as the lateral deviation from the runway centre line. Additionally, several image analysis procedures are presented which allow detection of the runway stripe either by grouping the asphalt/grass border lines or by analyzing the alignment structure of the runway lights. The proposed image analysis method fulfills real-time requirements and has been tested with several image sequences acquired with different types of IR cameras.
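DLR's actual algorithm needs neither calibration nor known mounting angles, but a simpler special case already shows why the runway width can serve as the only absolute 3-D reference: for a level pinhole camera over flat ground, the focal length cancels out of the height and offset equations. The sketch below is that illustrative special case, not the paper's method; it assumes the horizon row is known, e.g. from the vanishing point of the runway edge lines:

```python
def height_and_offset(u_left, u_right, v_below_horizon, runway_width):
    """Height above the runway and lateral offset from the centre line,
    recovered from a single image row: u_left / u_right are the columns where
    the two runway edges cross that row, and v_below_horizon is the row's
    distance (in pixels) below the horizon. Flat ground and a level pinhole
    camera are assumed; the focal length cancels, so the known runway width
    is the only 3-D reference that enters."""
    height = v_below_horizon * runway_width / (u_right - u_left)
    # the midpoint of the two edge columns maps back to the lateral offset
    offset = -height * (u_left + u_right) / (2.0 * v_below_horizon)
    return height, offset
```

For example, edges imaged at columns -55 and +35 on a row 100 pixels below the horizon, with a 45 m wide runway, place the camera 50 m above the runway and 5 m off the centre line.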
We have continued development of a system for multisensor image fusion and interactive mining based on neural models of color vision processing, learning and pattern recognition. We pioneered this work while at MIT Lincoln Laboratory, initially for color fused night vision (low-light visible and uncooled thermal imagery) and later extended it to multispectral IR and 3D ladar. We also developed a proof-of-concept system for EO, IR, SAR fusion and mining. Over the last year we have generalized this approach and developed a user-friendly system integrated into a COTS exploitation environment known as ERDAS Imagine. In this paper, we will summarize the approach and the neural networks used, and demonstrate fusion and interactive mining (i.e., target learning and search) of low-light Visible/SWIR/MWIR/LWIR night imagery, and IKONOS multispectral and high-resolution panchromatic imagery. In addition, we will demonstrate how target learning and search can be enabled over extended operating conditions by allowing training over multiple scenes. This will be illustrated for the detection of small boats in coastal waters using fused Visible/MWIR/LWIR imagery.
Enhanced Vision Systems (EVS) combine imagery from multiple sensors, possibly running at different frame rates and pixel counts, onto a display. In the case of a Helmet-Mounted Display (HMD), the user's line of sight is continuously changing, with the result that the sensor pixels rendered on the display change in real time. In an EVS, the various sensors provide overlapping fields of view, which requires stitching the imagery together to provide a seamless mosaic to the user. Further, sensors of different modalities may be present, requiring fusion of their imagery. All of this takes place in a dynamic flight environment in which the aircraft (with fixed-mounted sensors) is changing position and orientation while the users independently change their lines of sight. To provide well-registered, seamless imagery, very low throughput latencies are required while dealing with huge volumes of data. This poses both algorithmic and processing challenges that must be overcome to provide a suitable system. This paper discusses the system architecture, efficient stitching and fusing algorithms, and hardware implementation issues.
Common infrared video imagery can exhibit large variations in signal level across different portions of a scene. Global image processing techniques cannot use standard displays to show both the large variations and the detail within individual regions of interest. For this reason, local image processing approaches have been developed to increase contrast in localized areas. These are typically high-latency, post-video techniques targeted at specific applications. We have developed a unique video processing approach that has near-zero latency and is not computationally intensive, so imagery can be processed and displayed for real-time human observation using minimal hardware. Local scaling factors are computed using a flexible distribution technique, allowing adjustable levels of sensitivity and local detail enhancement.
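The abstract does not detail the distribution technique, so the following is only a generic sketch of the idea of local scaling factors: each tile is rescaled around its own mean by a clamped gain, so local detail survives regardless of the scene-wide dynamic range. The tile size, target contrast, and gain clamp are illustrative parameters, not the authors' values:

```python
import numpy as np

def local_enhance(img, tile=8, target_std=40.0, max_gain=4.0):
    """Stretch contrast tile by tile: each tile is rescaled around its own
    mean toward mid-grey so that local detail is visible regardless of the
    scene-wide dynamic range. The gain is clamped to limit noise
    amplification in flat regions."""
    out = np.empty(img.shape, dtype=float)
    h, w = img.shape
    for r in range(0, h, tile):
        for c in range(0, w, tile):
            block = img[r:r + tile, c:c + tile].astype(float)
            mu, sd = block.mean(), block.std()
            # scale local deviations up to the target contrast, clamped
            gain = min(max_gain, target_std / sd) if sd > 0 else 1.0
            out[r:r + tile, c:c + tile] = np.clip(128 + gain * (block - mu), 0, 255)
    return out
```

Because each tile's gain depends only on that tile's statistics, a single pass over the frame suffices, which is consistent with the low-latency, low-compute goal described above.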