Millimeter-wave thermal imaging provides a unique autonomous capability for aircraft landing in adverse weather, giving a pilot a comprehensive view of runway location and availability in real time with high fidelity. ThermoTrex Corporation has reported previous results from a passive millimeter-wave camera demonstration device. The addition of W-band low-noise amplifiers into the front end of this sparse phased-array thermal imaging camera has improved system thermal sensitivity by 5 dB over that previously reported. Processing upgrades have increased system frame update rate to about 1 Hz, and remote site field testing has established phenomenology relevant to aircraft landing guidance applications. Next-generation hardware design has addressed the issue of aircraft integration using an innovative lightweight, X-band antenna for 89 GHz thermal imaging. A flightworthy demonstration imager using this antenna is currently under construction for 10 Hz operation.
This paper presents a guideline for meeting the requirements on forward-looking sensors of an enhanced vision system for both military and civil transport aircraft. It updates a previous publication with special respect to airborne application. For civil transport aircraft an imaging mm-wave radar is proposed as the vision sensor for an enhanced vision system. For military air transport an additional high-performance weather radar should be combined with the mm-wave radar to enable advanced situation awareness, e.g. spot-SAR or air-to-air operation. For tactical navigation the mm-wave radar is useful due to its ranging capabilities. To meet these requirements the HiVision radar was developed and tested. It uses a robust concept of electronic beam steering and will meet the strict price constraints of transport aircraft. Advanced image processing and high-frequency techniques are currently being developed to enhance the performance of both the radar image and the integration techniques. The FMCW waveform also enables a sensor with low probability of intercept and high resistance against jamming. The 1997 highlight will be the optimization of the sensor and flight trials with an enhanced radar demonstrator.
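Although the abstract gives no waveform parameters for HiVision, the basic FMCW relationship such a sensor relies on can be sketched as follows; the sweep bandwidth, sweep time, and beat frequency below are illustrative assumptions, not values from the paper.

    # Minimal FMCW range sketch (illustrative parameters, not HiVision values).
    C = 3.0e8  # speed of light, m/s

    def fmcw_range(beat_freq_hz, sweep_bandwidth_hz, sweep_time_s):
        """Range of a target from the measured beat frequency of a linear FMCW sweep."""
        # Beat frequency f_b = (2 * R / c) * (B / T)  =>  R = f_b * c * T / (2 * B)
        return beat_freq_hz * C * sweep_time_s / (2.0 * sweep_bandwidth_hz)

    # Example: 200 MHz sweep over 1 ms, 100 kHz beat frequency -> 75 m
    print(fmcw_range(100e3, 200e6, 1e-3))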
ALG is a combination of raster imaging sensors, head-up displays, flight guidance, and procedures which allow pilots to perform hand-flown aircraft maneuvers in adverse weather, at night, or in low visibility conditions at facilities with minimal or no ground aids. Maneuvers in the context of ALG relate to takeoff, landing, rollout, taxi, and terminal parking. Commercial needs are driven by potential revenue savings, since today only 43 Type III and 80 Type II instrument landing system (ILS) runway ends in the United States are equipped for lower-minimum flight operations. Additionally, most of these ILS facilities are clustered at major gateway airports, which further impacts dispatch authority and general ATC regional delays. Infrastructure costs to upgrade additional runways must account not only for the high-integrity ground instrumentation, but also for the installation of lights and markers mandated for Cat III operations. The military services' ability to train under realistic battlefield conditions and to project power globally in support of national interests, while providing humanitarian aid, is significantly impaired by the inability to conduct precision approaches and landings in low visibility conditions, whether to instrumented runways or in a more tactical environment with operations into and out of unprepared landing strips, particularly when time does not permit deployment of ground aids and verification of their integrity. Recently, Lear Astronics, in cooperation with Consortium members of the ALG Program, concluded a flight test program which evaluated the utility of the ALG system in meeting both civil and military needs. Those results are the subject of this paper.
A machine perception system for aircraft and helicopters using multiple sensor data for state estimation is presented. By combining conventional aircraft sensors such as gyros, accelerometers, an artificial horizon, aerodynamic measuring devices, and GPS with vision data taken by conventional CCD cameras mounted on a pan-and-tilt platform, the position of the craft can be determined as well as its position relative to runways and natural landmarks. The vision data of natural landmarks are used to improve position estimates during autonomous missions. A built-in landmark management module decides which landmark the vision system should focus on, depending on the distance to the landmark and the aspect conditions. More complex landmarks such as runways are modeled with different levels of detail that are activated depending on range. A supervisor process compares vision data and GPS data to detect mistracking of the vision system, e.g. due to poor visibility, and tries to reinitialize the vision system or to set focus on another available landmark. During landing approach, obstacles such as trucks and airplanes can be detected on the runway. The system has been tested in real time within a hardware-in-the-loop simulation. Simulated aircraft measurements corrupted by noise and other characteristic sensor errors were fed into the machine perception system; the image processing module for relative state estimation was driven by computer-generated imagery. Results from real-time simulation runs are given.
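As a rough illustration of how GPS and vision-derived position fixes might be blended for state estimation (the paper's actual filter structure and noise models are not given in the abstract), a minimal one-dimensional Kalman measurement update could look like the following; all variances and values are assumptions.

    import numpy as np

    def kalman_update(x, P, z, R):
        """Scalar Kalman measurement update: state x with variance P, measurement z with variance R."""
        K = P / (P + R)          # Kalman gain
        x_new = x + K * (z - x)  # corrected state
        P_new = (1.0 - K) * P    # corrected variance
        return x_new, P_new

    # Predicted along-track position from inertial propagation (assumed values, metres)
    x, P = 1250.0, 25.0
    x, P = kalman_update(x, P, z=1243.0, R=100.0)  # GPS fix, coarser
    x, P = kalman_update(x, P, z=1247.5, R=4.0)    # vision fix on a runway landmark, finer
    print(x, P)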
One means of locating and classifying obstacles in video images is through the analysis of optical flow, defined as the motion on the image surface that results from camera and/or object motion. We present a computer vision algorithm that analyzes two classes of optical flow to determine the direction of a moving object captured in image sequences. First, the optical flow is computed with the Lucas-Kanade gradient-based technique. Then, the optical flow at each point in the image is decomposed into a translation component and an expansion component. Translation optical flow can be caused by motion perpendicular to the line of sight, whereas expansion optical flow can be caused by motion along the line of sight. By analyzing both types of optical flow, our computer vision algorithm successfully classifies scenarios into four types: (1) a collision condition; (2) a crossing condition, in which the target crosses in front of the camera; (3) a passing condition, in which the target passes to the side of the camera; and (4) a safe condition, in which the target travels away from the camera.
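The decomposition step can be illustrated with a crude sketch: treat the mean of the dense flow field as the translational component and measure the divergence of the residual as an expansion cue. This is an assumption-laden simplification of the decomposition described above, not the authors' algorithm; the dense flow itself could come from any gradient-based routine such as Lucas-Kanade.

    import numpy as np

    def decompose_flow(u, v):
        """Split a dense flow field (u, v) into a global translation and a residual expansion cue."""
        t_u, t_v = u.mean(), v.mean()           # translational component (lateral target motion)
        ru, rv = u - t_u, v - t_v               # residual field
        # Divergence of the residual: positive when the pattern expands (approach along the line of sight)
        div = np.gradient(ru, axis=1) + np.gradient(rv, axis=0)
        return (t_u, t_v), div.mean()

    # Synthetic example: pure expansion about the image centre
    y, x = np.mgrid[-20:21, -20:21].astype(float)
    trans, expansion = decompose_flow(0.05 * x, 0.05 * y)
    print(trans, expansion)   # near-zero translation, positive expansion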
Today's aircraft equipment comprises several independent warning and hazard avoidance systems such as GPWS, TCAS, and weather radar. It is the pilot's task to monitor all these systems and take the appropriate action in case of an emerging hazardous situation. The method developed for detecting and avoiding flight hazards combines all potential external threats to an aircraft into a single system. It is based on a model of the airspace surrounding the aircraft consisting of discrete volume elements. For each volume element the threat probability is derived or computed from sensor output, databases, or information provided via datalink. The position of the own aircraft is predicted by means of a probability distribution. This approach ensures that all potential positions of the aircraft in the near future are considered while weighting the most likely flight path. A conflict detection algorithm initiates an alarm when the threat probability exceeds a threshold. An escape manoeuvre is generated taking into account all potential hazards in the vicinity, not only the one which caused the alarm. The pilot receives visual information about the type, location, and severity of the threat. The algorithm was implemented and tested in a flight simulator environment. The current version comprises traffic, terrain, and obstacle hazard avoidance functions. Its general formulation allows easy integration of, e.g., weather information or airspace restrictions.
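A toy version of the conflict test described above might weight per-cell threat probabilities by the probability that the aircraft occupies each cell and alarm on a threshold; the grid size, probabilities, and threshold below are invented for illustration and are not the paper's parameters.

    import numpy as np

    def conflict_alarm(threat_prob, occupancy_prob, threshold=0.05):
        """Alarm if the expected conflict probability over the volume grid exceeds a threshold.

        threat_prob:    per-cell probability that the cell contains a hazard
        occupancy_prob: per-cell probability that the aircraft will occupy the cell soon
        """
        expected_conflict = float(np.sum(threat_prob * occupancy_prob))
        return expected_conflict > threshold, expected_conflict

    # 3D grid of discrete volume elements around the aircraft (assumed 20x20x10 cells)
    threat = np.zeros((20, 20, 10))
    threat[12:15, 9:12, 4:6] = 0.8        # e.g. terrain or traffic ahead
    occupancy = np.zeros_like(threat)
    occupancy[10:16, 9:12, 4:6] = 0.02    # predicted positions along the likely flight path

    print(conflict_alarm(threat, occupancy))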
Collision avoidance is of concern to all aircraft, requiring the detection and identification of hazardous terrain or obstacles in sufficient time for clearance maneuvers. The collision avoidance requirement is even more demanding for helicopters, as their unique capabilities result in extensive operations at low altitude, near terrain and other hazardous obstacles. To augment the pilot's visual collision avoidance abilities, some aircraft are equipped with 'enhanced-vision' systems or terrain collision warning systems. Enhanced-vision systems are typically very large and costly, not very covert, and difficult to install in a helicopter. The display is typically raw imagery from infrared or radar sensors, and can require a high degree of pilot interpretation and attention. Terrain collision warning systems that rely on stored terrain maps are often of low resolution and accuracy and do not represent hazards introduced after the map was sampled. Such hazards could include aircraft parked on a runway, man-made towers or buildings, and hills. In this paper, a low-cost dual-function scanning pencil-beam, millimeter-wave radar forward sensor is used to determine whether an aircraft's flight path is clear of obstructions. Due to the limited space and weight budget in helicopters, the system is a dual-function unit that is substituted in place of the existing radar altimeter. The system combines a 35 GHz forward-looking obstacle avoidance radar and a 4.3 GHz radar altimeter. The returns of the forward-looking 35 GHz 3D radar are used to construct a terrain and obstruction database surrounding the aircraft, which is presented to the pilot as a synthetic perspective display. The 35 GHz forward-looking radar and the associated display were evaluated in a joint NASA/Honeywell flight test program in 1996. The tests were conducted on a NASA/Army test helicopter. The test program clearly demonstrated the system's potential usefulness for collision avoidance.
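As a hedged sketch of how radar returns accumulated into a terrain/obstruction database might be checked against the projected flight path (the actual database structure and clearance logic of the system are not described here), consider a simple elevation-grid test with invented dimensions and clearances:

    import numpy as np

    def path_is_clear(elevation_grid, path_cells, path_altitudes, clearance_m=30.0):
        """Return False if any point of the projected flight path comes within
        clearance_m of the stored terrain/obstruction elevation."""
        for (row, col), alt in zip(path_cells, path_altitudes):
            if alt - elevation_grid[row, col] < clearance_m:
                return False
        return True

    # Elevation database built from accumulated 35 GHz radar returns (values assumed, metres)
    grid = np.full((100, 100), 120.0)
    grid[40:45, 60:65] = 210.0           # e.g. a tower or building detected ahead

    path = [(40 + i, 58 + i) for i in range(6)]   # projected ground track through the grid
    altitudes = [230.0 - 2.0 * i for i in range(6)]

    print(path_is_clear(grid, path, altitudes))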
One of the main advantages of millimeter wave (MMW) imaging radar systems results from the fact that their imaging performance is nearly independent of atmospheric effects such as fog, rain, and snow. For this reason, MMW radar seems to be one of the most promising sensors for enhanced vision systems (EVS), which can aid the pilot during approach, landing, and taxiing, especially under bad weather conditions. Compared to other imaging devices, MMW radar systems deliver a lower image resolution and update rate, and have a worse signal-to-noise ratio. Moreover, the commonly proposed perspective-view projection in EVS applications results in some imaging errors and artifacts. These sensor-specific effects should be taken into account during ongoing EVS research and development. To provide the opportunity of studying imaging radar systems in ground-based research environments, we have developed a new type of MMW radar sensor simulator. Our approach is based on detailed terrain and/or airport databases, as they are available for normal visual simulations or VR applications. We have augmented these databases with specific attributes which describe object surface properties with respect to MMW. Our approach benefits from state-of-the-art high-speed computer graphics hardware and software. It is implemented in C/C++, uses the OpenGL graphics standard and the SGI Performer database handler, runs on any SGI graphics workstation, and achieves an image update rate of about 20 Hz, which is more than currently available radar systems deliver. A main advantage of our approach is that it can be integrated easily into emerging multisensor-based enhanced vision systems, making it a useful tool for EVS research and development.
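The simulator itself is graphics-pipeline based, but its principle can be caricatured in a few lines: render the augmented database to a depth buffer and per-pixel MMW reflectivity, then collapse those buffers into a range-binned radar-style image. The sketch below is only an assumed, drastically simplified stand-in for that idea, not the C/C++/OpenGL implementation described above.

    import numpy as np

    def render_to_radar_image(depth_m, reflectivity, n_range_bins=64, max_range_m=2000.0):
        """Collapse a rendered depth buffer plus per-pixel MMW reflectivity into a
        crude azimuth/range radar image (azimuth taken as the image column)."""
        rows, cols = depth_m.shape
        radar = np.zeros((n_range_bins, cols))
        bin_idx = np.clip((depth_m / max_range_m * n_range_bins).astype(int), 0, n_range_bins - 1)
        for c in range(cols):
            # Accumulate the reflected power of every surface pixel falling into each range bin
            np.add.at(radar[:, c], bin_idx[:, c], reflectivity[:, c])
        return radar

    # Toy rendered scene: sloping ground plane with one strongly reflecting object (assumed values)
    depth = np.tile(np.linspace(100.0, 1900.0, 128)[:, None], (1, 256))
    refl = np.full_like(depth, 0.05)
    refl[60:70, 120:136] = 0.9            # e.g. a hangar with high MMW reflectivity
    image = render_to_radar_image(depth, refl)
    print(image.shape, image.max())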
A perspective primary flight display and a navigation display format were evaluated in a flying testbed. The flight tests comprised ILS and standard approaches as well as low-level operations utilizing the depiction of a spatial channel, and demonstrations of the inherent ground proximity warning function. In the cockpit of the VFW614, the left seat was equipped with a sidestick and a flat panel display, which showed both the 4D display and the navigation display format. Airline and air force pilots flew several missions each. Although most of the pilots criticized that a typical flight director commanding the aircraft's attitude was missing, they could follow the channel precisely. However, some airline pilots noted a lack of vertical guidance information during the final approach. Leaving and re-entering the channel could be easily accomplished from any direction. In summary, the pilots' assessment of the display concept yielded an overall improvement of situational awareness. In particular it was stated that such displays are an appropriate means to avoid CFIT accidents. With the first prototypes of 3D graphics generators designed for avionics becoming available, the flight evaluation will continue, including feasibility demonstrations of high-performance graphics for civil and military aircraft applications.
Architecture optimization requires numerous inputs, from hardware to software specifications. The task of varying these input parameters to obtain an optimal system architecture with regard to cost, specified performance, and method of upgrade considerably increases the development cost, because the number of possible configurations is effectively unbounded and most of them cannot even be defined by any simple enumeration or set of inequalities. We shall address the use of a PC-based tool using genetic algorithms to optimize the architecture of an avionics synthetic vision system, specifically a passive millimeter wave system implementation.
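A bare-bones genetic-algorithm loop of the kind such a tool might employ is sketched below; the bit-string encoding of architecture choices, the cost/performance trade-off in the fitness function, and all parameters are purely illustrative assumptions, not the tool's actual models.

    import random

    def fitness(genome):
        """Toy score trading off performance against cost; a real tool would model the
        candidate PMMW architecture (antenna, receiver count, processor, ...) here."""
        performance = sum(genome)                 # pretend each enabled option adds capability
        cost = sum(3 * g for g in genome[:4])     # pretend the first options are the expensive ones
        return performance - 0.4 * cost

    def evolve(pop_size=30, genome_len=12, generations=40, p_mut=0.05):
        pop = [[random.randint(0, 1) for _ in range(genome_len)] for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness, reverse=True)
            parents = pop[: pop_size // 2]                      # truncation selection
            children = []
            while len(children) < pop_size - len(parents):
                a, b = random.sample(parents, 2)
                cut = random.randrange(1, genome_len)           # single-point crossover
                child = a[:cut] + b[cut:]
                child = [1 - g if random.random() < p_mut else g for g in child]  # mutation
                children.append(child)
            pop = parents + children
        return max(pop, key=fitness)

    print(evolve())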
We describe a sophisticated system for the collection of numerous streams of image and aircraft state data from an airborne platform. The system collects and stores 7 different sources of analog video data; 3 separate sources of digital video data at aggregate rates of up to 32 Megabytes per second, written to a removable tape device with a capacity of 100 Gigabytes, or 50 minutes of recording; low-bandwidth aircraft state information from inertial sources; and other ancillary data. Data from all sources are timestamped with a common time source for synchronization. The task of accurately timestamping multiple disparate data sources is a challenging one, and it is discussed in some detail. Although the technology that can be applied to this kind of effort advances and changes rapidly, certain design paradigms remain valid independent of the specific implementation hardware. General principles of design and operation, as well as system specifics, are described; it is hoped that this record will be a useful reference for future efforts of this kind.
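One simple way to tag heterogeneous records against a single reference clock, in the spirit of the common-time-source synchronization described above, is sketched below; the record format, latency corrections, and clock choice are assumptions for illustration, not the system's actual design.

    import time
    from dataclasses import dataclass

    @dataclass
    class StampedRecord:
        source: str          # e.g. "analog_video_3", "digital_video_1", "ins"
        capture_latency_s: float
        payload: bytes
        timestamp_s: float = 0.0

    def stamp(record: StampedRecord, common_clock=time.monotonic) -> StampedRecord:
        """Stamp a record with the shared time base, backing out a known capture latency
        so that streams with different pipeline delays stay aligned."""
        record.timestamp_s = common_clock() - record.capture_latency_s
        return record

    frame = stamp(StampedRecord("digital_video_1", capture_latency_s=0.012, payload=b"\x00" * 1024))
    nav = stamp(StampedRecord("ins", capture_latency_s=0.001, payload=b"\x00" * 64))
    print(frame.timestamp_s, nav.timestamp_s)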
In synthetic vision systems (SVS), environmental data and mission-critical information must be provided to pilots and system components. For systems with demanding visual graphics representations or enhanced ground proximity warning systems (EGPWS), databases offer data in high resolution with distinct features. Investigations show that pre-flight and in-flight services and systems such as Aeronautical Information Publications (AIP), flight planning, map creation, FMS, flight displays, or EGPWS are all based on dependent data from a few sources. They are usually terrain, cultural, flight-related, weather, or NOTAM based. Our concept proposes a single relational high-quality database (HQ-DB) for all of the applications described above. It allows worldwide information to be stored at an appropriate resolution with verified quality. Obviously such an HQ-DB will not be carried in an aircraft: the amount of data is too large, and geographical information system storage formats do not allow real-time extraction of data. Therefore, for every application a separate real-time onboard database (RTO-DB) is derived from the HQ-DB. In the off-line RTO-DB creation process, data is converted into a real-time-capable graphics data format formed by tiles with integrated levels of detail and geometric representations of synthetic objects. During a mission, the database server integrates pilot inputs and changes received via data link. Manual inputs changing the appearance of primary flight displays can also directly influence the RTO-DB. The resulting data is sent to the application using it. In our system this database concept is used for generic flight guidance displays, for a simulator visual display system, and for a general algorithmic flight hazard warning system.
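A stripped-down illustration of the off-line tiling step, converting a large elevation array into fixed-size tiles with several levels of detail per tile, might look like the following; the tile size, LOD factors, and array shapes are assumptions, not the RTO-DB format itself.

    import numpy as np

    def build_tiles(elevation, tile=64, lod_factors=(1, 2, 4)):
        """Cut a terrain elevation grid into square tiles and store each tile at several
        levels of detail (simple decimation), as an off-line RTO-DB build step might."""
        tiles = {}
        rows, cols = elevation.shape
        for r in range(0, rows - tile + 1, tile):
            for c in range(0, cols - tile + 1, tile):
                patch = elevation[r:r + tile, c:c + tile]
                tiles[(r // tile, c // tile)] = {f: patch[::f, ::f].copy() for f in lod_factors}
        return tiles

    terrain = np.random.default_rng(0).uniform(0.0, 500.0, size=(256, 256))  # placeholder data
    db = build_tiles(terrain)
    print(len(db), db[(0, 0)][4].shape)   # 16 tiles, coarsest LOD is 16x16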
The Charles Stark Draper Laboratory, Inc. and students from the Massachusetts Institute of Technology and Boston University have cooperated to develop an autonomous aerial vehicle that won the 1996 International Aerial Robotics Competition. This paper describes the approach, system architecture, and subsystem designs for the entry. The entry combines many technology areas: navigation, guidance, control, vision processing, human factors, packaging, power, real-time software, and others. The aerial vehicle, an autonomous helicopter, performs navigation and control functions using multiple sensors: differential GPS, an inertial measurement unit, a sonar altimeter, and a flux compass. The aerial vehicle transmits video imagery to the ground. A ground-based vision processor converts the image data into target position and classification estimates. The system was designed, built, and flown in less than one year and has provided many lessons about autonomous vehicle systems, several of which are discussed. In an appendix, our current research in augmenting the navigation system with vision-based estimates is presented.
Sensors for synthetic vision are needed to extend the mission profiles of helicopters. A special task for various applications is the autonomous position hold of a helicopter above a fixed or moving ground target. A computer-vision based system, which is able to observe the helicopter flight state during hover and low-speed flight, based on the detection and tracking of significant but arbitrary features, has been developed by the Institute of Flight Mechanics of DLR Deutsche Forschungsanstalt fuer Luft- und Raumfahrt e.V. The approach is as follows: a CCD camera looks straight down at the ground and produces an image of the ground view. The digitized video signal is fed into a high-performance on-board computer which looks for distinctive features in the image. Any motion of the helicopter results in movements of these patterns in the camera image. By tracking the distinctive features over the succession of incoming images, and with the support of inertial sensor data, it is possible to calculate all the helicopter state variables needed for a position hold control algorithm. This information is obtained from a state variable observer, which means that no additional information about the appearance of the camera view has to be known in advance to achieve autonomous helicopter hover stabilization. The hardware architecture for this image evaluation system consists mainly of several PowerPC processors which communicate with the aid of transputers and an image distribution bus. Feature tracking is performed by a dedicated 2D-correlator subsystem. The paper presents the characteristics of the computer vision sensor and demonstrates its functionality.
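The core geometric idea, converting tracked feature displacements from a down-looking camera into horizontal drift, can be sketched as follows; this is a bare pinhole-model illustration under an assumed level-flight, known-height condition, not the DLR observer or correlator implementation.

    import numpy as np

    def drift_from_features(prev_pts, curr_pts, height_m, focal_px, dt_s):
        """Estimate horizontal drift velocity from tracked feature displacements seen by a
        down-looking camera at known height (pinhole model, level flight assumed)."""
        disp_px = np.asarray(curr_pts, float) - np.asarray(prev_pts, float)
        mean_disp = disp_px.mean(axis=0)                 # average image-plane motion, pixels
        ground_disp_m = mean_disp * height_m / focal_px  # back-project to the ground plane
        return ground_disp_m / dt_s                      # metres per second (x, y)

    prev = [(320.0, 240.0), (100.0, 80.0), (500.0, 400.0)]
    curr = [(323.0, 239.0), (103.1, 79.2), (502.9, 398.9)]
    print(drift_from_features(prev, curr, height_m=12.0, focal_px=800.0, dt_s=0.04))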
The reference data consist of two or more central-projection images of a 3D distribution of object points, i.e., a 3D scene. The positions and orientations of the cameras which generated the reference images are unknown, as are the coordinates of all the object points. We derive and demonstrate invariant methods for synthesizing nadir and perspective views of the object points, e.g., 2D maps of the 3D scene. The techniques we demonstrate depart from standard methods of resection and intersection, which first recover the camera geometry, then reconstruct object points from the reference images, and finally back-project to create the nadir or perspective views. The first steps in our 'invariant methods' approach are to perform the image measurements and computations required to estimate the invariant relationships linking the reference images to one another and to the new nadir and perspective views. The empirically estimated invariant relationships can thereafter be used to transfer conjugate points and lines from the reference images to their synthesized conjugates in the nadir and perspective views. Computation of the object model, i.e., the digital elevation model, is not required in this approach. In this paper we develop algorithms for invariant transfer of conjugate lines which exploit the synergy of line and point transfer. We validate our algorithms with synthetic CAD models. A subsequent paper will validate the line transfer algorithms with uncontrolled aerial imagery and maps with occasional missing or inaccurately delineated features.
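While the paper's invariant formulation is not reproduced here, the flavour of image-to-image transfer without 3D reconstruction can be illustrated with classical epipolar point transfer: given fundamental matrices relating two reference views to a new view, the conjugate of a measured point pair is the intersection of its two epipolar lines. The cameras and object point below are invented placeholders used only to check the sketch against a known projection.

    import numpy as np

    def skew(v):
        return np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])

    def fundamental(P_src, P_dst):
        """F such that l_dst = F @ x_src is the epipolar line in the destination view
        (general-camera formula F = [e']_x P_dst P_src^+)."""
        _, _, vt = np.linalg.svd(P_src)
        centre = vt[-1]                             # camera centre of the source view
        e_dst = P_dst @ centre                      # epipole in the destination image
        return skew(e_dst) @ P_dst @ np.linalg.pinv(P_src)

    def transfer_point(F31, F32, x1, x2):
        """Epipolar transfer: the view-3 conjugate of (x1, x2) is the intersection of
        the epipolar lines that x1 and x2 induce in view 3."""
        l1 = F31 @ np.append(x1, 1.0)
        l2 = F32 @ np.append(x2, 1.0)
        x3 = np.cross(l1, l2)
        return x3[:2] / x3[2]

    # Toy cameras and one object point (all invented) to check the transfer
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
    P3 = np.hstack([np.eye(3), np.array([[0.0], [-1.0], [0.5]])])
    X = np.array([0.3, -0.2, 4.0, 1.0])
    x1, x2, x3_true = [(P @ X)[:2] / (P @ X)[2] for P in (P1, P2, P3)]
    x3_pred = transfer_point(fundamental(P1, P3), fundamental(P2, P3), x1, x2)
    print(x3_true, x3_pred)   # the two should agree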
We describe a sensor fusion algorithm based on a set of simple assumptions about the relationship among the sensors. Under these assumptions we estimate the common signal in each sensor, and the optimal fusion is then approximated by a weighted sum of the common component in each sensor output at each pixel. We then examine a variety of techniques to map the sensor signals onto perceptual dimensions, such that the human operator can benefit from the enhanced fused image, and simultaneously, be able to identify the source of the information. We examine several color mapping schemes.
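A minimal per-pixel weighted-sum fusion with one possible color mapping (fused signal to luminance, the local sensor difference steered into opposing color channels so the operator can tell which sensor contributed) is sketched below; the weights and mapping are illustrative assumptions, not the schemes examined in the paper.

    import numpy as np

    def fuse_and_colorize(visible, thermal, w_vis=0.5, w_ir=0.5):
        """Weighted-sum fusion of two registered, normalized images plus a simple color
        mapping that keeps the source of the information identifiable."""
        fused = w_vis * visible + w_ir * thermal          # common/fused component per pixel
        diff = thermal - visible                          # which sensor dominates locally
        rgb = np.empty(visible.shape + (3,))
        rgb[..., 0] = np.clip(fused + 0.5 * np.maximum(diff, 0.0), 0.0, 1.0)   # thermal excess -> red
        rgb[..., 1] = np.clip(fused, 0.0, 1.0)                                  # fused luminance
        rgb[..., 2] = np.clip(fused + 0.5 * np.maximum(-diff, 0.0), 0.0, 1.0)  # visible excess -> blue
        return rgb

    vis = np.random.default_rng(1).random((4, 4))
    ir = np.random.default_rng(2).random((4, 4))
    print(fuse_and_colorize(vis, ir).shape)   # (4, 4, 3)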
Two recently developed color image fusion techniques, the TNO fusion scheme and the MIT fusion scheme, are applied to visible and thermal images of militarily relevant scenarios. An observer experiment was performed to test whether the increased amount of detail in the fused images can yield improved observer performance in a task that requires situational awareness. The task devised involves the detection and localization of a person in the displayed scene relative to some characteristic details that provide the spatial context. Two important results are presented. First, it is shown that color-fused imagery leads to improved target detection over all other modalities. Second, the results show that observers can indeed determine the relative location of a person in a scene with significantly higher accuracy when they work with fused images, compared to the original image modalities. The MIT color fusion scheme yields the best overall performance. Even the simplest fusion scheme yields an observer performance better than that obtained with the individual images.
The use of night vision devices (NVDs) by US Army foot soldiers, aviators, and drivers of combat and tactical wheeled vehicles has enhanced operations at night by allowing increased mobility and potentially safer operations. With this increased capability in the night environment has come an increased exposure to the hazards of that environment and the risks that the command structure must manage and balance against mission requirements. Numerous vehicular accidents have occurred during night field exercises involving drivers wearing image intensification (I2) systems. These accidents can frequently be attributed to perceptual problems experienced by the drivers. Performance with NVDs generally increases with practice and experience. However, there is little formal training provided in night driving skills and few opportunities to practice these skills under realistic conditions. This paper reports the approach and preliminary results of an effort to define and demonstrate a low-cost night driving simulator concept for training night driving skills with I2 devices and to identify and evaluate the techniques and resources that are available for implementing this approach.
This paper describes a modification to an Army Driver's Vision Enhancement (DVE) system. The FLIR turret gimbals are instrumented with resolvers to enable a feedback control loop. The FLIR video is displayed on a head-mounted binocular display system which has an integrated head tracker for pointing the FLIR turret. The paper describes and compares several candidate head tracker technologies. An acoustic head tracker was selected as a baseline to demonstrate the concept in a simple and quick manner. Additionally, a hybrid optical-inertial tracker implementation is discussed which also has the capability to minimize the effects of vibration on the DVE system.
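The head-tracker-to-turret pointing loop described above could, in its simplest form, be a proportional controller that slews the gimbals toward the tracked head angles using the resolver readback; the gains, rate limits, and update rate below are assumptions for illustration only, not the system's actual control law.

    def turret_step(head_az, head_el, resolver_az, resolver_el, kp=2.0, max_rate=60.0, dt=0.02):
        """One control cycle: command gimbal rates proportional to the pointing error
        between the head tracker and the resolver-measured turret angles (deg, deg/s)."""
        def rate(target, actual):
            cmd = kp * (target - actual)
            return max(-max_rate, min(max_rate, cmd))       # respect gimbal rate limits
        az_rate = rate(head_az, resolver_az)
        el_rate = rate(head_el, resolver_el)
        return resolver_az + az_rate * dt, resolver_el + el_rate * dt

    # Driver looks 15 deg right, 5 deg down; turret currently at boresight
    az, el = 0.0, 0.0
    for _ in range(50):
        az, el = turret_step(15.0, -5.0, az, el)
    print(round(az, 2), round(el, 2))   # converges toward (15, -5)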
The electronic content of the automobile is growing every year. American business analysts estimate that the electrical/electronic content of the average vehicle will be $2000/vehicle by the year 2000. This represents an electrical component industry of $68 billion/year (using 1995 total world vehicle sales figures). As the demand for increased functionality grows, the complexity of vehicle design increases. At the same time, the need to reduce vehicle costs and the need to maintain high standards of quality and reliability are forever present. Automobile manufacturers need to minimise the time taken between the decision to build a particular model and the arrival of the first production vehicle (or Job #1). Jaguar's XK8 took 30 months from project approval to Job #1. The XK8 included many systems which were deployed by Jaguar for the first time, such as the all-new V8 powerplant, drive-by-wire, serial communications, and active ride control. Maintaining high levels of quality and reliability within short cycle times requires effective partnerships between automobile manufacturers and system, or sub-system, suppliers. Future vehicle designs will be more dependent on these relationships than ever before, with new requirements being generated by pending legislation and by new functionality which will be offered in the next few years.
The Houston Ship Channel ranks as America's number one port in foreign tonnage, welcoming more than 50,000 cargo ships and barges annually. Locally, 196,000 jobs, 5.5 billion dollars in business revenue, and 213 million dollars in taxes are generated. Unfortunately, on 32 days of each year vessel traffic stops for hours due to fog, causing an estimated 40-100 million dollars in losses as ships wait idly in the channel for the weather to clear. In addition, poor visibility has contributed to past vessel collisions which have resulted in channel closure and associated damage to property and the environment. Today's imaging technology for synthetic vision systems and enhanced situational awareness systems offers a new solution to this problem. Whereas these systems have typically been targeted at aircraft landing, the channel navigation application provides a peripheral ground-based market. This paper describes two imaging solutions to the problem: one using an active 35 GHz scanning radar and the other using a 94 GHz passive millimeter wave camera.
Understanding and characterizing the forward environment of a ground vehicle is a pivotal element in determining the appropriate maneuver-response strategy under varied degrees of vehicle automation. Potential degrees of automation span the probable near-term adoption of longitudinal crash countermeasure warning devices all the way through the longer-term objective of full vehicle automation. Between these extremes lies partially automated longitudinal crash avoidance, a potentially rich area of application for synthetic vision. This paper addresses the application of synthetic vision to vehicle automation from a systems perspective: from the development of a collision avoidance framework to the identification of the sensor and environmental descriptions that synthetic vision applications must address. Obstacle detection modules, the human cognitive component, and the dynamics of automated ground vehicle control comprise elements of this framework. The areas needed to fulfill a structured program and suggested areas for further in-depth research are identified.
The main problem in scene matching is the differences between multi-sensor images, such as resolution difference and gray-level difference, which make it very difficult to register two images. This paper describes the statistical properties and an autocorrelation model of the gray-level difference between such images, in an attempt to rectify the gray level of the sensed image and thereby solve the problem. It is well known that a Gauss-Markov random process can be obtained as the output of a linear system whose input is Gaussian white noise. Assuming the gray-level difference is an ergodic, wide-sense stationary 2D random field with zero mean in a local region, the autocorrelation model of the gray-level difference is studied to identify a linear system through which a simulated difference distribution is generated and used to rectify the gray level of the sensed image. After rectification, the gray levels of the sensed image and the reference image are similar, so that registration is much easier. The validity of this method is verified by experimental results with several pairs of aerial and satellite images.
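As a small, hedged illustration of the identification step (estimating first-order Gauss-Markov parameters from the autocorrelation of a zero-mean difference patch and driving white noise through the corresponding recursive filter), one could do the following; the separable first-order model, the crude causal recursion, and the stand-in data are assumptions, and the paper's actual 2D model and rectification procedure are not reproduced.

    import numpy as np

    def ar1_from_difference(diff_patch):
        """Estimate separable first-order (Gauss-Markov) autocorrelation parameters of a
        zero-mean gray-level difference patch: R(k) ~ sigma^2 * rho^|k| along rows/columns."""
        d = diff_patch - diff_patch.mean()
        var = (d * d).mean()
        rho_x = (d[:, :-1] * d[:, 1:]).mean() / var      # lag-1 correlation along columns
        rho_y = (d[:-1, :] * d[1:, :]).mean() / var      # lag-1 correlation along rows
        return var, rho_x, rho_y

    def simulate_difference(shape, var, rho_x, rho_y, seed=0):
        """Drive white Gaussian noise through the identified first-order recursive filter
        to produce a simulated difference field with roughly matching second-order statistics."""
        rng = np.random.default_rng(seed)
        w = rng.normal(0.0, 1.0, shape)
        f = np.zeros(shape)
        for i in range(shape[0]):
            for j in range(shape[1]):
                f[i, j] = (rho_y * (f[i - 1, j] if i else 0.0)
                           + rho_x * (f[i, j - 1] if j else 0.0)
                           + w[i, j])
        return f * np.sqrt(var / f.var())                # rescale to the estimated variance

    patch = np.random.default_rng(3).normal(0.0, 5.0, (64, 64))   # stand-in difference data
    params = ar1_from_difference(patch)
    sim = simulate_difference((64, 64), *params)
    print(params, round(sim.std(), 2))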
This paper presents a flight test methodology and performance measurement system for the evaluation of enhanced vision systems (EVS). The architecture of the performance measurement system used on a low-operating-cost Cessna 402 EVS flight test aircraft and on the DARPA Autonomous Landing Guidance Boeing 727 flight test aircraft is described. The data collection and analysis system is presented in the context of civil aviation requirements. A summary of the flight test accomplishments with the performance measurement system to date is also presented.