In this paper we describe a prototype self-aligning spatial filter (SASF). We present design studies and fabrication results obtained prior to the final processing step. The SASF consists of an electrostatically actuated platform on which an optical spatial filter (pinhole) has been fabricated. The pinhole sits at the center of a four-quadrant split-cell photodetector, which serves as the alignment gauge for the system. When a focused beam is aligned with the pinhole, all four detectors sense the same optical current. In future devices, this photodetector information will be fed back to the electrostatic actuation system to push the platform and align the beam. The electrostatic actuators are formed from the parallel walls of vertical sidewall capacitors built between the silicon bulk and the movable platform. Electrical signal paths in the integrated system use diffused interconnects, while the photodetectors are simply reverse-biased p+n diodes. Fabrication techniques are similar to surface micromachining, except that a wafer bonding step is used to create single-crystal structures.
Limits on the precision of small accelerometers for inertial measurement units are enumerated and discussed. Scaling laws and errors which affect the precision are discussed in terms of tradeoffs between size, sensitivity, and cost. Thermal noise and displacement transducer sensitivity constrain the size of an accelerometer for a given sensitivity, and error correction for inertial navigation leads to a tradeoff between cost and precision. Emphasis is placed on micromachined silicon accelerometers as a potential technology for manufacturing low cost, precision sensors, and sample calculations are given to illustrate the principles.
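The tradeoff between proof-mass size and thermal noise mentioned above can be illustrated with the standard Brownian-motion noise formula for a spring-mass accelerometer. The numbers below are illustrative assumptions, not values from the paper.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def thermal_noise_accel(mass_kg, f0_hz, q_factor, temp_k=300.0):
    """Brownian noise-equivalent acceleration, in (m/s^2)/sqrt(Hz),
    for a spring-mass accelerometer: a_n = sqrt(4*kB*T*w0 / (m*Q))."""
    w0 = 2.0 * math.pi * f0_hz
    return math.sqrt(4.0 * K_B * temp_k * w0 / (mass_kg * q_factor))

# Assumed example: 0.5 mg proof mass, 1 kHz resonance, Q = 100, room temp.
a_n = thermal_noise_accel(0.5e-6, 1e3, 100)
print(a_n / 9.81 * 1e6)  # noise floor in micro-g per sqrt(Hz), ~0.15
```

Halving the proof mass raises the noise floor by a factor of sqrt(2), which is the size-versus-sensitivity scaling the abstract refers to.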
Currently under development at SatCon Technology Co. is a proof-of-principle angular-rate-sensing gyroscope that operates with a closed-loop electrostatic suspension. The prototype design utilizes an ohmically isolated rotor disc driven by multi-phase radial torque actuation. The rotor is suspended radially and axially by fields produced from appropriately placed electrode elements. Angular rate is detected as rebalanced rotor tilt. The inclusion of shared-function electrodes for the sensors and actuators minimizes VLSI micromachining complexity. To reduce stray capacitance, sensor FET guard driver circuitry will be placed close to the VLSI structure by means of hybridized construction. Operation of key subsystems, such as the rebalance/rotation sensors and the electrostatic actuators, will be evaluated with the aid of limited-function micromachined structures before the prototype geometry is fixed. A set of these preliminary structures has recently been received after fabrication at MCNC. The structures have been examined and appear to meet our geometric and electrical specifications. Testing with electronics is underway. Results of these evaluations and the updated prototype structure designs are discussed.
Terrain-aided navigation (TAN), also referred to as terrain correlation, is a technique that has proven highly successful as a navigational aid for autonomous, unmanned guided missiles. Qualitatively speaking, the effectiveness of terrain correlation is a function of signal-to-noise (S/N) ratio. The signal is equivalent to terrain roughness, while the noise is the combination of reference map errors, radar altimeter errors, and INS altitude errors. However, it is not practical to use only a single parameter, such as S/N, to define the suitability of terrain correlation. This paper discusses the shortcomings of the conventional single-parameter approach to scene selection for the terrain contour matching (TERCOM) algorithm used in cruise missile guidance systems. A more comprehensive technique is then presented that analyzes terrain correlation suitability based on a Monte Carlo simulation technique. A figure-of-merit (FOM) for terrain correlation suitability, computed from sample statistics, is introduced, and simulation results are provided to illustrate the feasibility of using a multi-parameter FOM technique. The preliminary results indicate that the proposed approach could provide a cost-effective enhancement to the TAN-based mission planning process.
A sensor selection and design process for a small planetary rover is presented for the purpose of autonomous navigation and hazard avoidance. The selection process includes an overview of the options in methods and hardware, followed by the choice/design of sensors based on the performance requirements, vehicle constraints, and other factors. Emphasis is on the design methodology from a systems engineering perspective. Because of the need for prototype microrover demonstrations, real issues of cost and availability were strong drivers in the design. These issues, along with the platform limitations, have led to a practical sensor selection for an autonomous planetary microrover.
The National Aeronautics and Space Administration (NASA), the Federal Aviation Administration (FAA), and the United States Air Force have all expressed interest in the ability of laser radar to detect wind profiles at altitude. A number of programs are addressing the technical feasibility and utility of using laser radar atmospheric backscatter to determine wind profiles and wind hazards for aircraft guidance and navigation. This paper presents the results of ground and flight tests of a coherent 10.6-micrometer airborne laser radar for wind hazard detection. It also discusses the emerging capabilities of an airborne, all-solid-state, 2-micrometer laser radar for performing these functions in a rugged, smaller, lighter-weight, high-performance package.
This paper presents the theory of a ring laser that uses optical pumping by two plane waves differing in both frequency and direction. If the group velocity of light in the active medium equals the phase velocity of the running grating induced by the pumping rays, a unidirectional amplification wave arises, leading to ring laser generation with full suppression of the modes that run opposite to the direction of the amplification wave. As applications of the theory, we consider a device for determining the angular coordinates of a remote target and a laser gyroscope of very high sensitivity.
A new type of laser-fiber sensor has been studied for measuring and monitoring the relative shift of the surface of a large body (for example, a rocket body) subjected to strong vibration. The sensor combines a laser, an optical fiber, and a computer, and it is simple, fast, and highly accurate. Light from a semiconductor laser with a wavelength of λ = 780 nm passes through a 12-m-long multimode fiber and an objective lens, then irradiates the surface under measurement. A second objective lens collects the light reflected from the surface and images it onto a 2048-element charge-coupled device (CCD); a 286 computer then processes the signal from the CCD. The sampling frequency of the sensor is 800 Hz, its measurement accuracy is better than +/- 0.05 mm, and its measurement range for relative shift is about +/- 15 mm. The sensor is thus able to measure and monitor the relative shift of moving objects in a strongly vibrating system.
The Space Shuttle acceleration environment is characterized. It is composed of a residual, or quasi-steady, component and higher-frequency components induced by vehicle structural modes and the operation of onboard machinery. The orbiter structural modes in the 1-10 Hz range are excited by oscillatory and transient disturbances and tend to dominate the energy spectrum of the acceleration environment. A comparison of acceleration measurements from different Space Shuttle missions reveals the characteristic signature of the orbiter's structural modes overlaid with mission-specific hardware-induced disturbances and their harmonics. Transient accelerations are usually attributed to crew activity and orbiter thruster operations. Crew work and exercise tend to raise accelerations to the 10^-3 g0 (1 milli-g) level. The use of vibration isolation techniques (both active and passive systems) during crew exercise has been shown to significantly reduce acceleration magnitudes.
During the second German Spacelab Mission, D-2, extensive onboard measurements of the residual acceleration were performed. The payload was equipped with accelerometer packages distributed over the entire Spacelab module. The microgravity measurement assembly (MMA) was the core system, comprising fixed-mounted as well as mobile sensor packages. Additional autonomous accelerometer systems were mounted within the payload elements MEDEA and Werkstofflabor. Onboard video recording was performed to correlate the measured accelerations with mission events. The D-2 microgravity characterization program also included numerical calculations to predict low-frequency effects due to atmospheric drag, tidal force, and spacecraft rotation. Results from characteristically quiet mission phases show that the microgravity level is essentially below the requirements defined for the space station. Results from other mission phases revealed that much can be gained by improving payload design and operations to enhance the microgravity quality of Spacelab missions.
The navigation of autonomous aerial vehicles is achieved via an original numerical technique. The algorithm, which generalizes the Kalman filter and the probabilistic data association filter (PDAF) to states that are not necessarily Gaussian, allows a high level of ambiguity in the observations (confusions, false alarms, and missed detections). Its efficiency and low complexity are illustrated by a realistic simulation.
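For readers unfamiliar with the baseline the abstract generalizes, the sketch below is a minimal linear Kalman filter for a 1-D constant-velocity target; the PDAF extends this update step by weighting several candidate measurements. All matrices and numbers here are illustrative assumptions, not the paper's model.

```python
import numpy as np

F = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition (dt = 1)
H = np.array([[1.0, 0.0]])               # observe position only
Q = 0.01 * np.eye(2)                     # process noise covariance
R = np.array([[1.0]])                    # measurement noise covariance

def kalman_step(x, P, z):
    # Predict one step ahead.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with measurement z.
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P
    return x, P

x = np.zeros((2, 1))
P = 10.0 * np.eye(2)
for t in range(1, 11):                   # noiseless track moving at unit velocity
    x, P = kalman_step(x, P, np.array([[float(t)]]))
print(x.ravel())  # estimates converge toward position ~10, velocity ~1
```

A PDAF replaces the single `z` in the update with a probability-weighted mixture over all measurements falling in a validation gate, which is what makes it robust to the false alarms and missed detections the abstract mentions.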
In this paper we review different applications involving the use of image sensor outputs for aerial navigation and guidance. Sensors under consideration include a visible camera, an IR line scanner, and a forward-looking IR camera. The interaction between classical (image-free) navigation and image processing is emphasized. Examples and issues in image-based guidance are illustrated in areas such as aerial guidance and also space robotics.
In previous work we presented an algorithm for matching features extracted from an image with those extracted from a model, using a probabilistic relaxation method. Because the algorithm compares each possible match with all other possible matches, the main obstacle to its use on large data sets is that both the computation time and the memory usage are proportional to the square of the number of possible matches. This paper describes some improvements to the algorithm to alleviate these problems. The key sections of the algorithm are the generation, storage, and use of the compatibility coefficients. We describe three different schemes that reduce the number of these coefficients. The execution time is improved in each case, even when the number of iterations required for convergence is greater than in the unmodified algorithm. We show that the new methods also perform well, generating good matches in all cases.
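The probabilistic relaxation scheme the abstract builds on can be sketched with a toy update: match probabilities are reinforced by compatibility coefficients between pairs of candidate matches, and the quadratic cost the paper attacks comes precisely from storing those pairwise coefficients. The sizes, random coefficients, and clipping below are hypothetical illustration, not the paper's data or exact update rule.

```python
import numpy as np

rng = np.random.default_rng(0)
n_feat, n_lab = 4, 3
# p[i, l]: probability that image feature i matches model label l.
p = np.full((n_feat, n_lab), 1.0 / n_lab)            # uniform start
# r[i, l, j, m]: compatibility of match (i, l) with match (j, m).
# Storing r over all match pairs is the O(n^2) memory cost in question.
r = rng.uniform(-1.0, 1.0, (n_feat, n_lab, n_feat, n_lab))

def relax_step(p, r):
    # Support q[i, l] = mean over j of sum_m r[i, l, j, m] * p[j, m].
    q = np.einsum('iljm,jm->il', r, p) / p.shape[0]
    p_new = p * (1.0 + np.clip(q, -0.99, None))      # keep probabilities positive
    return p_new / p_new.sum(axis=1, keepdims=True)  # renormalize each feature

for _ in range(20):
    p = relax_step(p, r)
print(p.round(3))  # each row sums to 1; mass concentrates on compatible labels
```

Schemes that prune or share the coefficients in `r`, as the paper proposes, shrink both the memory footprint and the cost of the `einsum`-style support computation.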
Automated gust front detection is an important component of the airport surveillance radar with wind shear processor (ASR-9 WSP) and terminal Doppler weather radar (TDWR) systems being developed for airport terminal areas. Gust fronts produce signatures in Doppler radar imagery which are often weak, ambiguous, or conditional, making detection and continuous tracking of gust fronts challenging. A machine intelligent gust front algorithm (MIGFA) has been developed that makes use of two new techniques of knowledge-based signal processing: functional template correlation (FTC), a generalized matched filter incorporating aspects of fuzzy set theory; and the use of `interest' as a medium for pixel-level data fusion. This paper focuses on the more recently developed TDWR MIGFA, describing the signal-processing techniques used and general algorithm design. A quantitative performance analysis using data collected during recent real-time testing of the TDWR MIGFA in Orlando, Florida is also presented. Results show that MIGFA substantially outperforms the gust front detection algorithm used in current TDWR systems.
Lincoln Laboratory is developing a prototype of the Federal Aviation Administration (FAA) Integrated Terminal Weather System (ITWS) to provide improved aviation weather information in the terminal area by integrating data and products from various FAA and National Weather Service (NWS) sensors and weather information systems. The ITWS Microburst Prediction product is intended to provide an additional margin of safety for pilots in avoiding microburst wind shear hazards. The product is envisioned for use by traffic managers, supervisors, controllers, and pilots (directly via datalink). Our objective is to accurately predict the onset of microburst wind shear several minutes in advance.
The joint FAA/DoD/industry synthetic vision technology demonstration (SVTD), completed in 1993, is but one of the many efforts conducted over the past three decades to provide all-weather visibility to the pilot for landing, take-off, and airport surface operations. The data collected in that demonstration is currently being organized into a database for use by industry, academic, and government researchers. This paper summarizes the SVTD program and describes the lessons learned. While the SVTD concluded that there appear to be no insurmountable obstacles to the implementation of synthetic vision, the demonstration also underscored the need for further research and development to satisfy specific operational requirements. We discuss some of the issues not yet fully resolved and identify the data still needed and the effort required to resolve these issues. Topics addressed include operational requirements, atmospheric conditions, sensor performance, image processing, display performance, image enhancement, data fusion, and certification.
An active 35 GHz radar imaging system was developed and demonstrated as part of the synthetic vision system technology demonstration (SVSTD) program sponsored by the FAA, the USAF, and industry. During flight tests, an SVS-equipped Gulfstream 2 aircraft made over 200 approaches at 27 different airports (and one aircraft carrier) in all types of weather. The 35 GHz imaging radar demonstrated its potential by allowing the test pilots to successfully land in adverse weather conditions that would have made a visual approach impossible. An overview of the radar system implementation architecture and flight test results is provided, along with perspectives on the lessons that were learned from the SVS flight tests. One objective of the SVSTD program was to explore several known system issues concerning radar imaging technology. The program ultimately resolved some of these issues, left others open, and in fact created several new concerns. The motivation for this paper is to identify the major issues that were resolved and to provide researchers with a better understanding of the issues that remain open.
Problems associated with landing have always been of prime concern to the U.S. Air Force laboratories. Early lab work on landing monitors suggested that employing display and imaging technologies could aid in flight conditions that required ground-based signaling support. This paper recounts the evolution of a system concept to support the pilot in the most critical phases of flight. It also describes current efforts and the activities of the federal team now supporting this wide-ranging technology application.
ThermoTrex Corporation (TTC) has developed an imaging radiometer, the passive microwave camera (PMC), that uses an array of frequency-scanned antennas coupled to a multi-channel acousto-optic (Bragg cell) spectrum analyzer to form visible images of a scene through acquisition of thermal blackbody radiation in the millimeter-wave spectrum. The output of the Bragg cell is imaged by a standard video camera and passed to a computer for normalization and display at real-time frame rates. One application of this system could be its incorporation into an enhanced vision system to provide pilots with a clear view of the runway during fog and other adverse weather conditions. The unique PMC system architecture will allow compact large-aperture implementations because of its flat antenna sensor. Other potential applications include air traffic control, all-weather area surveillance, fire detection, and security. This paper describes the architecture of the TTC PMC and shows examples of images acquired with the system.
Maximum utilization of national airspace resources requires the development of systems which provide adverse-weather landing guidance and allow for continued operations in low-visibility conditions. The overall system, known as an enhanced situation awareness system (ESAS), encompasses a broad range of functions, including a forward vision system (FVS). The FVS, the part of ESAS on which this paper focuses, consists of forward-looking imaging sensors and associated processors that collectively penetrate adverse atmospheric conditions. The FVS provides a spectrum of services to the flight crew and the aircraft in general. A series of image processing techniques crucial to FVS operation have been developed and implemented at Honeywell. The techniques fall into three core categories: image enhancement, feature extraction, and object recognition and tracking. In this paper, the issues involved in each category of processing are described, the most promising algorithms are described, and preliminary results of the image processing are presented. The sensor types explored to date include visible-band TV, FLIR, and 35 GHz radar; results are shown on data from the visible-band and 35 GHz radar imaging sensors.
In this paper, the costs and technology trends associated with generating each of six candidate synthetic scene presentation format examples are compared. The evaluated formats were: (1) symbolic, using FAA Symbol Set One; (2.a) the same, but enhanced to include a ridge line; (2.b) the same, but enhanced to include more ridge line detail; (3) the same, but enhanced to include a wireframe depiction of the terrain surface; (4) a sun-angle-shaded perspective format; (5) a texture mapped format; and (6) a photo-realistic format. Cost metrics included luminance, power, computational throughput, RAM and mass storage requirements. Of the format candidates evaluated, today's cost and trend data indicate that a photo-realistic technology under development at Honeywell offers significant potential. Finally, guidelines are provided to enable the reader to assess costs for format alternatives and other costs for assumptions beyond those presented here.
A study was conducted to compare three types of enhanced vision systems (EVS) from the human pilot's perspective. The EVS images were generated on a Silicon Graphics workstation to represent: an active radar-mapping imaging system, an idealized forward-looking infrared (FLIR) sensor system, and a synthetic wireframe airport database system. The study involved six commercial airline pilots. The task was to make manual landings using a simulated head-up display superimposed on the EVS images. In addition to the image type, the sensor range was varied to examine the effect of atmospheric attenuation on landing performance. A third factor examined the effect of runway touchdown and centerline markings. The low azimuthal resolution of the radar images (0.3 degrees) appeared to have affected the lateral precision of the landings. Subjectively, the pilots were split between the idealized FLIR and wireframe images, while the radar image was judged to be significantly inferior. Runway markings provided better lateral accuracy in landing and better vertical accuracy during the approach, and were unanimously preferred by the six pilots.
A series of potential system concepts for flight crew enhanced situation awareness systems (ESAS) have been defined. The functional requirements leading to the development of the different concepts are described. The various ESAS subsystem requirements are described as well as issues regarding successful implementation of ESAS. Outstanding needs for continued research into enabling ESAS technologies are also presented.
This paper presents a low cost alternative for presenting photo-realistic imagery during the final approach, which often is a peak workload phase of flight. The method capitalizes on `a priori' information. It accesses out-the-window `snapshots' from a mass storage device, selecting the snapshots that deliver the best match for a given aircraft position and runway scene. It then warps the snapshots to align them more closely with the current viewpoint. The individual snapshots, stored as highly compressed images, are decompressed and interpolated to produce a `clear-day' video stream. The paper shows how this warping, when combined with other compression methods, saves considerable amounts of storage; compression factors from 1000 to 3000 were achieved. Thus, a CD-ROM today can store reference snapshots for thousands of different runways. Dynamic scene elements not present in the snapshot database can be inserted as separate symbolic or pictorial images. When underpinned by an appropriate suite of sensor technologies, the methods discussed indicate an all-weather virtual video camera is possible.
The 4-D approach to dynamic machine vision, exploiting full spatio-temporal models of the process to be controlled, has been applied to onboard autonomous landing approaches of aircraft. Aside from image sequence processing, for which it was developed initially, it is also used for data fusion from a range of sensors. By prediction-error feedback, an internal representation of the aircraft state relative to the runway in 3-D space and time is servo-maintained in the interpretation process, from which the required control applications are derived. The validity and efficiency of the approach have been proven both in hardware-in-the-loop simulations and in flight experiments with a twin-turboprop aircraft Do128 under perturbations from cross winds and wind gusts. The software package has been ported to `C' and onto a new transputer image processing platform; the system has been expanded for bifocal vision, with two cameras of different focal length mounted fixed relative to each other on a two-axis platform for viewing direction control.
To enhance safety and expedite aircraft traffic control at airports, the Federal Aviation Administration (FAA) is in the process of developing automation aids for controllers and pilots. These automation improvements depend on reliable surveillance of the airport traffic, in the form of computerized target reports for all aircraft. One means of surveillance of the airport is primary radar; a short-range radar of this type is called airport surface detection equipment (ASDE). Lincoln Laboratory is participating in this development program by testing a system of surveillance and automation aids at Logan International Airport in Boston, Mass. This work is sponsored by the FAA. This paper describes the radar equipment being used for surface surveillance at Logan Airport and the characteristics of the radar images it produces. Techniques for automatic tracking of this radar data are also described, along with a summary of the tracking performance that has been achieved. Two companion papers in this session relate to this same radar surveillance and provide more in-depth descriptions of the radar processing.
A synthetic vision system for enhancing the pilot's ability to navigate and control the aircraft on the ground is described. The system uses the onboard airport database and images acquired by external sensors. Additional navigation information needed by the system is provided by the Inertial Navigation System and the Global Positioning System. The various functions of the system, such as image enhancement, map generation, obstacle detection, collision avoidance, guidance, etc., are identified. The available technologies, some of which were developed at NASA, that are applicable to the aircraft ground navigation problem are noted. Example images of a truck crossing the runway while the aircraft flies close to the runway centerline are described. These images are from a sequence of images acquired during one of the several flight experiments conducted by NASA to acquire data to be used for the development and verification of the synthetic vision concepts. These experiments provide a realistic database including video and infrared images, motion states from the Inertial Navigation System and the Global Positioning System, and camera parameters.
Automation aids which increase the efficiency of the controller and enhance safety are being sought by the Federal Aviation Administration (FAA). This paper describes the target detection algorithms developed by MIT Lincoln Laboratory as part of the airport surface traffic automation (ASTA) and runway surface safety light system (RSLS) programs sponsored by the FAA, which were demonstrated at Logan International Airport in Boston, Mass. from September 1992 through December 1993. A companion paper at this conference describes the ASTA and RSLS system demonstration. Another companion paper describes the tracking algorithms. Real-time, parallel-processing implementations of these surveillance algorithms are written in C++ on a Silicon Graphics Inc. Unix multiprocessor. The heavy reliance on commercial hardware, standard operating systems, object-oriented design, and high-level computer languages allows a rapid transition from a research environment to a production environment.
MIT Lincoln Laboratory, under sponsorship of the FAA, has installed a modified Raytheon Pathfinder X-band marine radar at Logan Airport in Boston, Mass. and has developed a real-time surveillance system based on the Pathfinder's digitized output. The surveillance system provides input to a safety logic system that will ultimately activate a set of runway status lights. This paper describes the portion of the surveillance system following the initial clutter-rejecting preprocessing, described elsewhere. The overall mechanism can be simply described as a temporal constant-false-alarm-rate front end, followed by binary morphological operations and connected-components analysis feeding a scan-to-scan tracker. However, a number of refinements have been added, leading to a system which is close to being fieldable. Both the special difficulties and the current solutions are examined. The radar hardware as well as the computational environment are discussed. An overview of the clutter rejection preprocessing is given, as well as physical and processing-related challenges associated with the data. An algorithmic description of the current system is presented and its real-time implementation outlined. Performance statistics and envisioned algorithmic improvements are presented.
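The detection chain the abstract outlines — a temporal constant-false-alarm-rate threshold per cell, followed by grouping detections into blobs for a scan-to-scan tracker — can be sketched as below. This is a minimal illustration under assumed thresholds and synthetic data, not the fielded system's processing; the morphological cleanup stage is elided for brevity.

```python
import numpy as np
from collections import deque

def temporal_cfar(history, current, k=3.0):
    """Flag cells whose current return exceeds the mean of their own
    recent history by k standard deviations (per-cell temporal CFAR)."""
    mu = history.mean(axis=0)
    sigma = history.std(axis=0) + 1e-9
    return current > mu + k * sigma

def label_components(mask):
    """4-connected component labeling of a boolean image via BFS."""
    labels = np.zeros(mask.shape, dtype=int)
    n = 0
    for sy, sx in zip(*np.nonzero(mask)):
        if labels[sy, sx]:
            continue
        n += 1
        labels[sy, sx] = n
        queue = deque([(sy, sx)])
        while queue:
            y, x = queue.popleft()
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and not labels[ny, nx]):
                    labels[ny, nx] = n
                    queue.append((ny, nx))
    return labels, n

# Synthetic example: flat clutter history plus one bright 2x2 target.
rng = np.random.default_rng(1)
history = rng.normal(10.0, 1.0, (20, 32, 32))   # 20 prior scans
current = rng.normal(10.0, 1.0, (32, 32))       # latest scan
current[5:7, 8:10] += 25.0                      # injected target
mask = temporal_cfar(history, current)
labels, n_blobs = label_components(mask)
print(n_blobs)  # the injected target appears as one 4-connected blob
```

Each labeled blob becomes a candidate target report; the scan-to-scan tracker then associates blobs across successive scans.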
Fabrication processes for microactuator construction must provide the capability to achieve 3-D geometries with a large material base while benefiting from the low-cost economies of scale of batch processing. These attributes are provided in part by the basic LIGA process, originating in Germany, which can form high-aspect-ratio metal components up to 500 micrometers in thickness with very low vertical run-out and is compatible with microelectronic processing. The process has been extended to allow geometries with thicknesses up to 1 cm via a low-strain preformed photoresist sheet and solvent bonding, with x-ray exposure via the 2.5 GeV National Synchrotron Light Source (NSLS). Such results enable batch fabrication of parts suitable for larger precision-engineered actuators and mechanisms. To demonstrate the extended process capabilities, a magnetic micromotor has been constructed using electroplated permalloy and assembled LIGA-defined components. The low inertia of the small rotors is demonstrated by a stepping micromotor with a 150-micrometer-diameter rotor, which achieved maximum rotational speeds over 150,000 rpm.
The Salyut Space Stations and their successor, the MIR Space Station, have provided microgravity environments for researchers to utilize. With recent political changes in the former Soviet Union and the new openness of the Russian aerospace community, increased opportunities for access to MIR as a permanent orbital research platform are available. Recently acquired data concerning the MIR microgravity environment can be compared to similar microgravity data acquired on the U.S. Space Shuttle and Spacelab.