This paper presents a brief description of two sensor development programs at the Fluid Mechanics Laboratory, NASA Ames Research Center, one in progress and the other being initiated. The ongoing program involves digital image velocimetry for velocity field measurements of time-dependent flows. The new program involves advanced acoustic sensors for wind tunnel applications.
The "Firefly" project is developing an infrared remote sensing system to provide near real-time wildland fire information for fire management and suppression. Recent technological advances in several areas now allow the design of an end-to-end infrared system to map and detect wildland fires. The system components will include an airborne infrared sensor, automatic onboard signal and data processing, a telecommunications link, and integration into a ground data terminal. The system will provide improved performance over current systems in terms of increased timeliness of data delivery, quantifiable accuracy, data consistency, reliability, and maintainability. The system will be the next generation of wildland fire mapping and detection systems for the United States Forest Service.
A PC-based simulation model has been developed for testing the feature matching and target extraction operations of low-altitude radar and laser targeting systems. The model includes modules for creating topographic maps with selected roughness and detail and for translating the map data into the angle- and range-channel data of the targeting system's sensor. System modules allow the representation of noise in the data channels of the sensor system. Other modules apply feature and edge extraction algorithms to the simulated channel data. A second system channel is present that can represent a reference channel of external map data for correlation processing or for consecutive estimates of the position of the sensor platform.
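The map-to-channel translation described above can be sketched as follows; the flat-earth geometry, 1-D terrain profile, and additive Gaussian range noise are illustrative assumptions, not details taken from the paper.

```python
import math
import random

def terrain_to_channels(heights, dx, sensor_alt, noise_sigma=0.0, seed=0):
    """Translate a 1-D topographic profile into the range- and angle-channel
    data a forward-looking sensor would report.

    heights     : terrain height samples (m) at spacing dx (m) ahead of the sensor
    sensor_alt  : sensor altitude above the terrain datum (m)
    noise_sigma : std. dev. of additive Gaussian range noise (m), standing in
                  for the model's noise modules
    """
    rng = random.Random(seed)
    ranges, angles = [], []
    for i, z in enumerate(heights):
        d = (i + 1) * dx                 # horizontal distance to this sample
        dz = sensor_alt - z              # height of sensor above the sample
        r = math.hypot(d, dz)            # slant range
        ranges.append(r + rng.gauss(0.0, noise_sigma))
        angles.append(math.atan2(dz, d)) # depression angle (rad)
    return ranges, angles

ranges, angles = terrain_to_channels([0.0, 10.0, 5.0], dx=100.0, sensor_alt=300.0)
```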
Fiber optic sensor and communication technology has created a need for generating electric power at remote locations where conventional methods cannot be used. A fiber optic monolithic array of optical power converter cells was developed to meet this need. Optical power and information can be transported over optical fiber at different wavelengths; a wavelength division multiplexer separates the optical power from the optical information. The optical power is converted to electrical power with a conversion efficiency greater than 25%. The monolithic array consists of twelve individual photodiodes electrically connected in series, which allows the converter to have an output voltage in excess of eight volts. The diode array is grown by liquid phase epitaxy on semi-insulating InP. A buffer layer of n+ InP is grown first, followed by an undoped GaInAsP active layer; on top of this layer a p-doped InP layer is grown to form the p-n junction. A selective chemical etching process is used to produce the monolithic diode array, which consists of twelve electrically isolated diodes contained within a circular area 2.5 millimeters in diameter. An anti-reflective coating is used to enhance the responsivity of the array. The monolithic diode chip is assembled in a standard TO-5 package and hermetically sealed with a flat-window cap. The optical fiber is pigtailed to the window cap, and a metal sleeve is inserted over the pigtail area to provide a rugged assembly. The power converter described here is optimized to perform at a wavelength of 1.06 micrometers, where high-power Nd:YAG lasers are readily available and can be coupled effectively to the optical fiber.
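The stated figures (greater than 25% conversion efficiency, twelve series diodes, output above eight volts) imply the following back-of-envelope electrical budget; the ~0.7 V per-diode operating point is an assumption chosen only to be consistent with the stated total, not a number from the paper.

```python
def converter_output(optical_power_w, efficiency=0.25, n_diodes=12, v_diode=0.7):
    """Rough electrical budget for the series photodiode array.

    The 25% efficiency and twelve series diodes come from the abstract;
    the 0.7 V per-diode operating voltage is an illustrative assumption.
    """
    p_elec = optical_power_w * efficiency  # electrical output power (W)
    v_out = n_diodes * v_diode             # series-connected voltages add
    i_out = p_elec / v_out                 # current available at that voltage (A)
    return p_elec, v_out, i_out

# 100 mW of 1.06 um laser light delivered over the fiber
p, v, i = converter_output(0.1)
```

With these assumptions the converter delivers about 25 mW at roughly 8.4 V, i.e. a few milliamps, which is enough for low-power remote sensor electronics.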
An Nd:YAG laser altimeter at 1064 nm for use with low-earth-orbiting spacecraft has been designed. The altimeter has a 17 mJ pulse energy with a 7 ns pulse width and is nadir-pointing with a beam divergence of 2 mrad. The radiation backscattered from the ocean surface, soil/vegetation, and ice is received by a 305 mm diameter f/3.0 Cassegrain telescope with an avalanche photodiode receiver. The elapsed time between the transmitted pulse and the received pulse is converted into the altitude of the spacecraft. Considerations of atmospheric attenuation due to cirrus clouds and aerosols/gases are presented for best- and worst-case conditions. The backscatter coefficients for various cases are also considered and discussed, and the signal-to-noise ratios are calculated for all these conditions. The system can give a height resolution of 5 meters at 400 km altitude. These data can be used in precision image processing of remotely sensed data and in improvement of the orbital and attitude parameters of the satellite.
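The time-to-altitude conversion the abstract mentions is the standard pulsed time-of-flight relation; the sketch below uses it to show that the quoted 5 m height resolution corresponds to roughly 33 ns of timing resolution (atmospheric path delay ignored).

```python
C = 299_792_458.0  # speed of light in vacuum (m/s)

def altitude_from_tof(elapsed_s):
    """Convert the pulse round-trip time to altitude: the pulse travels
    down and back, so range is c * t / 2."""
    return C * elapsed_s / 2.0

def timing_resolution_for(height_res_m):
    """Timing resolution required to achieve a given height resolution."""
    return 2.0 * height_res_m / C

h = altitude_from_tof(2.668e-3)    # a ~2.67 ms round trip, i.e. ~400 km
dt = timing_resolution_for(5.0)    # ~33 ns for 5 m resolution
```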
A multiple sensor approach to automatically detecting targets viewed by a Forward-Looking Infrared (FLIR) sensor and a range sensor is described. The system developed used sensor-dependent processing to segment possible targets in the images, measure features for segmented regions, and analyze the single-sensor feature information. The post-segmentation target detection problem was that of separating segmented targets from segmented non-targets. Segmented regions in both images were geometrically registered, and a novel multiple sensor feature, called the correspondence feature, was measured. The correspondence feature exploited the observation that targets occupy the same space in both types of image, while segmented non-target regions do not tend to behave in this manner. The detection problem was modeled as a two-class decision problem where the classes were target and non-target, and the Bayesian minimum error criterion was used as the class estimation rule. Two single sensor and three multiple sensor approaches to target detection were developed, and their performance was compared. Performance improvements are described which resulted from incorporating the correspondence feature information into the class estimation process. Results were tabulated for performance on a database of real, corresponding FLIR and range images composed of 97 FLIR and 57 range images.
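The Bayesian minimum-error rule named above reduces to choosing the class with the larger likelihood-weighted prior. A minimal sketch on a scalar feature, with purely illustrative Gaussian class distributions and priors (not the paper's), might look like:

```python
import math

def gauss_pdf(x, mu, sigma):
    """Univariate Gaussian density."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def classify(x, prior_t=0.3, mu_t=0.8, sig_t=0.15, mu_n=0.3, sig_n=0.2):
    """Bayesian minimum-error rule: pick the class whose posterior
    (likelihood times prior) is larger. All distribution parameters
    here are illustrative assumptions."""
    score_t = gauss_pdf(x, mu_t, sig_t) * prior_t
    score_n = gauss_pdf(x, mu_n, sig_n) * (1.0 - prior_t)
    return "target" if score_t > score_n else "non-target"
```

In the paper's setting, x would be a measured feature such as the correspondence feature, with class densities estimated from training data.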
In this paper, the potential benefits in applying sensor fusion to object classification are discussed. A specific example is presented that involves the fusion of multiple band IR and visible light data collected from co-located sensors. Pattern vectors describing the objects were based on features extracted from the simulated target signatures observed within the sensor wavebands individually and also by 'fusing' the multispectral data. The pattern vectors were then subjected to feature analysis using a variety of statistical pattern recognition techniques to determine the relative contribution of each feature to classification performance. Features selected through this process were then used in subsequent classification algorithms which established class boundaries, classified the objects, determined confidence levels, and calculated error probabilities. A neural network paradigm was also applied to the same data set to determine the relative merit of the features and to classify the objects. In particular, a competitive learning algorithm was used. Analysis methods and performance comparisons are presented.
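The competitive learning algorithm mentioned above can be sketched as a winner-take-all update in which the closest unit moves toward the input; the two-unit, 2-D feature space and learning rate below are illustrative assumptions, not the paper's configuration.

```python
def competitive_update(weights, x, lr=0.1):
    """One winner-take-all competitive-learning step: the unit whose
    weight vector is closest (squared Euclidean distance) to the input
    wins and moves toward the input by a fraction lr."""
    dists = [sum((wi - xi) ** 2 for wi, xi in zip(w, x)) for w in weights]
    win = dists.index(min(dists))
    weights[win] = [wi + lr * (xi - wi) for wi, xi in zip(weights[win], x)]
    return win

# two prototype units in a 2-D (e.g. fused IR/visible) feature space
W = [[0.0, 0.0], [1.0, 1.0]]
winner = competitive_update(W, [0.9, 0.8])
```

Repeated over a training set, the prototypes migrate to cluster centers, which can then serve as class representatives.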
Multi-sensor fusion, at the most basic level, can be cast into a concise, elegant model. Reality demands, however, that this model be modified and augmented. These modifications often result in software systems that are confusing in function and difficult to debug. This problem can be ameliorated by adopting an object-oriented, data-flow programming style. For real-time applications, this approach simplifies data communications and storage management. The concept of object-oriented, data-flow programming is conveniently embodied in the blackboard style of software architecture. Blackboard systems allow diverse programs access to a central database. When the blackboard is described as an object, it can be distributed over multiple processors for real-time applications. Choosing the appropriate parallel architecture is the subject of ongoing research. A prototype blackboard has been constructed to fuse optical image regions and Doppler radar events. The system maintains tracks of simulated targets in real time. The results of this simulation have been used to direct further research on real-time blackboard systems.
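The blackboard-as-object idea can be sketched minimally as follows; the level names and entry fields are invented for illustration and are not the prototype's actual interface.

```python
class Blackboard:
    """Minimal blackboard object: knowledge sources post and read entries
    on a shared central store, organized by level."""

    def __init__(self):
        self._levels = {}

    def post(self, level, entry):
        self._levels.setdefault(level, []).append(entry)

    def read(self, level):
        return list(self._levels.get(level, []))

bb = Blackboard()
# sensor-side knowledge sources post their events
bb.post("optical-regions", {"id": 1, "bbox": (10, 20, 40, 60)})
bb.post("doppler-events", {"id": 7, "range_rate": -12.5})
# a fusion knowledge source reads both levels and posts a track hypothesis
if bb.read("optical-regions") and bb.read("doppler-events"):
    bb.post("tracks", {"track_id": 1, "sources": (1, 7)})
```

Because all communication goes through the blackboard object, its methods are a natural boundary at which to distribute the store across processors.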
This paper presents an application of Distributed Artificial Intelligence (DAI) tools to the data fusion and classification problem. Our approach is to use a blackboard for information management and hypothesis formulation. The blackboard is used by the knowledge sources (KSs) for sharing information and posting their hypotheses, just as experts sitting around a round table would do. The present simulation identifies an aircraft (AC) by its features and classifies it into disjoint sets (object classes) comprising five commercial aircraft: the Boeing 747, Boeing 707, DC-10, Concorde, and Boeing 727. A situation database is characterized by experimental data, provided by the Ohio State University ElectroScience Laboratory, available from the three levels of expert reasoning. To validate the architecture presented, we employ two KSs modeling the sensors: the aspect-angle polarization feature and the ellipticity data. The system has been implemented on a Symbolics 3645, under Genera 7.1, in Common LISP.
This paper focuses on artificial intelligence applications to command, control, and communications systems and subsystems. The overall objective of this project is to investigate sources of airborne target identification information available from various equipment (airborne, land-based, or space-based) and to develop an automatic target recognition (ATR) system design for integrating the data from these target identification subsystems. The project is divided into two phases, Phase I and Phase II; this paper details the results derived from the Phase I study.
A generic mission for an autonomous fire-and-forget brilliant munition is presented and used to identify the functions that an embedded signal processor must perform. Based on these functions and other operational factors (such as weather, larger search areas, lower false alarm rates, and munition maneuverability), the processing loads in bits/second, messages/second, operations/second, and instructions/second are derived. The paper concludes with an evaluation of general implementation issues, such as the requirements for data fusion, distributed and parallel processing architectures, trusted software, and low-cost hardware.
The performance of estimators for nonlinear dynamical systems during a sensor failure can be improved by fusing multiple sensors. When linear sensor models are used, the effect of feeding back a priori estimates obtained from other sensors is not important. However, if nonlinear sensor models are required, significant performance enhancement can be obtained by using feedback through a central processor. In some cases, simple modifications of the linear estimation schemes can be used even for highly nonlinear sensors. "Outer logic" can be used to detect sensor failures and modify the estimation algorithms accordingly. Fusion equations and Monte Carlo simulation runs of nonlinear radar models fused with infrared models demonstrate the effectiveness of "outer logic" design for sensor failures. Results show that the tracking accommodations provided by good "outer logic" design can improve the performance of multi-sensor suites applied to nonlinear dynamical systems.
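One simple form of "outer logic" is a residual gate around a robust reference estimate: sensors whose measurements sit far outside their expected scatter are declared failed and dropped from the fusion. The median reference, inverse-variance fusion, and 3-sigma gate below are an illustrative stand-in, not the paper's design.

```python
import statistics

def fuse_with_outer_logic(measurements, sigmas, gate=3.0):
    """Inverse-variance fusion with a simple outer-logic check: sensors
    whose residual from the robust (median) reference exceeds
    gate * sigma are declared failed and excluded from the fusion."""
    ref = statistics.median(measurements)
    ok = [abs(m - ref) <= gate * s for m, s in zip(measurements, sigmas)]
    w = [1.0 / s ** 2 for s, o in zip(sigmas, ok) if o]
    m = [mi for mi, o in zip(measurements, ok) if o]
    est = sum(wi * mi for wi, mi in zip(w, m)) / sum(w)
    return est, ok

# radar and IR agree near 10; the third sensor has failed hard-over
est, ok = fuse_with_outer_logic([10.2, 10.0, 25.0], [0.5, 0.3, 0.5])
```

The failed sensor is flagged and excluded, so the fused estimate stays near the consistent pair instead of being dragged toward 25.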
The increase in target signature information inherent in multi-spectral techniques can be used to increase tracker knowledge of current and predicted target states. In addition to the target location, the velocity, acceleration states, and target aspect angle, for example, can be measured. Aspect angle can be used to limit the possible future states of the target (aircraft, for example, have maximum maneuverability along an axis perpendicular to the wing plane). Aspect angle, coupled with information concerning maximum target acceleration capabilities, can be used to provide an 'envelope' of possible future target locations. This envelope and the position prediction are updated with each observation of the target status. A target pattern recognition algorithm has been used to classify target images, estimating both object identification and aspect angle. A modified form of the moment invariants technique, combined with a maximum likelihood classifier, has been developed and tested against imagery acquired with a laboratory image acquisition and analysis system. Simulations of combined motion and aspect angle sensing trackers indicated enhanced performance over systems with kinematic information only. A database of target imagery has been created, and algorithm accuracy as a function of signal-to-noise ratio has been determined. The effects of several pre-processing steps on the electro-optical sensor aspect angle measurement have also been studied. Critical invariance properties have been examined in a limited number of tests.
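The moment-invariants idea can be sketched via the first two classical Hu invariants, which are unchanged by translation, scale, and rotation of the image; the paper's modified form is not reproduced here.

```python
def hu_invariants(img):
    """First two Hu moment invariants of a grayscale image given as a
    list of rows. They are built from scale-normalized central moments
    and are invariant to translation, scale, and rotation."""
    # raw geometric moments
    def m(p, q):
        return sum(v * (x ** p) * (y ** q)
                   for y, row in enumerate(img)
                   for x, v in enumerate(row))
    m00 = m(0, 0)
    xc, yc = m(1, 0) / m00, m(0, 1) / m00   # centroid
    # central moments (translation invariant)
    def mu(p, q):
        return sum(v * ((x - xc) ** p) * ((y - yc) ** q)
                   for y, row in enumerate(img)
                   for x, v in enumerate(row))
    # scale-normalized central moments
    def eta(p, q):
        return mu(p, q) / m00 ** (1 + (p + q) / 2.0)
    h1 = eta(2, 0) + eta(0, 2)
    h2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4.0 * eta(1, 1) ** 2
    return h1, h2

h1, h2 = hu_invariants([[1, 1], [1, 1]])  # uniform square
```

Feeding such invariants into a maximum likelihood classifier is the overall pattern the abstract describes; the paper's own feature set and classifier details differ.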
The paper is concerned with locating a time-varying set of entities in a fixed field when the entities are sensed at discrete time instants. At a given time instant a collection of bivariate Gaussian sensor reports is produced, and these reports estimate the locations of a subset of the entities present in the field. A database of reports is maintained, which ideally should contain one report for each entity sensed. Whenever a collection of sensor reports is received, the database must be updated to reflect the new information. This updating requires association processing between the database reports and the new sensor reports to determine which pairs of sensor and database reports correspond to the same entity. Algorithms for performing this association processing are presented, along with a neural network implementation of the algorithms and simulation results comparing the approaches.
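A minimal version of the association step is chi-square-gated nearest-neighbor matching on the Mahalanobis distance between report pairs; the greedy pairing, diagonal covariances, and 99% gate below are illustrative choices, not the paper's algorithms.

```python
def mahalanobis2(report, track):
    """Squared Mahalanobis distance between a new bivariate Gaussian
    sensor report and a database report, using their combined
    covariance (diagonal covariances assumed for brevity)."""
    dx = report["x"] - track["x"]
    dy = report["y"] - track["y"]
    sx2 = report["sx"] ** 2 + track["sx"] ** 2
    sy2 = report["sy"] ** 2 + track["sy"] ** 2
    return dx * dx / sx2 + dy * dy / sy2

def associate(new_reports, database, gate=9.21):
    """Greedy nearest-neighbor association: pair each new report with
    the closest unused database report inside the chi-square gate
    (9.21 is roughly the 99% point for 2 degrees of freedom).
    A pairing of None means the report starts a new database entry."""
    pairs, used = [], set()
    for i, r in enumerate(new_reports):
        best, best_d = None, gate
        for j, t in enumerate(database):
            if j in used:
                continue
            d = mahalanobis2(r, t)
            if d < best_d:
                best, best_d = j, d
        if best is not None:
            used.add(best)
        pairs.append((i, best))
    return pairs

db = [{"x": 0.0, "y": 0.0, "sx": 1.0, "sy": 1.0}]
new = [{"x": 0.5, "y": 0.5, "sx": 1.0, "sy": 1.0},
       {"x": 10.0, "y": 10.0, "sx": 1.0, "sy": 1.0}]
pairs = associate(new, db)
```

The nearby report associates with the existing database entry, while the distant one falls outside the gate and would be entered as a new entity.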
A fundamental stumbling block - defining a new set of extremely powerful and flexible building blocks with which to build neurocomputers - has recently been removed by Oxford Computer. The result is a family of digital, memory-plus-processor chips, or "Intelligent Memory Chips". These chips combine a high-capacity memory with massively parallel, slice-type processor logic. Unlike common memory chips that only store information, Intelligent Memory Chips perform intensive computations upon the matrices they store. As a result, neural networks with fully programmable, signed synaptic weights can be built. The weights are modified as easily, precisely, and stably as writing data into ordinary memory chips. Many forms of matrix-vector multipliers, 1- and 2-dimensional convolvers, and Fast Fourier and other transformers can be built as well to implement classical digital signal processing, pattern recognition, adaptive control, and 3-dimensional graphics structures. Multiple Intelligent Memory Chips work together to provide the precision, matrix size, and performance desired. Extremely large numbers of densely interconnected artificial neurons in many layers can be provided, and networks easily interface to existing, non-neural machines. Networks with performance ranging from tens of billions to tens of trillions of operations per second can be built using current to near-term semiconductor technology. Initial chips are being built using 1-micron silicon CMOS and static RAM technology. The impacts of alternative memory technologies and improvements in memory and fabrication technology are also discussed.
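The matrix-vector multiply at the heart of such chips is the basic neural-network layer operation: signed, programmable synaptic weights times an input vector, followed by a nonlinearity. A toy software sketch (the weights, inputs, and threshold nonlinearity are illustrative):

```python
def layer(weights, x):
    """One neural layer as a matrix-vector product with signed synaptic
    weights, followed by a simple threshold nonlinearity."""
    y = [sum(w * xi for w, xi in zip(row, x)) for row in weights]
    return [1 if v > 0 else 0 for v in y]

out = layer([[2, -1], [-3, 1]], [1, 1])
```

On the chips described, the weight matrix would reside in the on-chip memory and the products would be computed by the slice processors in parallel.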
An approach is presented for mapping a multisensor feature space into a space that is well-ordered for vision tasks. A new statistic, the tie statistic (TS), is introduced for measuring the difference between two probability density functions (pdfs). The TS is related to the Kolmogorov-Smirnov statistic (KS) to demonstrate its ability to decide whether or not a sample came from a known pdf. The TS is used to map the measured feature space into a simplified decision space. In the mapping process, the tie statistic is itself a random variable whose distribution can be parametrically approximated by the Beta distribution. The tie mapping process is presented and applied to solve two important vision problems.
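The tie statistic itself is the paper's contribution and is not reproduced here; for reference, the related classical two-sample Kolmogorov-Smirnov statistic it is compared against is the maximum gap between the two empirical CDFs:

```python
def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the largest absolute
    difference between the empirical CDFs of samples a and b."""
    a, b = sorted(a), sorted(b)
    xs = sorted(set(a) | set(b))
    def ecdf(s, x):
        # fraction of samples in s that are <= x
        return sum(1 for v in s if v <= x) / len(s)
    return max(abs(ecdf(a, x) - ecdf(b, x)) for x in xs)

d = ks_statistic([1, 2, 3, 4], [3, 4, 5, 6])
```

Like the tie statistic, this gives a single number measuring how far apart two sample distributions are, which can then drive a decision rule.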
A major problem with MultiSensor Information Fusion (MSIF) is establishing the level of processing at which information should be fused. Current methodologies, whether based on fusion at the data element, segment/feature, or symbolic levels, are each inadequate for robust MSIF. Data-element fusion has problems with coregistration. Attempts to fuse information using the features of segmented data rely on a presumed similarity between the segmentation characteristics of each data stream. Symbolic-level fusion requires too much advance processing (including object identification) to be useful. MSIF systems need to operate in real time, must perform fusion using a variety of sensor types, and should be effective across a wide range of operating conditions or deployment environments. We address this problem by developing a new representation level which facilitates matching and information fusion. The Hierarchical Data Structure (HDS) representation, created using a multilayer, cooperative/competitive neural network, meets this need. The HDS is an intermediate representation between the raw or smoothed data stream and symbolic interpretation of the data. It represents the structural organization of the data. Fused HDSs will incorporate information from multiple sensors. Their knowledge-rich structure aids top-down scene interpretation via both model matching and knowledge-based region interpretation.
Advances in our sensor-fusion capabilities are often limited by the lack of appropriate data of scenes and objects acquired by selected sensors. This limitation is severe for sensor-fusion techniques that view objects in space for which observations are costly and require long lead times to plan. An alternative to making the observations is to acquire the data from test results already resident in various collections. This paper reviews the requirements for optical and radar data sets to advance sensor-fusion performance, and shows that these data sets can be identified using existing or developing automated catalogs. Examples are described in which disparate data are brought together to support the development of unique sensor-fusion techniques at a fraction of the cost and time required for observations dedicated to specific sensor-fusion schemes.