Next-generation tactical missions cannot rely on threat information or countermeasure actions that depend solely on one sector of the electromagnetic spectrum. Rather, they must rely on an integrated, multisensor approach to "engagement information" extraction. Current programs such as Pave Pillar identify fusion as the key to the required positive and unambiguous situation and target assessment. An overview of the fusion problem is presented, including a conceptual description and functional flow.
Merging information available from multisensor views of a scene is a useful approach to target detection and classification. Development of multisensor information fusion techniques using a database of real imagery from an absolute-range laser radar and a corresponding forward-looking infrared (FLIR) sensor is underway. Our conceptual approach to multisensor target detection and classification uses sensor-dependent segmentation and feature extraction. Information is fused first at the detection level and then within the classifier. We hypothesize that an approach to information fusion based on the mathematical theory of evidence (i.e., evidential reasoning) is a useful method for multisensor object classification. In this paper we summarize an approach to a multisensor object classification system, discuss results of a multisensor segmentation algorithm, and present an evidential reasoning-based approach to a multisensor classifier.
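As a minimal illustration of the evidential reasoning the abstract refers to, the following sketch implements Dempster's rule of combination for two sensors' basic probability assignments; the hypothesis labels and mass values are illustrative, not taken from the paper's data base.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts mapping frozenset -> mass)
    with Dempster's rule; mass on the empty set is renormalized away."""
    combined = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb  # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict; sources cannot be combined")
    norm = 1.0 - conflict
    return {k: v / norm for k, v in combined.items()}

# Illustrative FLIR vs. laser-radar evidence over the frame {tank, truck}
flir = {frozenset({"tank"}): 0.6, frozenset({"tank", "truck"}): 0.4}
ladar = {frozenset({"tank"}): 0.5, frozenset({"truck"}): 0.2,
         frozenset({"tank", "truck"}): 0.3}
fused = dempster_combine(flir, ladar)
```

Note that the combined masses again sum to one, and singleton hypotheses supported by both sensors accumulate belief.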
In this paper a new approach to the detection and classification of tactical targets using a multifunction laser radar sensor is developed. Targets of interest are tanks, jeeps, trucks, and other vehicles. Doppler images are segmented using a new technique that compensates for spurious Doppler returns. Relative-range images are segmented using an approach based on range gradients. The resultant shapes in the segmented images are then classified using Zernike moment invariants as shape descriptors. Two classification decision rules are implemented: a classical statistical nearest-neighbor approach and a multilayer perceptron architecture. The Doppler segmentation algorithm was applied to a set of 180 real sensor images, and an accurate segmentation was obtained for 89 percent of them. The new Doppler segmentation proved to be a robust method, and the moment invariants were effective in discriminating the tactical targets. Tanks were classified correctly 86 percent of the time. The most important result of this research is the demonstration of a new information processing architecture for image processing applications.
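The nearest-neighbor decision rule mentioned above can be sketched as follows; the two-component feature vectors here are made-up stand-ins for Zernike moment invariants, whose computation is omitted.

```python
import math

def nearest_neighbor(train, query):
    """train: list of (feature_vector, label) pairs.
    Returns the label of the training vector closest to `query`."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(train, key=lambda t: dist(t[0], query))[1]

# Hypothetical moment-invariant prototypes for three vehicle classes
train = [((0.9, 0.1), "tank"), ((0.2, 0.8), "truck"), ((0.5, 0.5), "jeep")]
label = nearest_neighbor(train, (0.85, 0.2))  # closest to the "tank" prototype
```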
Multisensor fusion integrates several numerical and spatial sensory data sources to obtain more complete spatial information about the environment. The first step is to gather 3-D spatial information about the environment by partial surface reconstruction from each sensor's image data. The second step is to integrate the different 3-D spatial information derived from the image data, treated as partial and incomplete, using combination techniques.
The problem of decision fusion in a distributed sensor system is considered. In the parallel configuration studied, sensors monitor a common geographical volume and relay their decisions to a fusion center, which is responsible for fusing them into the final decision. Under the conditional independence assumption, it is shown that the optimal test maximizing the probability of detection for a fixed probability of false alarm consists of a Neyman-Pearson test at the fusion center and likelihood-ratio tests at the sensors. Numerical evaluation of the optimal operating points is computationally intensive, so two computationally efficient suboptimal algorithms have been developed. Numerical results from extensive simulation in Rayleigh and Gaussian channels are presented.
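Under conditional independence, the fused statistic over the local binary decisions takes the familiar weighted log-likelihood-ratio form sketched below; the local operating points and threshold are made-up values, not those of the paper.

```python
import math

def fuse_decisions(decisions, pd, pf, threshold):
    """Log-likelihood-ratio fusion of local binary decisions under
    conditional independence: each sensor's vote is weighted by its
    detection probability pd[i] and false-alarm probability pf[i]."""
    llr = 0.0
    for u, d, f in zip(decisions, pd, pf):
        if u:
            llr += math.log(d / f)          # sensor declared "target"
        else:
            llr += math.log((1 - d) / (1 - f))  # sensor declared "no target"
    return int(llr > threshold)

# Two hypothetical sensors: strong (0.9/0.1) and weaker (0.8/0.2)
u0 = fuse_decisions([1, 0], pd=[0.9, 0.8], pf=[0.1, 0.2], threshold=0.0)
```

The threshold plays the role of the Neyman-Pearson constraint at the fusion center: raising it lowers both the system false-alarm rate and the detection probability.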
The problem of distributed detection with consulting sensors is formulated and solved when there is a communication cost associated with any exchange of information (consultation) between the sensors. We consider a system of two sensors, S1 and S2, in which S1 is the primary sensor responsible for the final decision u0, while S2 is a consulting sensor capable of relaying its decision u2 to S1 when requested by S1. In the scenario considered, the final decision u0 is based either on the raw data available to S1 only or, under certain request conditions, also on the decision u2 of sensor S2. Random and non-random request schemes have been analyzed, and numerical results for both are presented for a slowly fading Rayleigh channel.
Issues concerning the effective integration of multiple sensors into the operation of intelligent systems are presented, and a description of some of the general paradigms and methodologies that address this problem is given. Multisensor integration, and the related notion of multisensor fusion, are defined and distinguished. The potential advantages and problems resulting from the integration of information from multiple sensors are discussed.
This paper presents a survey of the issues and methods for the association and fusion of data from multiple sensors. It will cover three broad areas. The first, data association, refers to the problem of partitioning the sensor data into tracks according to source. We next discuss the techniques used for kinematic and attribute estimation after data association has been performed. In particular, we will discuss the philosophies associated with alternative techniques for attribute and target type estimation, with an emphasis upon the Dempster-Shafer approach. The third major topic discussed is sensor allocation.
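As a sketch of the data-association step the survey describes, the following gates each measurement against predicted tracks by squared Mahalanobis distance before assigning it to the nearest track; a diagonal innovation covariance is assumed for simplicity, and the track values are illustrative.

```python
def mahalanobis2(z, pred, var):
    """Squared Mahalanobis distance with a diagonal innovation covariance."""
    return sum((zi - pi) ** 2 / vi for zi, pi, vi in zip(z, pred, var))

def associate(z, tracks, gate=9.21):
    """tracks: list of (track_id, predicted_position, diagonal_variance).
    Returns the id of the nearest track whose distance falls inside the
    chi-square gate (9.21 ~ 99% for 2 degrees of freedom), else None."""
    best = min(tracks, key=lambda t: mahalanobis2(z, t[1], t[2]))
    return best[0] if mahalanobis2(z, best[1], best[2]) <= gate else None

tracks = [("T1", (0.0, 0.0), (1.0, 1.0)),
          ("T2", (10.0, 10.0), (1.0, 1.0))]
hit = associate((0.5, 0.4), tracks)   # falls inside T1's gate
```

Measurements that gate with no track would typically initiate a new track; that bookkeeping is omitted here.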
An approach is presented for designing multisensor electronic vision systems using information fusion concepts. A random process model of the multisensor scene environment provides a mathematical foundation for fusing information. A complexity metric is introduced to measure the level of difficulty associated with various vision tasks. This complexity metric provides a mathematical basis for fusing information and selecting features to minimize the complexity metric. A major result presented in the paper is a method for utilizing a priori knowledge to fuse an n-dimensional feature vector X = (X1, X2, ..., Xn) into a single feature Y while retaining the same complexity. A fusing theorem is presented that defines the class of fusing functions that retains the minimum complexity.
An alternate multimode sensor fusion scheme is treated. The concept is designed to acquire and engage high-value relocatable targets in a lock-on-after-launch sequence. The approach uses statistical decision concepts to determine the authority assigned to each mode in the acquisition-sequence voting and decision process. Statistical target classification and recognition in the engagement sequence is accomplished through variable-length feature vectors set by adaptive logic. The approach uses multiple decision spaces for acquisition and classification; the number of spaces selected is adaptively weighted and adjusted. The scheme uses the type of climate (arctic, temperate, desert, or equatorial), diurnal effects (time of day), the type of background, the type of countermeasures present (signature suppression or obscuration, false-target decoys, or electronic warfare), and other factors to make these selections. The approach is discussed in simple terms, and voids and deficiencies in the statistical database used to train such algorithms are discussed. The approach is being developed to engage deep-battle targets such as surface-to-surface missile systems, air defense units, and self-propelled artillery.
The Distributed Sensor Architecture (DSA) has been developed to couple knowledge-based processing with integrated sensors technology to provide coherent and efficient treatment of information generated by multiple sensors. In this architecture multiple smart sensors are serviced by a knowledge-based sensor supervisor to process sensor-related data as an integrated sensor group. Multiple sensor groups can be combined to form a reconfigurable, fault tolerant sensor fusion framework. The role and topology of this architecture are discussed. An example application of DSA sensor data fusion is presented.
The spectral analyzer and direction indicator (SADI) system is an electro-optic sensor that can determine both the spectral content and direction of the source of a single pulse of radiated energy. In its simplest form, it can determine the direction of the source in one dimension. Directional information in two dimensions requires the addition of some electronics and, in some configurations, optics. Various degrees of electronic processing sophistication may yield the pulse shape, and its spectral and spatial distribution.
A multichannel off-axis passive bidirectional fiber optic rotary joint has been built with a derotating transmissive intermediate optical component (IOC) comprised of a nearly coherent optical fiber bundle with lensed transmitters and receivers. This proof-of-concept model has been tested for insertion loss and rotational variation. Insertion loss measurements vary from 3.2 to 5.9 dB and 4.0 to 9.9 dB at 1300 and 1550 nm, respectively. The variation in insertion loss values is due to several broken fiber paths inside the IOC. Improvements in manufacturing and packaging the IOC will allow the performance to approach the design goal of 3 to 5 dB insertion loss with less than 0.5 dB rotational variation at wavelengths of 600-1550 nm. Multichannel, off-axis, passive, and bidirectional light transmission capabilities over a broad range of wavelengths are important considerations to the system designer who must transmit multiple fiber optic remote sensing or communication channels across a rotary interface.
A hierarchical and adaptive control scheme for multisensor systems is introduced to improve image understanding, the correspondence (registration) problem, and sensory data fusion. The neural network approach brings adaptiveness and learning not only to the control level of the overall system architecture, but also to the processing level of the image frames. Furthermore, the improved sensing capability enhances the performance of large, complex, integrated sensor-driven robotic systems.
We present a neural network model for sensory fusion based on the design of the visual/acoustic target localization system of the barn owl. This system adaptively fuses its separate visual and acoustic representations of object position into a single joint representation used for head orientation. The building block in this system, as in much of the brain, is the neuronal map. Neuronal maps are large arrays of locally interconnected neurons that represent information in a map-like form; that is, parameter values are systematically encoded by the position of neural activation in the array. The computational load is distributed to a hierarchy of maps, and the computation is performed in stages by transforming the representation from map to map via the geometry of the projections between the maps and the local interactions within the maps. For example, azimuthal position is computed from the frequency and binaural phase information encoded in the signals of the acoustic sensors, while elevation is computed in a separate stream using binaural intensity information. These separate streams are merged in their joint projection onto the external nucleus of the inferior colliculus, a two-dimensional array of cells which contains a map of acoustic space. This acoustic map, and the visual map of the retina, jointly project onto the optic tectum, creating a fused visual/acoustic representation of position in space that is used for object localization. In this paper we describe our mathematical model of the stage of visual/acoustic fusion in the optic tectum. The model assumes that the acoustic projection from the external nucleus onto the tectum is roughly topographic and one-to-many, while the visual projection from the retina onto the tectum is topographic and one-to-one. A simple process of self-organization alters the strengths of the acoustic connections, effectively forming a focused beam of strong acoustic connections whose inputs are coincident with the visual inputs.
Computer simulations demonstrate how this mechanism can account for the existing experimental data on adaptive fusion and make sharp predictions for experimental tests.
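A toy version of the self-organization process described above, a Hebbian rule that sharpens an initially uniform one-to-many acoustic projection around the coincident topographic visual input, might look like the following; the map size, learning rate, and update rule are arbitrary choices, not the model's actual equations.

```python
def hebbian_align(n, steps=50, lr=0.1):
    """Each acoustic unit starts with uniform weights onto n tectal cells.
    Repeated coincidence with the one-to-one visual input strengthens the
    topographically matching connection; normalization keeps weights bounded."""
    w = [[1.0 / n] * n for _ in range(n)]
    for _ in range(steps):
        for src in range(n):
            # Visual input is topographic: only tectal cell `src` is active
            visual = [1.0 if t == src else 0.0 for t in range(n)]
            for t in range(n):
                w[src][t] += lr * 1.0 * visual[t]  # Hebbian coincidence term
            s = sum(w[src])
            w[src] = [x / s for x in w[src]]       # divisive normalization
    return w

weights = hebbian_align(4)  # each row peaks at its own topographic position
```

After training, the strongest acoustic connection from each unit lands on the tectal cell driven by the corresponding visual input, i.e., a "focused beam" aligned with the visual map.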
An approach is presented to estimate and minimize registration error between an optical or infrared imaging sensor and a nonimaging radar in a coboresighted multi-sensor object acquisition system. Previous efforts have been concentrated on analyzing the registration error between the sensed image and the reference image or between the same type of sensor signals. In this paper, models are developed for the centroid estimation error of an object from each sensor and for the registration error. The mean-square registration error is defined as a function of the sensor parameters and characteristics of the imaging sensor and the centroid uncertainty and pointing error of a monopulse radar. The optimal calibration range is estimated in terms of the sensor parameters and calibration object dimension.
This paper describes a model for integrating information acquisition functions into a response planner within a tactical self-defense system. This model may be used in defining requirements in such applications for sensor systems and for associated processing and control functions. The goal of information acquisition in a self-defense system is generally not to achieve the best possible estimate of the threat environment, but rather to resolve that environment sufficiently to support response decisions. We model the information acquisition problem as that of achieving a partition among possible world states such that the final partition maps into the system's repertoire of possible responses.
Sensor fusion techniques are currently being investigated for potential application to a variety of smart weapon systems. Primary sensor candidates include active and passive millimeter-wave as well as active and passive infrared configurations. Improved overall dual-mode sensor performance must be carefully traded off against the hardware complexity, packaging challenges, and increased costs associated with the sensor fusion process. An additional significant factor is the choice of sensor geometry: side-by-side or common aperture.
System performance and reliability, given a sensor failure, can be improved by the use of multiple sensors of different types. A failure may be due to loss of information of any type. In order to use multiple sensors, it is necessary to perform local estimation, combine the local estimates in a central processor, and detect, isolate, and accommodate sensor failures. Methods for attacking these problems are generally known as sensor fusion. The general sensor fusion equations are set up, showing how to construct the local estimators, the central processing (fusing) algorithm, and the "outer logic" for dealing with sensor failure. In addition, application of the theory is demonstrated through simulation, with generic sensors employing non-linear models. The results show the detection, isolation, and accommodation of sensor failures. Based upon this study, the concept of sensor fusion shows promise for significant improvement in many systems employing multiple sensors.
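For independent errors, the central fusing step for two local scalar estimates reduces to inverse-variance weighting, and a simple consistency gate can serve as the kind of "outer logic" described above for flagging a failed sensor; this is a generic sketch, not the paper's equations.

```python
def fuse(x1, var1, x2, var2):
    """Inverse-variance (minimum-variance) combination of two
    independent scalar estimates; returns fused estimate and variance."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    x = (w1 * x1 + w2 * x2) / (w1 + w2)
    return x, 1.0 / (w1 + w2)

def consistent(x1, var1, x2, var2, gate=3.0):
    """Outer logic: flag a failure if the two estimates disagree by more
    than `gate` standard deviations of their combined uncertainty."""
    return abs(x1 - x2) <= gate * (var1 + var2) ** 0.5

x, v = fuse(10.0, 1.0, 12.0, 4.0)  # fused estimate leans toward the
                                   # more confident (lower-variance) sensor
```

Note that the fused variance is always smaller than either input variance, which is the basic payoff of multisensor estimation.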
A method to improve the locating accuracy of a multi-detector system is proposed. To evaluate the method, the target, the corresponding irradiance distribution function, and the whole information extraction process were simulated by computer. The results show that under some circumstances the locating accuracy can be improved a great deal.
A sensor designed to aid in the docking of spacecraft is under development for NASA by Cubic Corporation. This sensor uses three lasers to track the prospective target and to determine the required parameters necessary to calculate the ideal approach maneuver. The system combines the inputs from several sensors, including polarization, continuous tone DME, and a CID to achieve the desired results.
Neural network models have attracted the attention of many researchers in the pattern recognition domain. These models possess many interesting computational properties, including content-addressable memory, automatic generalization, and the ability to modify their processing (learn) based on their input data. They also promise extremely fast implementations if they can be realized in special-purpose hardware. Such special-purpose implementations, due to the limits of integration, imply a finite number of neurons for any one system. Under such constraints, the construction of large neural network systems implies parallelism among sub-modules. This paper presents an architecture based on fusing the outputs of several independent neural network systems in order to define a single aggregate system. The system presented here recognizes handwritten ZIP code digits taken from pieces of United States Postal Service (USPS) mail. The overall system is composed of several sub-modules, each of which could be realized in a neural network of reasonable size. Parts of the system have been shown to achieve up to 75% accuracy, processing digits at the rate of about one digit per second (real time). Currently, the neural network paradigm on which the system is based is being simulated on a serial machine (a Symbolics 3600 series Lisp machine). In order to keep the total time of the system within a reasonable limit of ~100 time steps, each network module has been limited to a few (<10) time steps. Current work involves the definition of other modules whose evidence will be combined with that of the described module. The gross system architecture is designed to integrate multiple evidence sources. The overall goal is to have both neural network and symbol processing paradigms in a single system.
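The output-fusion idea can be sketched generically as summing per-module class scores and rejecting low-margin decisions; the score vectors and rejection margin below are illustrative, not taken from the USPS system.

```python
def fuse_modules(score_vectors, reject_margin=0.1):
    """score_vectors: one per-class score list per network module.
    Sums scores across modules and returns the winning class index,
    or None (reject) when the top-two margin is too small."""
    totals = [sum(s) for s in zip(*score_vectors)]
    ranked = sorted(range(len(totals)), key=lambda c: totals[c], reverse=True)
    best, second = ranked[0], ranked[1]
    if totals[best] - totals[second] < reject_margin:
        return None  # ambiguous digit: defer rather than guess
    return best

# Two hypothetical modules voting over three digit classes
digit = fuse_modules([[0.1, 0.7, 0.2],
                      [0.2, 0.6, 0.2]])
```

Rejection is important in this application: a misread ZIP digit is costlier than routing the piece to a human reader.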
The effects of transmission delay and channel errors on the performance of a distributed fusion system are studied. At a given time instant, the decisions from some sensors may not be available at the fusion center due to transmission delays. Assuming that the fusion center must make a decision based on the data from the remaining sensors, provided that at least one peripheral decision has been received, it is shown that the optimal decision rule maximizing the probability of detection for a fixed probability of false alarm at the fusion center is the Neyman-Pearson test at the fusion center and at the sensors as well. Furthermore, it is shown that, in the case of noisy channels, the decision made by each sensor depends on the reliability of the corresponding transmission channel. Moreover, the probability of false alarm at the fusion center is restricted by the channel errors: for a given decision rule, the probability of any channel being in error must be kept below a certain level in order to achieve a desired probability of false alarm at the fusion center. A suboptimal, but very near-optimal, computationally efficient algorithm is developed to solve for the sensor and fusion thresholds sequentially. Numerical results are provided to demonstrate the closeness of the solutions obtained by the suboptimal algorithm to the optimal solutions.
The problem of estimation and filtering in a distributed sensor environment is considered. The sensors obtain measurements of target trajectories at random times and transmit them to the fusion center. The measurements arrive at the fusion center with random delays due to queueing as well as randomness in the transmission and propagation times (the sensor position may be unknown or changing with respect to the fusion center). The fusion center generates estimates of the target tracks using the received measurements, which arrive at random times, may have unknown time-origin, and may arrive out of sequence. Optimal filters for estimating target tracks from measurements of uncertain origin received at random times and out of sequence have been derived for the cases of random sampling, random delay, and both random sampling and random delay. It is shown that the optimal filters constitute an extension of the Kalman filter that accounts for the uncertainty in the data time-origin.
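For reference, a single predict-update cycle of the scalar Kalman filter that such derivations extend is sketched below; the paper's actual contribution, handling out-of-sequence and uncertain-origin measurements, is not reproduced here, and the constant-position model is an arbitrary simplification.

```python
def kalman_step(x, p, z, dt, q, r):
    """One predict-update cycle for a scalar constant-position model.
    x, p: prior state estimate and variance; z: measurement;
    dt: time elapsed to the measurement's timestamp;
    q: process-noise rate; r: measurement variance."""
    p = p + q * dt        # predict: uncertainty grows with elapsed time
    k = p / (p + r)       # Kalman gain
    x = x + k * (z - x)   # update toward the measurement
    p = (1 - k) * p       # posterior variance shrinks
    return x, p

x, p = kalman_step(0.0, 1.0, z=1.0, dt=1.0, q=0.1, r=1.0)
```

A delayed measurement enters through `dt`: predicting to the measurement's own timestamp before updating is the simplest way random delay shows up in the recursion.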
The derivation of sensor fusion algorithms is presented with emphasis on detection and estimation of radar-type targets. Theoretical expressions are developed in a form that provides the applications engineer with the fundamentals necessary for implementing these algorithms in distributed-sensor systems. The expressions lend themselves to knowledge- and rule-based methods, so that a priori and learned information about the overall scenario can be used to reduce uncertainties and thereby efficiently direct signal energy toward optimizing system performance. Various surveillance situations are considered and accounted for in the development of the algorithms, including Bayesian and Neyman-Pearson detection, sequential detection, multiple-target situations, estimation, colored noise such as jamming, Constant False Alarm Rate (CFAR) processing, and multiple background estimation. Optimization of mutual information transfer through the distributed sensors is also treated. Where most investigators focus on optimizing the sensors given the fusion rule, our development explores methods for optimizing the fusion rule given the sensor criteria. Some procedures are also presented for the mutual or global optimization of both the sensors and the fusion center. The effects of bandwidth and channel capacity constraints between the sensors and the fusion center are taken into account. Numerical results illustrate the improvements obtained from the use of multiple sensors with various fusion rules relative to the performance of a single sensor [1-3].
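The idea of optimizing the fusion rule given fixed sensor operating points can be illustrated by exhaustive search over deterministic fusion rules under conditional independence; the operating points and false-alarm constraint below are made up, and this brute-force search is only practical for a handful of sensors.

```python
from itertools import product

def best_fusion_rule(pd, pf, alpha):
    """Search all deterministic fusion rules over n binary local decisions;
    return (rule, Pd, Pf) maximizing system Pd subject to Pf <= alpha,
    assuming conditionally independent sensors with operating points
    (pd[i], pf[i]). A rule maps each local-decision pattern to u0."""
    n = len(pd)
    patterns = list(product([0, 1], repeat=n))
    def prob(pattern, p):  # probability of this local-decision pattern
        q = 1.0
        for u, pi in zip(pattern, p):
            q *= pi if u else (1 - pi)
        return q
    best = None
    for rule in product([0, 1], repeat=len(patterns)):
        Pd = sum(prob(pt, pd) for pt, u0 in zip(patterns, rule) if u0)
        Pf = sum(prob(pt, pf) for pt, u0 in zip(patterns, rule) if u0)
        if Pf <= alpha and (best is None or Pd > best[1]):
            best = (rule, Pd, Pf)
    return best

rule, pd_sys, pf_sys = best_fusion_rule([0.8, 0.7], [0.1, 0.2], alpha=0.05)
```

For these operating points, only the AND rule (declare a target when both sensors do) satisfies the false-alarm constraint, which is the kind of sensor-criteria-first result the abstract alludes to.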
In this paper the performance of a digital processor, referred to as the order statistic (OS) filter, is analyzed as a noncoherent processor in a detection system. The statistical description of the output of the OS filter is presented in terms of the filter parameters and the statistics of the input. The mathematical form of this description, for the case of independent and identically distributed inputs, is used to develop general input-output relations of the filter. These relationships are used to indicate the critical factors that affect the performance of the OS filter, and a quantitative expression is presented to determine the rank of the OS filter necessary for optimal detection performance. It is shown that the OS filter with extreme ranks (minimum and maximum detectors) performs well in situations where a significant skewness difference exists between the different classes of input signals. As the skewness difference between the classes decreases, the performance of the OS filter with extreme ranks degrades, while the performance of the OS filter for intermediate ranks is robust over this change.
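The OS filter itself is simple to state: the output is the k-th ranked sample of each sliding window, so rank 1 gives a minimum detector, rank n a maximum detector, and intermediate ranks robust, median-like behavior. A minimal sketch (window length and data are illustrative):

```python
def os_filter(samples, window, rank):
    """Order-statistic filter: for each length-`window` sliding block,
    output the `rank`-th smallest sample (rank is 1-based)."""
    out = []
    for i in range(len(samples) - window + 1):
        block = sorted(samples[i:i + window])
        out.append(block[rank - 1])
    return out

filtered = os_filter([3, 1, 4, 1, 5, 9], window=3, rank=2)  # sliding median
```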
A method is proposed to exploit simultaneous, co-registered FLIR and TV images of isolated objects against relatively bland backgrounds to improve recognition of those objects. The method uses edges extracted from the TV imagery to segment objects in the FLIR imagery. A binary tree classifier is shown to perform significantly better with objects defined in this manner than with objects extracted separately from the FLIR or TV images, or with a feature-level fusion scheme which combines features of separately extracted objects. The structure of the tree indicates that the cross-segmented objects are simply ordered in feature space. An argument is presented that this sensor fusion scheme is natural in terms of the known organization of neural vision systems. Generalizations to other sensor types and fusion schemes should be considered, since it has been shown that co-registered imagery can be exploited to improve recognition at no additional computational cost.
A constraint-based approach to uniformly combining information from multiple representations and sources of sensory data is described. The approach is important to research in intermediate grouping, knowledge-based model matching, and information fusion. The techniques presented extend the capabilities of an earlier system that applied constraints to attributes of single types of extracted image events called tokens. Relational measures are defined between symbolic tokens so that sets of tokens across representations can be selected and grouped on the basis of constraint functions applied to these relational measures. Since typical low-level representations involve hundreds or thousands of tokens in each representation, even binary relational measures can involve very large numbers of token pairs. Control strategies for ordering and filtering tokens, based upon constraints on token attributes and token relationships, can be formed to reduce the computation involved in producing token aggregations. The system is demonstrated using region and line data and an associated set of relational measures. The approach can be naturally extended to include tokens extracted from motion, stereo, and range data.
Neural networks are ideally suited to sensing images and waveforms, processing them into intermediate levels of representation, and outputting the identification and/or characteristics of the sensed object. These networks can solve problems that conventional algorithms have not, and in several cases this new technology has already performed better than humans (e.g., sonar signal classification). A brief review of where autonomous agents may use neural networks and their learning algorithms is presented. A high-yield area is seen in the self-repair of damaged or faulted components. Architectures are proposed for implementing self-repairing sensor and identification systems aboard autonomous agents. One example is presented for a system which identifies visual objects. This system has four layers of massively connected simple parallel processors. Each connection has a weight attribute, and the collective assignment of weights in a layer determines what function the layer will perform. The first layer (the input layer) is simply the pixel detector layer. The second layer has eight sublayers which are sensitive to short line segments in eight different orientations. The third layer detects elementary combinations of the lower-level lines, such as oriented corners or curve segments. The fourth layer has one sublayer for each macroscopic object to be identified, which may be fused with a pinpoint location sensor. The crux of using reconfiguration in this type of sensor is that when one (or several) of the units or detectors becomes inoperative, neighboring detectors in that layer may be used to reprogram the weights connecting surviving units to restore functionality. This strategy takes advantage of the redundancy of parallel processors present in most types of neural networks.
Alternatively, a properly functioning agent may teach the injured agent, or competitive learning may be used to repair middle processing layers when an operative after-the-fact sensor is available for teaching the output layer.
We describe an approach which facilitates and makes explicit the organization of the knowledge necessary to map multisensor system requirements onto an appropriate assembly of algorithms, processors, sensors, and actuators. We have previously introduced the Multisensor Kernel System and Logical Sensor Specifications as a means for high-level specification of multisensor systems. The main goals of such a characterization are: to develop a coherent treatment of multisensor information, to allow system reconfiguration for both fault tolerance and dynamic response to environmental conditions, and to permit the explicit description of control. In this paper we show how Logical Sensors can be incorporated into an object-based approach to the organization of multisensor systems. In particular, we discuss a multisensor knowledge base, a sensor specification scheme, and a multisensor simulation environment.