When 2-D sampled fields of view are generated by staring or scanning arrays of IR detectors, the motion of an unresolved point object is usually measured in terms of changes in position estimates over a series of two or more successive fields, or "frames," of sampled data. However, when motion is pronounced enough to affect the data in a single frame, velocity can be estimated from that effect. Investigation of motion on the order of 0.10 detector subtense (DS) per detector integration time (DIT) has shown that velocity can be estimated jointly with position to within a small percentage of the actual velocity at a signal-to-noise ratio (SNR) of 10.0. A linear recursive maximum likelihood estimator, used for position and intensity, is described and extended to encompass the joint estimation of velocity from a single frame of data. Estimation precision is demonstrated by a Monte Carlo approach based on simulation of the estimation process.
This paper treats the problem of source dynamic motion evaluation in underwater applications using recursive weighted least squares estimation. The issue of compensating for underwater motion effects arises in a number of areas of current interest, such as the control and operation of autonomous remotely operated vehicles, underwater seismic exploration, and buoy wave data analysis. Earlier treatments of the problem relied on frequency response methods and Kalman filtering. The present paper discusses the compensation problem using an alternative discrete model of the process and proposes use of the recursive weighted least squares algorithm for its solution. The algorithm requires less knowledge of the noise statistics than Kalman filtering and thus provides an attractive alternative to it. Emphasis is given to practical implementation using parallel processing and systolic array methodologies.
In this paper, we investigated the extraction of ground targets from background clutter using CO2 laser radar (LADAR) imagery. Target extraction, or segmentation, is a critical step in the Automatic Target Recognition (ATR) process. We investigated the utility of various heuristics, such as the differences in expected reflectance, range variation, and size between military targets and natural objects, to extract military targets effectively. We devised a computationally efficient recursive algorithm, based upon a state-space technique, to extract targets from the background. The recursive nature of the algorithm and its suitability for parallel, concurrent processing will allow for real-time implementation. Computer simulation results are also presented.
This paper presents Area Moving Target Indication (Area MTI) as a signal processing technique for the detection of slow-moving or tangentially moving targets. The work was performed by the Rome Air Development Center (RADC) in their study of the Bird/Aircraft Strike Hazard (BASH) problem at Dover AFB. A radar for bird hazard warnings was necessary to cope with the large numbers of birds wintering at several wildlife refuges located near the base. Such a radar presents a formidable surveillance/tracking problem, and Area MTI has the potential to provide a solution. Although the improvement obtained was less than the ultimate rejection capability of conventional MTI, Area MTI did result in a relative enhancement for the detection of slow-moving and tangentially moving targets. Additional techniques to improve performance are currently being investigated, for example, the integration of conventional MTI and Area MTI. It is expected that this technique will enhance the detection of target returns suppressed by conventional MTI while retaining Area MTI's high "average" improvement factor.
We present a thresholding algorithm that detects potential targets in the various kinds of noise present in infrared video images. This algorithm was developed for a real-time, microprocessor-based system that uses a scanning focal plane array to generate the input video image. Our objective is to describe the design and development of a thresholding method that distinguishes the target signal from low-frequency scan noise, or shading, and high-frequency detector noise. The chosen threshold calculation comprises a 16-pixel Running Average (RA) baseline, to which is added a "False Alarm Rate" factor (Alpha) applied to a "Sum Absolute Difference" (SAD) noise measurement. A unique threshold value is calculated for each pixel within the field of view. Pixels with intensity values above their corresponding threshold limits are digitized and stored for further processing. We also discuss the algorithm's requirements, constraints, and performance.
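The per-pixel threshold described above can be sketched as follows. This is a minimal 1-D illustration, not the fielded algorithm: the 16-sample trailing window, the mean-absolute-difference form of the SAD measure, and the value of Alpha are assumptions made for the sketch.

```python
# Sketch of an adaptive per-pixel threshold: RA baseline + Alpha * SAD noise term.
# Window length, Alpha, and the exact SAD definition are illustrative assumptions.

def adaptive_thresholds(pixels, alpha=3.0, window=16):
    """Per-pixel thresholds for a 1-D scan line of intensities."""
    thresholds = []
    for i in range(len(pixels)):
        lo = max(0, i - window)
        win = pixels[lo:i] or [pixels[i]]  # trailing window (fallback at scan start)
        ra = sum(win) / len(win)           # running-average background estimate
        sad = sum(abs(p - ra) for p in win) / len(win)  # noise measurement
        thresholds.append(ra + alpha * sad)
    return thresholds

def exceedances(pixels, alpha=3.0):
    """Indices of pixels exceeding their individual thresholds."""
    th = adaptive_thresholds(pixels, alpha)
    return [i for i, (p, t) in enumerate(zip(pixels, th)) if p > t]
```

For example, a bright sample embedded in a flat background is flagged, while the background itself is not, because each pixel is compared against a threshold built from its own local statistics.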
We present a methodology for tracking the centroid of small extended objects using measurements from a forward-looking infrared imaging sensor. The statistical characterization of the centroid of a frame as a noisy linear measurement of the centroid of the target is obtained. The offset measurement noise is shown to be autocorrelated. State variable models and the corresponding filters for tracking the target centroid with these measurements are presented. Their performances are compared, and it is verified with simulations that the filter that models the autocorrelated measurement noise provides the best performance.
The order statistic (OS) filter is a digital processor whose performance is equivalent to that of the binary integrator used in radar signal processing. This paper presents a sort-function analysis of the OS filter that aids in its application and optimization. It is shown that the OS filter can detect target signals in the presence of clutter at low signal-to-clutter ratios (SCR), provided there exist quantile regions in which the statistical distributions of the target and clutter differ significantly. Furthermore, the analysis indicates that for low SCR the OS filter outperforms the n-pulse integrator when a significant difference exists between the clutter and target distributions in only a limited number of quantile regions. Examples of these detection properties are illustrated with results from a computer simulation.
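A minimal sketch of an OS filter used as a detector: the rank-th order statistic of a sliding window is compared against a threshold, which is equivalent to requiring that at least (window − rank) samples exceed it, as a binary integrator does. The window size, rank, and threshold below are illustrative assumptions.

```python
# Sketch of an order statistic (OS) filter detector: a detection is declared
# where the rank-th smallest value (0-based) in the surrounding window exceeds
# the threshold. A persistent target survives; an isolated spike does not.

def os_filter_detect(samples, threshold, window=9, rank=6):
    """Indices where the rank-th order statistic of the window exceeds
    threshold; partial edge windows are skipped for simplicity."""
    half = window // 2
    hits = []
    for i in range(half, len(samples) - half):
        win = sorted(samples[i - half: i + half + 1])
        if win[rank] > threshold:
            hits.append(i)
    return hits
```

With window=9 and rank=6, at least three window samples must exceed the threshold, so a single large clutter spike is rejected while a target extended over several samples is detected.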
The problem of estimating the position of an optical point target by using a staring detector array is analyzed. Based on the assumptions that the target's point-spread-function (PSF) is Gaussian, the detector response is uniform over the detector surface, and the signal and noise at the output of each detector are both Poisson-distributed, the Cramer-Rao lower bound (CRB) on the precision of intensity/position estimation is obtained. Using the CRB as a performance indicator, the following factors influencing the estimation performance for an isolated target are examined: the target's intra-pixel position, signal-to-noise ratio, size of the detector relative to the PSF, size of the detector array, and dead space of the array. The CRB analysis is then extended to address the CSO (closely spaced objects) resolution issue. The degradation of estimation performance due to CSO interference is determined as a function of the separation and its orientation.
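The CRB computation described above can be sketched for a 1-D detector array: with Poisson counts of mean mu_i(theta), the Fisher information is I(theta) = sum_i (dmu_i/dtheta)^2 / mu_i, and the CRB on position is 1/I(theta). The Gaussian-PSF pixel model, the unit pixel pitch, and the numerical derivative below are assumptions of this sketch, not the paper's exact formulation.

```python
# Sketch of the Cramer-Rao bound on point-target position for a 1-D array
# of unit-pitch pixels, Gaussian PSF, and Poisson signal plus background.
import math

def pixel_mean(theta, left, right, amp, sigma, bkg):
    """Expected Poisson count in pixel [left, right): background plus the
    integral of a Gaussian PSF of total intensity amp centered at theta."""
    phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return bkg + amp * (phi((right - theta) / sigma) - phi((left - theta) / sigma))

def crb_position(theta, n_pix, amp, sigma, bkg, h=1e-5):
    """CRB = 1 / I(theta), with I(theta) = sum_i (dmu_i/dtheta)^2 / mu_i
    for Poisson data; the derivative is taken by central differences."""
    info = 0.0
    for i in range(n_pix):
        mu = pixel_mean(theta, i, i + 1, amp, sigma, bkg)
        dmu = (pixel_mean(theta + h, i, i + 1, amp, sigma, bkg) -
               pixel_mean(theta - h, i, i + 1, amp, sigma, bkg)) / (2 * h)
        info += dmu * dmu / mu
    return 1.0 / info
```

As the abstract's SNR dependence suggests, raising the target intensity lowers the bound.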
An Infrared Search and Track (IRST) system may consist of a target detection pre-processor and a higher-level processor which evaluates candidate detections and forms tracking hypotheses. The detection signal processing must evaluate large numbers (tens of thousands) of pixels at the sensor frame rate and determine a small number (tens) of candidate detections. To be effective, detection processing must be able to detect targets at long ranges and extract targets from background clutter. While the higher-level processor will be able to reject some false alarms and occasionally fill in missed targets, system performance will be critically degraded by poor detection capabilities. For most applications, such as airborne reconnaissance, size and weight considerations impose significant limitations on allowable computational complexity. Thus, the detection processing must be as accurate as possible while remaining fast and simple.
A methodology is presented, and algorithms are designed, which estimate the total intensity from sources that are extended relative to a sensor detector element. For the methodology described, algorithm processing is invariant to source size and shape. Processing of the sensor signals in the frequency domain is shown to be straightforward and is designed to handle both AC- and DC-coupled electronics. The algorithm design is based upon processing the time-axis central slice of the 2-D Fourier transform of thresholded detector samples.
This paper presents the adaptation and performance evaluation of the Wilcoxon and Mann-Whitney nonparametric detectors for point targets in infrared clutter backgrounds. These detectors are designed to detect a level shift in a set of M samples containing interference which has an incompletely known distribution. We present a transformation based on the Walsh transformation which allows us to apply the Wilcoxon or Mann-Whitney detectors to the case of a target of small extent embedded in a large number of background samples. We include the formulation of the two nonparametric detectors along with analytical approximations that can be used to calculate performance predictions in terms of the probabilities of detection and false alarm. Finally, we present simulation results comparing the nonparametric detectors with an adaptive linear detector. Performance comparisons are shown for simulations against scenes consisting of recorded infrared clutter video as well as a synthetic white noise scene.
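A minimal sketch of the Mann-Whitney (rank-sum) idea for detecting a level shift: count the target/background sample pairs in which the target sample is larger. The fixed-fraction decision threshold below stands in for one set from the desired false-alarm rate (in practice via a normal approximation to the U statistic); it is an assumption of the sketch.

```python
# Sketch of a Mann-Whitney level-shift detector: a suspected target window is
# compared against background samples via the rank-sum U statistic.

def mann_whitney_u(target, background):
    """U statistic: number of (target, background) pairs with the target
    sample larger; ties count one half."""
    u = 0.0
    for t in target:
        for b in background:
            if t > b:
                u += 1.0
            elif t == b:
                u += 0.5
    return u

def level_shift_detected(target, background, fraction=0.95):
    """Declare a detection when U exceeds a fixed fraction of its maximum
    m*n (threshold placeholder; set from the false-alarm rate in practice)."""
    return mann_whitney_u(target, background) > fraction * len(target) * len(background)
```

Because only ranks matter, the test needs no knowledge of the clutter distribution beyond its being common to the background samples, which is the nonparametric property the abstract relies on.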
A critical element in the Time Dependent Processing chain for scanning infrared sensors is electronic gamma circumvention. The most successful approach to gamma circumvention to date is a two stage algorithm, Spike Adaptive Time Delay and Integration (SATDI). This heuristic approach makes no assumptions about gamma-induced noise except that it is an additive corruption of the true signal. If, however, one can model the form of the gamma-induced noise distribution, it is possible to design a maximum likelihood estimation model which explicitly utilizes the parametric form of the noise. Such a model will, in general, be more efficient than a heuristic one, since it contains more information about the noise process. The parametric form studied in this paper is an exponential distribution, λ exp(−λr), where r is the received signal. This distribution is a reasonable approximation to the observed gamma spectrum in infrared detectors. A maximum likelihood estimation equation corresponding to this noise distribution is derived, and its performance is compared to SATDI. It is found that, for a bright gamma background, either much better detection for a fixed false alarm rate, or many fewer false alarms for a fixed detection probability, may be achieved using the maximum likelihood estimator as compared to SATDI.
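A sketch of working with the exponential gamma-noise model: the maximum likelihood estimate of the rate is lambda = n / sum(r), and a spike threshold with false-alarm probability p_fa follows from P(r > t) = exp(−lambda t). This standalone estimator/threshold pair illustrates the parametric idea only; it is not the full estimation equation derived in the paper.

```python
# Sketch: ML fit of an exponential gamma-noise model and a spike flag derived
# from it. The choice of p_fa and the whitened-sample assumption are illustrative.
import math

def ml_lambda(samples):
    """ML estimate of the exponential rate: lambda = n / sum(r)."""
    return len(samples) / sum(samples)

def gamma_spike_threshold(samples, p_fa=1e-3):
    """Threshold with false-alarm probability p_fa under the fitted model:
    P(r > t) = exp(-lambda t)  =>  t = -ln(p_fa) / lambda."""
    return -math.log(p_fa) / ml_lambda(samples)

def flag_spikes(samples, p_fa=1e-3):
    """Indices of samples inconsistent with the fitted exponential model."""
    t = gamma_spike_threshold(samples, p_fa)
    return [i for i, r in enumerate(samples) if r > t]
```

Because the threshold is derived from the fitted noise parameters rather than a fixed heuristic, it adapts as the gamma background brightens, which is the source of the efficiency gain the abstract reports over a purely heuristic scheme.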
This paper presents recent work on the application of the velocity filter concept to target acquisition and track initiation in passive electro-optical surveillance systems for strategic defense. The problem of rapidly initiating angle-only tracks for multiple clusters of unresolved objects observed by a single passive sensor is emphasized. Simulated midcourse threat data are used to demonstrate the approach.
Gabriel Frenkel: It seems to me that since the audience consists of scientists and engineers working on so many diverse problems, it is appropriate that we begin our discussion with the question of where this particular class of problem arises. Who would like to start?
The problem of track initiation in the exoatmospheric ballistic missile defense scenario has been studied before, assuming a scanning LWIR sensor. This paper is concerned with the same problem, but under a more difficult sensor configuration, namely, a set of mid-altitude satellite sensors. To handle conditions of high target density and poor observability, a new initiation procedure is used for the iterative least-squares (ILS) filter, and the filter itself is modified to accept an a priori state covariance. The target density is effectively reduced by first calling upon an edge tracker, which forms track files of the edges of the clusters. Track initiation is then accomplished by referencing those edge track files, assuming that targets in the same cluster travel in parallel. Tracks initiated by two sensors are merged to provide precise state estimates to the extended Kalman filter that is used to carry out the track maintenance task.
Multiple hypothesis tracking is a very effective approach to the data association problem of multiple target tracking. However, a major present deficiency of this method is the lack of a consistent and concise way to present the resulting track information to a user or to sensor manager routines. This paper develops a new technique, Coordinated Presentation, which encompasses the interaction and degree of confidence of the hypotheses to yield a concise and meaningful presentation of target tracks. The Coordinated Presentation method abstracts information from all of the significant current hypotheses to yield an optimal estimate of the number, centroid, and extent of a group of unresolvable target tracks, or the state (position) and variance for resolvable separate target tracks. As more observations are received and the target tracks of a group become resolvable, the transition to separate target tracks occurs automatically. The paper presents results showing the operation of the method.
Pattern matching is a well-known technique for associating successive measurements of many closely spaced objects when there is small relative motion of the objects between measurements. This paper examines the effectiveness of two-frame pattern matching techniques, including the Munkres algorithm and a "subgroup matcher". The performance and limitations of these algorithms in the presence of spurious signals and non-random internal motions are studied. The subgroup matcher is shown to be much more robust to measurement anomalies.
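Two-frame pattern matching reduces to a minimum-cost one-to-one assignment between the frames. The sketch below brute-forces the optimum over permutations, which is tractable only for tiny frames; the Munkres (Hungarian) algorithm examined in the paper solves the same problem in O(n^3). Squared Euclidean distance as the association cost is an assumption of the sketch.

```python
# Sketch of two-frame association as a minimum-cost assignment over 2-D points.
# Brute force over permutations for illustration only (feasible for small n);
# the Munkres algorithm gives the same optimum in polynomial time.
from itertools import permutations

def match_frames(frame1, frame2):
    """Return (index-in-frame1, index-in-frame2) pairs minimizing the total
    squared distance between associated points. Frames have equal length."""
    def cost(perm):
        return sum((frame1[i][0] - frame2[j][0]) ** 2 +
                   (frame1[i][1] - frame2[j][1]) ** 2
                   for i, j in enumerate(perm))
    best = min(permutations(range(len(frame1))), key=cost)
    return list(enumerate(best))
```

For example, two points that swap apparent order between frames are still associated correctly because the global cost, not per-point nearest neighbors, is minimized.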
A new algorithm/architecture is described that tracks objects in a dense multiple target environment, with clutter, false alarms, non-unity probability of detection and occasionally unresolved returns. The algorithm is non-recursive, being a modification of a multiple hypothesis tracker. The architecture is a generalized parallel prefix network, which exploits recent advances in semiconductor technology. The algorithm also exploits two recent advances in parallel sorting and parallel string matching.
The standard Kalman filter requires that the statistical characteristics of the system signal and noise be completely known. In practice, this is almost impossible to achieve. Numerous adaptive techniques have been developed to compensate for inexact system modeling; some are insufficiently effective, while others are ad hoc approaches requiring substantial computer resources. A novel adaptive technique is proposed in which an integral term, providing an additional smoothing effect and design flexibility, is added to the standard Kalman estimator. Optimal structures are derived by using the innovation method for continuous and discrete data Gaussian process models with linear dynamics. The proportional integral estimator (PIE) is simple to implement, yet by adjusting the contributions from the proportional term and the integral term of the filter residual, it provides flexible adaptive features to suit design requirements, such as robustness to parameter variation and maneuvering target tracking. An application to a tracking system is presented, and the behavior of the error covariance matrix is examined. The example included for comparison of the standard Kalman filter and the PIE indicates that while the results obtained by the two filters are comparatively close, significant improvement is observed in response time and noise smoothing capability.
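The proportional-integral structure can be sketched for a scalar state: each update adds a gain times the current residual (the proportional, Kalman-like term) and a gain times the accumulated residual (the integral term). The constant-state model and the gain values kp and ki below are illustrative assumptions, not the paper's derived optimal gains.

```python
# Sketch of a scalar proportional-integral estimator (PIE). The integral of
# past residuals adds smoothing and bias rejection beyond the proportional term.

def pie_step(x_hat, residual_sum, z, kp, ki):
    """One PIE update for a scalar constant state given measurement z."""
    residual = z - x_hat            # innovation
    residual_sum += residual        # integral of past residuals
    x_hat = x_hat + kp * residual + ki * residual_sum
    return x_hat, residual_sum

def pie_filter(measurements, kp=0.3, ki=0.05):
    """Run the PIE over a measurement sequence, starting from zero."""
    x_hat, s = 0.0, 0.0
    for z in measurements:
        x_hat, s = pie_step(x_hat, s, z, kp, ki)
    return x_hat
```

With these gains the closed loop is stable and the integral term drives the steady-state error to zero, so the estimate converges to a constant measurement level.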
The Joint Probabilistic Data Association Filter (JPDAF) has been used successfully for tracking multiple targets in the presence of source uncertainty and measurement inaccuracy. Within this framework, the problem of tracking multiple manoeuvring targets is considered. This paper extends the Adaptive Probabilistic Data Association Filter (APDAF), originally formulated for monotarget tracking, to the multitarget tracking problem. The resulting Adaptive Joint Probabilistic Data Association Filter (AJPDAF) estimates the state of each target in a cluttered environment when the noise statistics are unknown. Simulation results for multiple manoeuvring target tracking are presented.
This paper is concerned with the performance evaluation of multiple-target tracking systems, in particular, with evaluating the quality of tracks as the outputs of such systems. Very simple analytic functions are developed to relate key tracking environment parameters to a selected tracking performance measure, i.e., track purity, which is an important quantification of the tracking output quality when single-frame based target classification is not possible. The method for predicting track purity is based on a simple mathematical model for data correlation performance within a single data set. Several Monte Carlo simulations are performed to verify the applicability of the analytic functions.
Aerojet has developed a fault tolerant, distributed tracker under the Advanced Onboard Signal Processor (AOSP) Brassboard Demonstration Program for Rome Air Development Center (RADC). The AOSP Brassboard is a fault tolerant, loosely coupled, distributed network of microprocessors. The tracker function correlates several scans of Representative Return data (the input data come from a scanning IR sensor) to form tracks, which are analyzed to estimate various parameters for the events being tracked. The distributed nature of the AOSP architecture and the stressing performance requirements of the application necessitated a detailed architectural design phase. The key part of the architectural design was the partitioning, which consisted of finding an acceptable allocation of application functions to processors. It was necessary to partition the tracker into several tasks because its stressing memory and throughput requirements could not be satisfied by a single processor from the AOSP Brassboard. The partitioning process had to take into account the extra processing and memory required by fault tolerance. This paper describes the approach to fault tolerance in the distributed tracker used in the AOSP demonstration program.
This paper is concerned with the problem of associating measurements from multiple passive line-of-sight-only sensors in the presence of clutter, missed detections, and an unknown number of targets. The measurement-target association problem is formulated as one of maximizing the joint likelihood function of the measurement partition. Mathematically, this formulation of the data association problem leads to a generalization of the multi-dimensional matching problem, which is known to be NP-complete (nondeterministic polynomial-time complete) when the number of sensors S ≥ 3. Suboptimal algorithms are therefore of considerable importance. In this paper we present two suboptimal algorithms: a backtracking algorithm with a complexity of O(M ln M), where M is the number of possible measurement-target associations, and a relaxation algorithm that successively solves a series of generalized two-dimensional assignment problems, with a worst-case complexity of O(3kn³), where n is the number of reports from each sensor and k is the number of relaxation iterations. The performance of the backtracking algorithm is guaranteed to be much better than the "row-column" heuristic, which has complexity O(M). A nice feature of the relaxation approach is that the resulting primal and dual solutions provide a measure, in terms of the duality gap, of how close the solution is to the (perhaps unknowable) optimal solution. For the passive sensor data association problem, the duality gaps are typically less than 1%. We present comparisons between the two algorithms in terms of performance and time complexity in the context of a passive sensor data association problem involving three sensors. Both algorithms are tested on a wide variety of scenarios spanning a wide range of target densities, measurement accuracies, and false alarm and missed-detection probabilities.
This paper discusses the unique multiple target tracking problems associated with a dense threat environment or low observables. Critical algorithm development issues are identified, and the areas in which existing techniques must be extended are discussed. An overview of proposed architectures that have been identified to address these problems is presented. The concept of birth-to-death tracking is presented, and candidate techniques for tracking and data association for use within this concept are discussed. The paper also outlines threat characteristics and their implications for the choice of data association methods. A major issue addressed in this paper is the manner in which the tracking functions are to be partitioned. To perform accurate data association, it is preferable that data from the entire scene be processed simultaneously. However, due to physical limitations, it is necessary that the processing be widely distributed. One approach presented provides central-level tracking accuracy while also distributing the processing across multiple platforms. The paper closes with an assessment of existing multiple target tracking technology as applied to space-based surveillance programs and proposes an approach for future development.
The Prototype Information Correlation Exploitation System (PICES) is an integrated tracking, data correlation, and multi-sensor data fusion system that automatically generates scene hypotheses (possibilities due to sensor information ambiguities) and ranks them on the basis of all information available to the system at a given time. This paper describes PICES features and presents results utilizing various sensor information. We also discuss PICES prototype microcomputer implementation and future program direction.
Impetus to improve airborne surveillance systems derives from two sources: from evolving threats against which there are growing sensor deficiencies; and from evolving technology which offers new approaches or improved implementations for the surveillance problem. Sensor fusion is one such technological opportunity. Proposed surveillance systems, however, must not only satisfy certain technical requirements on fusion performance but must examine many other performance issues as well, ultimately culminating in an estimate of life cycle cost (LCC) to meet a specified mission objective. It is within this context that we propose to discuss sensor fusion. We will discuss in some detail the decomposition of the performance assessment problem and the resultant implications in terms of component modeling and interfaces. This discussion will start at the surveillance requirement stage; then look successively at sensor suite optimization; sensor modeling (both in terms of measurement capability and operational utilization); sensor fusion (employing a multiple hypothesis approach with adaptive resource allocation); surveillance effectiveness evaluation; and finally life cycle costing.
The task of track initiation by passive sensors grows more difficult as the density of targets and the number of sensors viewing them increase. This is especially true of the sensor-to-sensor association problem for scanning sensors, whose target detections cannot, in general, be made simultaneous. A variety of track initiation algorithms is discussed, and 16 different algorithms for performing fusion of two-dimensional (2-D) tracks into 3-D tracks are defined and tested. These 16 algorithms result from combining four separate cost functions with four separate allocation algorithms. Most of these algorithms employ stereo polar coordinates, defined here, for both 2-D and 3-D tracks. Their use simplifies both the association and initialization phases of track initiation. The purpose here is to test a variety of algorithms, exploring both their performance and their speed for use in algorithm selection. A track initiation algorithm for clustered objects viewed by two or three sensors is also discussed.
A target-oriented method for sensor data fusion is being developed to provide practical, automated, multi-sensor tracking in multiple-target environments of any size. Computational tractability for such a system requires maximal exploitation of the inherent parallelism, and the method employs an object-oriented system architecture to partition the task accordingly. Partitioning by target track offers the greatest scope for processing concurrency and forms the basis of the design. The approach involves the allocation of independent, asynchronous logical processes to track individual targets on a one-to-one basis. The tracking processes each contain identical track initiation, data correlation, and tracking algorithms, and are entirely independent of each other. Incoming sensor data is assessed, by all the tracking processes independently and concurrently, for relevance to their target tracks and, if relevant, is then used to update the track. Dedicated data routing processes are used to optimise the data throughput. An important feature of the target-oriented architecture is that system growth is easily achieved by the straightforward replication of the individual tracking processes. Such growth demands only a linear increase in the number of such processes with the size of the environment. This structure, being inherently modular, also lends itself to a dispersed, multi-platform implementation. The architecture has been extended to track clusters of targets as well as individuals, where all the cluster members may or may not be individually resolvable. Under contract to the US DoD SDIO, this concept has been refined to perform multi-target, multi-sensor track extraction under a variety of conditions, particularly when some or all of the sensors are devoid of an inherent tracking capability.
This paper describes and quantifies the benefits of soft-decision sensors and probabilistic data fusion relative to hard-decision sensors and nonnumerical (e.g., Boolean logic) data fusion. Hard sensors measure signals and return yes/no responses (declarations) based upon decision criteria within each sensor. Soft sensors return a measure of confidence (such as a probability) that quantifies the uncertainty in detection and/or identification. These soft responses are integrated via a fusion algorithm. The composite confidence derived by fusion from all sensors is compared against a single decision criterion to make the detection/identification declaration. A soft sensor suite with Bayesian fusion is shown to provide a 30 percent increase in range at identification. This occurs only when the probabilistic uncertainty regions for the sensor measurements overlap, meaning that more than one sensor is providing probabilistic measurements at a given range for the particular target parameters.
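Soft-decision fusion of this kind can be sketched as a log-odds (likelihood-ratio) combination of per-sensor confidences under a conditional-independence assumption. Treating each reported probability p as carrying a likelihood ratio p/(1−p) is a simplifying assumption of the sketch, not the paper's fusion algorithm.

```python
# Sketch of Bayesian fusion of soft sensor confidences: multiply per-sensor
# likelihood ratios onto the prior odds, then convert back to a probability.
# Sensors are assumed conditionally independent given the target hypothesis.

def fuse_soft(probabilities, prior=0.5):
    """Posterior target probability from a list of per-sensor confidences,
    each interpreted as carrying likelihood ratio p / (1 - p)."""
    odds = prior / (1.0 - prior)
    for p in probabilities:
        odds *= p / (1.0 - p)
    return odds / (1.0 + odds)
```

Two weak confidences of 0.6 fuse to about 0.69, above either alone, illustrating how soft responses that would each fail a hard per-sensor threshold can still combine into a confident declaration.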