The DARPA MUSIC program is presently collecting data to support multi-spectral infrared target detection in clutter.
The plan of the MUSIC program is discussed first, followed by the theoretical basis of multi-spectral and recursive
moving target indicator (RMTI) processing. An example using data from the MUSIC sensor is presented. In this
example, spectral-spatial processing of two bands is compared to registration and temporal processing of a single band.
Terrain images taken from an unstabilized platform by a line CCD
camera are often distorted as a result of angular motion of the line of
sight (LOS). Distortions appear as stretching, shrinking and twisting
of an image. Distorted images can be corrected by image processing
techniques. Reconstruction of distorted images is based solely on
information contained in the image and does not include inertial
measurements of the LOS angular motion.
The purpose of this work was to provide a usable image of the
scanned terrain, containing minimal distortions, to enable the user
to orient correctly within the image and to detect and recognize
viewed objects. The reconstruction algorithm is based on cross-correlation
between sequential image lines.
Correlation coefficients calculated for each sampled image line
serve to determine the three components of the displacement vector,
namely, lateral shift, advance rate and rotation of image lines.
Reversal of the camera scanning direction is detected by locating
the image line that constitutes the symmetry line of a mirror image in
the received image.
The performance level of the algorithm was tested on simulated
distorted images, and real images taken by a line CCD camera. The
distortion levels at which the algorithm was tested were as follows: lateral shift
of sequential image lines, 0 to 3 pixels/line, with a total accumulated
shift of up to 250 pixels; sampling rate density of image lines, 0.5 to
10 lines/pixel; and rotation rate of image lines, 0 to 0.05 degrees/line,
with a total accumulation of up to 10 degrees.
Reconstruction errors of 5-7% of a pixel in lateral shift
correction, 15% in line sampling rate determination, and 0.02 to 0.08
degrees in line rotation calculation were obtained. The ability to
successfully reconstruct distorted images with various distortion
levels within the expected dynamic range of real systems was
demonstrated. The algorithm was also shown to perform successfully
under conditions of different terrain textures.
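As a rough illustration of shift estimation by cross-correlation between sequential scan lines, the following sketch (function name and all parameters are hypothetical, not the authors' algorithm) recovers an integer lateral shift between two lines:

```python
import numpy as np

def lateral_shift(line_a, line_b, max_shift=5):
    """Estimate the integer shift of line_b relative to line_a by
    maximizing the normalized cross-correlation (hypothetical helper)."""
    a = (line_a - line_a.mean()) / line_a.std()
    best_s, best_c = 0, -np.inf
    for s in range(-max_shift, max_shift + 1):
        b = np.roll(line_b, -s)              # undo a hypothesized shift of s
        b = (b - b.mean()) / b.std()
        c = float(np.mean(a * b))
        if c > best_c:
            best_s, best_c = s, c
    return best_s

rng = np.random.default_rng(0)
line1 = rng.standard_normal(256)
line2 = np.roll(line1, 2) + 0.05 * rng.standard_normal(256)
print(lateral_shift(line1, line2))           # prints 2
```

A real system would interpolate for sub-pixel shifts and estimate advance rate and rotation jointly; this sketch covers only the lateral-shift component.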
A new approach is described for detecting small targets in natural background clutter using multiple
time and/or spectral-band imagery. The target detection problem is formulated as a composite hypothesis
test in which the alternative hypothesis set is parameterized by the target amplitude and location within the
image set. Thus, the size and shape of the target are assumed to be known, but nothing is assumed for the
amplitude or trajectory of the target. The probability density function for the image set is taken to be a
multivariate normal density whose covariance is a function of the clutter spatial, temporal, and/or spectral
degrees of freedom. The assumption of small targets allows the clutter statistics to be estimated directly
from the input imagery. The resulting processor is a generalized likelihood ratio test in which the
unknown target amplitude and location are replaced by their maximum likelihood estimates. The
assumption of spatially stationary background clutter allows the test statistic to be expressed as a set of
spatially filtered images in which each spatial frequency component is processed independently in time
and/or spectral band by a linear transformation operating on the input image set. The algorithm can be
implemented either as a block or recursive processor. The latter form requires less storage and allows the
clutter statistics to be adaptively updated frame by frame. The approach has been successfully applied to a
variety of synthetic and real imagery, including HI-CAMP I and II data. Performance estimates and
samples of unclassified processed imagery are presented.
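The generalized likelihood ratio idea above, in which the unknown target amplitude and location are replaced by their maximum likelihood estimates, can be sketched in one dimension with white noise (a simplification of the multivariate clutter model; all names and values are illustrative assumptions):

```python
import numpy as np

def glr_statistic(x, s):
    """GLR statistic for a known-shape, unknown-amplitude target at an
    unknown location in white noise: substituting the ML amplitude and
    location leaves a normalized matched-filter peak."""
    mf = np.correlate(x, s, mode="valid")    # s^T x at every location
    return np.max(mf ** 2) / np.dot(s, s)

rng = np.random.default_rng(1)
s = np.array([0.25, 1.0, 0.25])              # assumed target shape
noise = rng.standard_normal(200)
x = noise.copy()
x[100:103] += 8.0 * s                        # target injected at pixel 100
print(glr_statistic(x, s), glr_statistic(noise, s))
```

The statistic is markedly larger when the target is present; a threshold on it fixes the false alarm rate.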
Intercepting an object in mid-flight is quite a complex task but we see examples of it in nature all the time. Insects recognize
prey, follow it, and intercept it using relatively simple neural systems. At the same time, sophisticated computer algorithms
fail to match the degree of robustness and accuracy displayed by insects. This paper provides a scheme to utilize the optic
flow characteristics and the geometrical properties of the sensor configuration to perform fast, reliable target tracking, and
system stabilization, from crude visual inputs. Artificial neural networks are used to implement the algorithms because of
their excellent generalization capabilities which can provide an adequate response in situations where a response would not
be expected using conventional computer algorithms. Other advantages offered are the speed of computation, the simplicity
of schemas and algorithms, and robustness.
The detection of small targets in infrared (IR) clutter is a
problem of critical importance to Infrared Search and
Track (IRST) systems. This paper presents techniques for
analyzing and improving the detection performance of
IRST systems. Only spatial, or single-frame, processing
will be addressed. For clutter with spatially slowly varying
statistics, our approach is based on linear filtering. Models
of target and clutter are developed and used to analyze
matched filter performance and sensitivity. This sensitivity
analysis is used to improve filter bank design. A clutter
classification scheme which can separate clutter of different
types is presented. Finally, to improve system performance
in the presence of large intensity gradients, such as
cloud edges, an improved adaptive threshold scheme is presented.
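A minimal version of adaptive thresholding against a local background estimate (not the improved scheme of the paper; the window size and threshold multiplier are illustrative assumptions) might look like:

```python
import numpy as np

def adaptive_detect(image, k=5.0, half=8):
    """Flag pixels exceeding a locally estimated mean + k*std, with the
    cell under test excluded from the background estimate."""
    det = np.zeros(image.shape, dtype=bool)
    rows, cols = image.shape
    for i in range(half, rows - half):
        for j in range(half, cols - half):
            win = image[i - half:i + half + 1, j - half:j + half + 1].copy()
            centre = win[half, half]
            win[half, half] = np.nan         # exclude cell under test
            det[i, j] = centre > np.nanmean(win) + k * np.nanstd(win)
    return det

rng = np.random.default_rng(2)
img = rng.standard_normal((40, 40))
img[20, 20] += 10.0                          # unresolved point target
print(np.argwhere(adaptive_detect(img)))
```

Near strong gradients such as cloud edges, the local standard deviation inflates and raises the threshold, which is the basic mechanism an improved scheme refines.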
The detection of weak targets with an infrared surveillance system is often
complicated not only by a severe clutter environment but also by background and
platform motion effects. Conventional sequentially applied algorithms combining
frame-to-frame registration, clutter rejection filtering, and adaptive thresholding
detection simply overwhelm the track processor in a weak target scenario, due to
the required lowering of detector block thresholds. To address this problem, we
have developed a 3-D filter/"track-before-detect" signal processing approach in
which an adaptive spatio-temporal filter is used for clutter suppression and a
Viterbi "track-before-detect" block is used for noncoherent target integration.
This paper discusses a 3-D adaptive filtering technique which combines temporal and
spatial filtering (in both azimuth and elevation directions) to achieve
simultaneous frame-to-frame registration, background clutter suppression, and
target preservation/enhancement. In addition, this 3-D filtering procedure whitens
the data, thus greatly facilitating the task of the "track-before-detect" processing
block. Unlike other commonly employed procedures, this technique neither entails the
suboptimal sequential application of filtering procedures (e.g., spatial followed
by temporal filtering) nor demands very accurate subpixel-level registration or
exact knowledge of the target's velocity characteristics. The only requirements are
that data frames be roughly aligned (so the offsets are contained within the
filter window) and that the moving target indicator (MTI) assumption is satisfied.
In this paper, simulation results of the 3-D filtering procedure using real, scanned
sensor array data are presented, and the procedure's performance and implementation
complexity are traded off against adaptive spatial filtering, adaptive temporal
filtering, and sequentially applied temporal/spatial filtering techniques. Also,
modification and simulation results are presented for an extension of the 3-D
adaptive spatio-temporal filtering technique which accommodates both MTI and non-MTI cases.
A hierarchical data structure is presented to organize the various types of
informational entities available from infrared (IR) sensor data. This provides a
common framework in which to discuss alternate techniques applicable to the
problem of automatic acquisition and tracking of small targets. Of these
techniques, the Hierarchical Pattern Recognition (HPR) algorithm processes all
available information for the acquisition. Targets of interest are typically
unresolved by sensor optics at acquisition ranges and appear against highly
cluttered background scenes. Performance of the HPR algorithm is demonstrated
by simulation using various types of pattern classifiers, with and without the
benefit of feature data inputs representing scene context. The Viterbi
algorithm is utilized to resolve ambiguous observation-to-track pairings while
tracking an acquired target. Its performance is characterized by the expected
number of frames required to resolve such ambiguities.
Studied in this paper is the problem of achieving optimum MTI detection performance in strong clutter of unknown spectrum, when the set of data available for the estimation of clutter statistics is small due to a severely nonhomogeneous environment. A new adaptive implementation, called the Doppler Domain Localized Generalized Likelihood Ratio processor (DDL-GLR), is proposed and its detection performance derived. It is shown that the DDL-GLR is a data-efficient implementation of the high-order optimum detector, with several advantages of practical importance over other adaptive processors.
Optical autodyne detection is a direct detection method for measuring relative Doppler frequency shifts induced by
a laser-illuminated object. The method is of interest in laser radars for its relative simplicity and robustness. In this paper
we discuss the estimation of ranges, relative velocity, and amplitudes for a two-point target, when laser backscattered
radiation is collected with a photon-bucket receiver and registered by an incoherent autodyne detector. An analytical
framework is developed to establish the fundamental limits on the resolution of such a system, and to obtain a quantitative
understanding of the dependence of estimation accuracy on target separations, relative velocity, and on the number of
photons available. The value of a priori knowledge for obtaining a desired estimation accuracy of a parameter of interest
is discussed for selected cases. The analysis is carried out for Poisson photon statistics. The performance of various
estimators is evaluated by utilizing the Cramér-Rao bound, for both signal and background shot noise limited conditions.
The results presented have applications for robust laser radars, and contribute to a better understanding of the performance
and potential of an autodyne detection system.
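For Poisson photon statistics, the Cramér-Rao bound on a scalar parameter follows from the Fisher information I(θ) = Σᵢ (∂λᵢ/∂θ)² / λᵢ, where λᵢ are the mean counts. A toy amplitude-estimation example (the blur profile and photon counts are made-up assumptions, not the two-point autodyne model of the paper):

```python
import numpy as np

def poisson_crb(dlam, lam):
    """Cramér-Rao bound for one scalar parameter under independent
    Poisson counts with means lam and sensitivities dlam = d(lam)/d(theta)."""
    return 1.0 / np.sum(dlam ** 2 / lam)

x = np.arange(-5, 6)
psf = np.exp(-0.5 * x ** 2 / 2.0)            # assumed blur profile
amp = 50.0                                   # mean signal photons (illustrative)
crb_low_bg = poisson_crb(psf, amp * psf + 5.0)    # d(lam)/d(amp) = psf
crb_high_bg = poisson_crb(psf, amp * psf + 50.0)
print(crb_low_bg, crb_high_bg)
```

As expected, raising the background shot-noise level loosens the bound on the amplitude estimate, mirroring the signal- versus background-limited regimes discussed above.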
Linear, finite-impulse-response (FIR) filters are efficient for detecting point sources in data generated by a
scanning detector array. In particular, matched filters have been widely used for detection and initial
estimation of isolated sources. When separate sources have overlapping signals the matched filter does not
work as well. Others have applied deconvolution filters to this case and shown good performance with high
signal-to-noise ratios (SNR). The deconvolution filter is examined here in the context of general FIR filters.
The novel aspect of this paper is the definition and application of an optimality concept for resolving arbitrary
numbers of sources. A "resolution" attribute is defined which measures the difference between filter peak
response and minimum response between adjacent peaks. A "max_min" filter is derived which has maximal
resolution for given requirements on SNR and source separation distance. Its performance is compared with that of the matched and deconvolution filters.
In detection systems (e.g., radar), the effects of interference patterns (clutter) from the environment
are partially unknown and/or varying in terms of their statistical properties. In such
instances, the performance of the optimal detector deteriorates significantly, and a nonparametric
or constant false alarm rate (CFAR) detector which is designed to be insensitive to changes in the
underlying density functions of the clutter is needed. Order Statistic (OS) filters have been shown
to perform effectively in detection systems when the observations are independent and identically
distributed. When the assumptions about the observations are violated or the underlying distribution
of the clutter is altered, only certain ranks of the OS filter show robust performance. This
study analyzes the performance of OS filters in the framework of nonparametric detection for
cases in which the observations do not contain equivalent statistical information or observations
are not independent. Through computer simulation, the extent of robustness that can be obtained
from different ranks is illustrated.
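A basic one-dimensional order-statistic CFAR sketch, in which each cell is compared with a scaled rank of its sorted reference window (window sizes, rank, and scale factor are illustrative choices, not values from this study):

```python
import numpy as np

def os_cfar(x, guard=2, half=8, rank=12, scale=12.0):
    """OS-CFAR: threshold each cell against a scaled order statistic of
    its reference window (guard cells around the cell under test excluded)."""
    det = np.zeros(len(x), dtype=bool)
    for i in range(half + guard, len(x) - half - guard):
        lead = x[i - guard - half:i - guard]
        lag = x[i + guard + 1:i + guard + 1 + half]
        window = np.sort(np.concatenate([lead, lag]))
        det[i] = x[i] > scale * window[rank - 1]   # k-th order statistic
    return det

rng = np.random.default_rng(3)
clutter = rng.exponential(1.0, 300)          # heavy-tailed clutter power
clutter[150] += 60.0                         # strong target
print(np.flatnonzero(os_cfar(clutter)))
```

Because the threshold comes from a middle rank rather than the mean, a strong interferer in the reference window is pushed to the top ranks and barely perturbs the threshold, which is the robustness property examined above.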
Algorithms involving ranking are used to suppress noise induced in IR
detectors by gamma radiation, based on enhanced response to gamma
photons relative to other excitation. An analytic expression for the
characteristic function of detector response to signal plus Gaussian
noise plus gamma noise is derived; it is used to compute second
moments of order statistics corresponding to ranked groups of
redundant detector samples of an image pixel. These are used to
illustrate a simple method of quantifying the noise-suppression potential
of simple ranking algorithms, as a function of gamma noise and ranking
parameters. Ranking suppresses noise from all sources, and selection
of low-order ranked data yields substantial SNR gain when the
probability of one gamma photon per detector sample approaches 1.0.
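The low-order rank-selection idea can be demonstrated on synthetic redundant samples (the spike rate, spike amplitude, and sample counts are arbitrary assumptions, not the derived characteristic function):

```python
import numpy as np

rng = np.random.default_rng(4)
n_pix, k = 10000, 8                          # k redundant samples per pixel
signal = 3.0
samples = signal + rng.standard_normal((n_pix, k))
gamma_hit = rng.random((n_pix, k)) < 0.3     # assumed gamma-event probability
samples = samples + gamma_hit * rng.exponential(20.0, (n_pix, k))

mean_est = samples.mean(axis=1)              # plain averaging
rank_est = np.sort(samples, axis=1)[:, 1]    # low-order rank (2nd smallest)
print(np.abs(mean_est - signal).mean(), np.abs(rank_est - signal).mean())
```

Averaging is badly biased by the one-sided gamma spikes, while a low-order rank almost always lands on an uncontaminated sample, trading a small Gaussian bias for large spike rejection.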
Temperature discrimination of closely spaced objects (CSO's) may be a difficult problem for passive sensors with
limited spatial resolution. In this paper, we establish limits on the accuracy of temperature estimation of two closely
spaced point targets having different temperatures, when such targets are observed in several spectral bands. The effect of
a priori knowledge (e.g., emissivities, areas, earthshine) on temperature estimation is discussed. An analytical framework
is developed to establish the fundamental limits on temperature resolution of a passive system, and to obtain a quantitative
understanding of the dependence of estimation accuracy on parameters describing the targets (e.g., temperatures,
emissivities), and on the number of photons available. We examine the sensitivity of temperature estimation to target
parameters and the selection of the number and location of the spectral bands in which the measurements are made. The
advantage of partial resolution of targets in one or more spectral bands is examined. The analysis is carried out for
Poisson photon statistics. The Cramér-Rao bounds (CRB) are computed for the estimation of various parameters
describing a two-point source, and the sensitivity of the CRB to different parameter values, changes in the number of
unknown parameters, and the selection of spectral bands is examined. The results presented provide a more
comprehensive definition of temperature resolution and can be useful in astronomy, multispectral scene analysis, and other
commercial and military applications.
A new pipeline approach is addressed for the task of detecting and tracking pixel-sized moving targets
from a time sequence of dynamic images of a high-noise environment. The trajectories of the moving
targets are unknown, but continuous and smooth. The image sequence contains significant, randomly
drifting background clutter, and is also contaminated by random sensor noise. The Pipeline Target Detection
Algorithm (PTDA) uses the temporal continuity of the smooth trajectories of moving targets, and
successfully detects and simultaneously tracks all the target trajectories by re-constructing them from the
time sequence of noisy images in a single target frame in real time. The pipeline approach breaks the
straight-line trajectory constraint that most other algorithms require for similar tasks, and detects
and tracks trajectories of arbitrary shape as long as they are continuous and smooth. The pipeline
fashion of the algorithm is a completely parallel distributed processing (PDP) type process, and therefore is
highly time efficient. It also matches the pattern in which real image sequences are acquired, so it is ideal for
real-time target detection and tracking.
This paper describes the Knowledge-Based Tracking (KBT) algorithm for which a real-time flight test demonstration
was recently conducted at Rome Air Development Center (RADC). In KBT processing, the radar signal in
each resolution cell is thresholded at a lower than normal setting to detect low RCS targets. This lower threshold
produces a larger than normal false alarm rate. Therefore, additional signal processing including spectral filtering,
CFAR and knowledge-based acceptance testing are performed to eliminate some of the false alarms. TSC's
knowledge-based Track-Before-Detect (TBD) algorithm is then applied to the data from each azimuth sector to
detect target tracks. In this algorithm, tentative track templates are formed for each threshold crossing and
knowledge-based association rules are applied to the range, Doppler, and azimuth measurements from successive
scans. Lastly, an M-association out of N-scan rule is used to declare a detection. This scan-to-scan integration
enhances the probability of target detection while maintaining an acceptably low output false alarm rate.
For a real-time demonstration of the KBT algorithm, the L-band radar in the Surveillance Laboratory (SL)
at RADC was used to illuminate a small Cessna 310 test aircraft. The received radar signal was digitized
and processed by an ST-100 Array Processor and VAX computer network in the lab. The ST-100 performed
all of the radar signal processing functions, including Moving Target Indicator (MTI) pulse cancelling, FFT
Doppler filtering, and CFAR detection. The VAX computers performed the remaining range-Doppler clustering,
beamsplitting and TBD processing functions. The KBT algorithm provided a 9.5 dB improvement relative to
single scan performance with a nominal real time delay of less than one second between illumination and display.
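The M-association-out-of-N-scan declaration rule can be sketched as a sliding window over single-scan threshold crossings (a hypothetical helper, not TSC's implementation):

```python
from collections import deque

def m_of_n(hits, m=3, n=5):
    """Declare a detection on any scan where at least m of the last n
    single-scan threshold crossings are present (M-out-of-N rule)."""
    window = deque(maxlen=n)
    declared = []
    for scan, hit in enumerate(hits):
        window.append(hit)
        if sum(window) >= m:
            declared.append(scan)
    return declared

# 1 = threshold crossing in the tracked cell on that scan
print(m_of_n([0, 1, 0, 1, 1, 0, 0, 1, 0, 0]))   # [4, 5, 7]
```

Isolated single-scan crossings never satisfy the rule, which is how scan-to-scan integration keeps the output false alarm rate low despite the lowered detection threshold.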
Sensitivity to low-observable targets, given a sequence of N frames of pre-processed
(e.g., spatially/temporally filtered) but unthresholded data, may be enhanced by selecting,
from among all possible paths traversing these frames, those containing any
indication that a target may be present. However, since only upper bounds on
target velocities are usually known, explicit formulation of all feasible paths (and
accompanying confidence factors) becomes a formidable task even for small values of
N. In this paper we address this problem by utilizing a Dynamic Programming
("Viterbi") algorithm to efficiently generate and evaluate, in an unthresholded
fashion, all possible paths through the N frames. Trajectories are traced
recursively by assigning accumulated trajectory scores to each entry in a given
frame of data so as to maximize that entry's updated score. This Viterbi Track-
Before-Detect procedure differs from standard Multiple Hypothesis Testing (MHT)
methods in two ways. First, while in the MHT method the number of plausible paths
grows exponentially (hence the need for introducing thresholds), in the Viterbi
approach it remains constant, equal to the number of data entries in a frame.
Second, whereas in the MHT method trajectories are updated by selecting for each
existing trajectory the best current (thresholded) detection, in the Viterbi
approach each current data value is updated with the best trajectory up to that
point. Simulation results show that application of the Viterbi Track-Before-Detect
algorithm over ten frames of IR data yields roughly a 7 dB improvement in detection
sensitivity over conventional thresholding/peak-detection procedures.
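A one-dimensional caricature of the Viterbi Track-Before-Detect recursion, in which each entry's score is updated with the best trajectory reaching it (the wrap-around at the array edges, the frame count, and the target amplitude are simplifications for illustration):

```python
import numpy as np

def viterbi_tbd(frames, vmax=1):
    """Each pixel's accumulated score is its current value plus the best
    predecessor score within +/- vmax pixels (np.roll wraps at the edges)."""
    score = frames[0].astype(float).copy()
    for frame in frames[1:]:
        best_prev = np.full_like(score, -np.inf)
        for d in range(-vmax, vmax + 1):
            best_prev = np.maximum(best_prev, np.roll(score, d))
        score = frame + best_prev
    return score

rng = np.random.default_rng(5)
frames = rng.standard_normal((10, 128))
for t in range(10):
    frames[t, 30 + t] += 2.5                 # dim target drifting 1 pixel/frame
final = viterbi_tbd(frames)
print(int(np.argmax(final)))                 # near the target's final pixel, 39
```

The per-frame work stays fixed at one update per pixel per allowed velocity, which is exactly the constant-cost property contrasted with MHT above.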
A dynamic programming (DP) technique has been developed for the detection of subpixel-sized, low-SNR targets
observed by mosaic imaging sensors. The primary advantages of DP are its sensitivity to weak targets along with its robustness
to target maneuvers and sensor instabilities. Such enhancements are achieved by performing target data association and
detection in a single optimization procedure. To the extent that the target motion and noise behavior can be accurately
modeled, the DP algorithm is shown to be optimal for this task. A prototype IR tracking system based on the DP approach
was developed and tested for a step-staring IR camera application. Performance analysis indicates a sensitivity improvement
of several dB over conventional sequential detection tracking approaches.
This paper describes the evolutionary development of adaptive signal processing algorithms which utilize
spatial and spectral information provided by a passive infrared sensor to enhance the detectability of targets in
clutter. Key parameters affecting the performance of multi-spectral detection processors are identified and
discussed. Adaptive filtering algorithms are presented which can achieve near-optimum detection performance
with no prior knowledge of the target and background spectral properties.
Entropy and its generalization, the Kullback-Leibler measure, can be useful in characterizing the relationships
that exist between measurements. The relationships that exist when a target is present
provide a means of distinguishing these measurements from a set of measurements due to noise alone.
We utilize these ideas in developing a detection procedure that first develops a score based on measurements
using the Significance Test and then calculates the relationship between the measurements.
This relationship is the "effective number" of measurements entering into the Significance Test score
and is computed from the entropy of the normalized measurements; both a high score and "effective
number" are needed for a detection. The "effective number" is used to discriminate against false
detections, i.e., high scores, caused by a relatively few, strong, noisy measurements. We compare the
performance of this detector to more conventional detectors in the problem of detecting a target in
A neural network solution to the data association problem in multitarget tracking is presented. This requires position
and velocity measurements of the targets over two consecutive time frames. A quadratic neural energy function results that
is suitable for an optical processing implementation. Realistic target trajectories are simulated, yielding several different
scenarios with spurious measurements (clutter) and measurement noise, which are used to test the tracking ability of the
neural network. Simulation results are presented, and an overall tracking system using the neural net, Kalman filters, and a
Hough transform subsystem is discussed.
Conventional practice in single-sensor passive track is to fit azimuth-elevation object observations with polynomials in the
early stages of track initialization. At least three and usually more observations are fitted to quadratic polynomials before they
are accepted for testing with a six dimensional model. This paper develops an alternative to these polynomials based on
closed-form solutions of the trajectories in angle-space. This paper is applicable when both the sensor and target are in free
fall. The approach relies on a flat-earth assumption to achieve the solutions. Experimental comparisons are made with the
polynomial fit method.
This paper describes a PC software package for the following problem : We are given multiple
passive sensors, each with a number of detections at a given time. With each detection, there is an
associated line-of-sight measurement originating from a source. The source can be either a real target, in
which case the measurements are azimuth and elevation angles of the target, plus some measurement
noise, or a spurious one, i.e., a false alarm. Position estimates of targets can be formed by associating
measurements originating from it. Mathematically, the measurement-target association problem leads to
a generalized assignment problem. The problem of forming position estimates of multiple targets in a
dense cluster, from passive sensor measurements at a given time, requires at least three sensors.
However, for three sensors, the three-dimensional assignment is known to be NP-complete, i.e., the
complexity of the optimal algorithm increases exponentially with the size of the problem. The
association problem is solved as a maximum likelihood estimation procedure; the likelihood function is
maximized through the use of a near-optimal, iterative, and polynomial-time three dimensional
assignment algorithm which employs a Lagrangian relaxation technique that successively solves a series
of (polynomial time) generalized two-dimensional assignment subproblems. The algorithm is coded in
Fortran and available as an interactive PC software package PASSDAT. In this paper we present
performance results and a graphical scenario description of a representative test case solved by PASSDAT.
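The generalized two-dimensional assignment subproblem that the relaxation solves repeatedly can be illustrated, for a toy 3x3 case, by exhaustive search (the costs are invented; practical solvers use polynomial-time auction or Hungarian methods instead of enumeration):

```python
from itertools import permutations

def best_assignment(cost):
    """Optimal 2-D assignment of n rows to n columns by brute force;
    this stands in for the polynomial-time subproblem solver."""
    n = len(cost)
    best_perm, best_cost = None, float("inf")
    for perm in permutations(range(n)):
        c = sum(cost[i][perm[i]] for i in range(n))
        if c < best_cost:
            best_perm, best_cost = perm, c
    return best_perm, best_cost

cost = [[1.0, 4.0, 5.0],                     # made-up association costs
        [6.0, 2.0, 7.0],
        [8.0, 9.0, 3.0]]
print(best_assignment(cost))                 # ((0, 1, 2), 6.0)
```

The three-sensor (3-D) version adds one more index per assignment and is NP-hard, which is why the Lagrangian relaxation reduces it to a series of these 2-D subproblems.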
This paper is generally related to analytic methods for evaluating tracking performance, in particular
for predicting track accuracy in dense target environments. A very simple analytic expression
is derived to predict the effects of mis-associations on track accuracy. The paper analyzes an optimal
track-to-measurement assignment algorithm in track continuation phases, i.e., when tracks are well established.
This paper discusses the application of a particular implementation of Multiple Hypothesis Tracking (MHT)
to the problem of detection and tracking of dim targets in a heavy clutter or false alarm background. The MHT
method and the performance improvement associated with MHT for these applications are well documented [1-6],
but the actual implementation has been limited due to the computational load and complexity associated with "traditional"
implementations. We present an approach (Structured Branching) that offers significant computational
savings as compared with alternative approaches, and can maintain hundreds of "possible" tracks that are initiated
in a dense clutter or false alarm background without overwhelming computational or memory requirements. Further,
this method can be applied to much more limited implementations according to the computational resources
available; there is minimal "overhead" associated with Structured Branching (SB) since hypotheses are not propagated
explicitly. The SB algorithm is described, highlighting the ways in which computational savings are achieved,
and simulation results are presented. Then, approximate techniques are developed for predicting the performance of
MHT (any implementation, not just SB), and results comparing predicted performance with simulation results are presented.
Over the last ten years a number of computer programs have been developed which do symbolic and algebraic operations as well
as numeric calculations and graphics. Mathematica is the latest entry into this arena. This report gives a brief introduction into its
capabilities and illustrates its power for engineering applications, including an example of static covariance analysis.
The problem of real-time measurement of the temporal locations and trajectories of multiple objects in
space is discussed. A correlation recognition method for real-time interactive measurement
is provided to solve for the temporal locations of object points in space. A corresponding
mathematical model is established. The relation of the error to the blind area, and a fast
algorithm, are discussed. Finally, the experimental results are given.
In target environments that include Electronic Countermeasures (ECM), where
the availability of radar range measurements is severely reduced, an integration
of ESM into a multi-sensor tracker provides an inexpensive but effective approach
to augmenting radar tracking systems. Tracking filter equations are presented for a
multi heterogeneous-sensor tracking system consisting of radars and ESM sensors.
Measurement-time-based track updating is performed, due to the non-periodicity of ESM
measurement acquisition times, instead of the usual track-based updates. Track
updating using ESM measurements is accomplished via an EKF-based algorithm, and
using radar measurements via a tracking algorithm. The integration of data from
different sensor types is a logical response to increasingly hostile airborne
threats. The technique described in this paper is a straightforward and cost-effective
approach for accomplishing that integration.
Plot/track association consists of assigning radar plots to predicted track positions and is
an important feature of all track while scan systems. We demonstrate the feasibility of
applying neural network technology to the plot/track association problem in such a way
as to achieve good global solutions.
The plot/track association problem can be structured in a basic framework very similar
to that of the classic Travelling Salesman Problem (TSP). Hopfield optimization networks
have been applied to the TSP by various researchers, but with varying degrees of success
because of recurring instability problems. The network presented here seeks to minimize
a global cost which is a function of the distances between plots in a given scan of
data and the predicted track positions. Additionally, in this paper a novel technique is
introduced for helping to alleviate the instability of Hopfield networks.
This paper describes an application of Bayesian Networks, or Influence Diagrams, to the multitarget tracking
problem of a single, angles only, scanning sensor. The Bayesian Network combines the continuous track state
vectors and discrete report-to-track association hypotheses into one network which is then used to perform track
state vector prediction and update, and to generate, score and prune association hypotheses. The advantages of
operating on the network are discussed via an example in which a track resolves into two tracks. The example
demonstrates that the network operations provide a highly flexible, numerically stable, computationally efficient
mechanism for calculating the state vectors, covariances, and intertrack correlations of the resolved tracks. It
is shown that these intertrack correlations, which are somewhat cumbersome to maintain in the usual track
filter formulations, are automatically maintained in the network formulation and can improve track accuracy and
This paper develops a group tracking algorithm for the tracking of clusters of closely spaced targets as viewed by passive
sensors. This problem is complex because the size and shape of the observed cluster will differ from sensor to sensor. Since
a passive sensor provides a projection of the objects, sensors in different locations will provide different projections of
the 3-dimensional (3-D) shape of a cluster. Furthermore, the number of resolved objects in a cluster can vary, as closely
spaced targets may appear to a sensor as a single extended object.
In order to track the target clusters with multiple sensors, a method is needed to characterize the size, shape and location
of each cluster. The method described in this paper models the target cluster in 3-D as an ellipsoid, with the projection in 2-D
being an ellipse. The centroid and parameters of the ellipsoid are tracked over time using measurements from widely spaced
passive sensors. The filtering methods are described and performance is presented based upon tracking simulations.
Application of this method to tracking extended objects with multiple sensors is also discussed.
In this paper target tracking as a hierarchical information extraction
process is defined and spatio-temporal factors in vision affecting motion
detection are briefly discussed. The relativistic aspects of motion perception
which quantify perceived extent, time, and velocity of moving objects are
introduced. A systems approach to the operator-display interaction is also
investigated and the role of the human operator as an optimal position and
velocity estimator and controller is presented.
This paper treats the problem of source dynamic motion evaluation in underwater applications using recursive prediction error estimation techniques. The issue of compensating for underwater motion effects arises in a number of areas of current interest such as control and operations of autonomous remotely operated vehicles, underwater seismic exploration, and buoy wave data analysis. Earlier treatments of the problem relied on frequency response methods and Kalman filtering. The present paper discusses the compensation problem using an alternative discrete model of the process and proposes use of the recursive prediction error algorithm for its solution. The algorithm requires less knowledge of noise statistics than Kalman filtering and thus provides an attractive alternative to it.
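The abstract does not give the process model, but the update structure of the recursive prediction-error family can be sketched with its simplest member, recursive least squares. The regressor, forgetting factor, and data below are hypothetical, chosen only to show the gain/error/covariance recursion:

```python
import numpy as np

def rpe_step(theta, P, phi, y, lam=1.0):
    """One recursive prediction-error update (recursive least squares form).
    theta: parameter estimate, P: gain (inverse-information) matrix,
    phi: regressor vector, y: new measurement, lam: forgetting factor."""
    eps = y - phi @ theta                     # prediction error
    K = P @ phi / (lam + phi @ P @ phi)       # gain vector
    theta = theta + K * eps                   # parameter update
    P = (P - np.outer(K, phi) @ P) / lam      # gain-matrix update
    return theta, P

# Identify y = 2*x + 1 from noise-free samples (illustrative data)
theta, P = np.zeros(2), 1e3 * np.eye(2)
for x in np.linspace(0.0, 1.0, 50):
    phi = np.array([x, 1.0])
    theta, P = rpe_step(theta, P, phi, 2.0 * x + 1.0)
# theta converges toward [2, 1]
```

Note that only the regressor and the measurement enter each step; no process- or measurement-noise covariances are required, which is the sense in which the method is simpler than Kalman filtering.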
An attempt is made to combine a deterministic digital filter and a stochastic
filter. The state equation is the standard form used for a Kalman filter derivation
and the output equation is that of a finite impulse response filter. An optimal
estimator is derived for this combined structure, called a finite impulse response
estimator (FIRE), which permits processing of a signal contaminated by deterministic
and random noises. Derivation of the FIRE utilizes the state augmentation technique
and the innovation technique. The proposed method is straightforward and
easy to implement, and it can be applied to areas such as time-varying signal processing
or target tracking where radar returns are contaminated by low-frequency
noises. Full derivation and a tracking application are presented.
The paper centers on the continued development of the symmetric measurement equation (SME)
filter developed by Kamen for track maintenance in multiple target tracking. In this approach there is no
need to correctly associate measurements and targets before target state estimation can take place. Rather
the data association problem is embedded in the process of target state estimation. The "first order"
version of the SME filter is an extended Kalman filter (EKF), and thus the computational requirements
for filter implementation are comparable to that for a standard Kalman filter. In addition, in contrast to
probabilistic data association filters, the estimator does not rely on the computation of probabilities for
correct measurement/target associations. The SME filter is based on a standard state model for the target
state trajectories. However, in contrast to existing approaches, the measurements are defined in terms of
nonlinear symmetric functionals of the target positions, except for one of the measurements which is
defined to be a scaled sum of the target positions. The measurement functions are defined so that in the
noise-free case, the target position vector for each coordinate can be determined up to a permutation of
elements from knowledge of the measurements. In this paper, we define the measurements in terms of
sums of products of the target coordinate positions. The performance of the SME filter is investigated
via a computer simulation of the six-target case.
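The "sums of products of the target coordinate positions" described above are the elementary symmetric polynomials, which determine the positions up to a permutation because the positions are the roots of the polynomial those sums build. A minimal sketch with a hypothetical three-target example (not the paper's filter, which embeds these functions in an EKF):

```python
import numpy as np
from itertools import combinations

def sme_measurements(x):
    """Elementary symmetric polynomials of the target positions x:
    s_k = sum of products over all k-element subsets (permutation-invariant)."""
    n = len(x)
    return [sum(np.prod(c) for c in combinations(x, k)) for k in range(1, n + 1)]

def recover_positions(s):
    """Positions are the roots of t^n - s1*t^(n-1) + s2*t^(n-2) - ..."""
    coeffs = [1.0] + [(-1) ** k * sk for k, sk in enumerate(s, start=1)]
    return np.sort(np.roots(coeffs).real)

x = [3.0, -1.0, 2.0]        # true target positions in one coordinate
s = sme_measurements(x)     # [4.0, 1.0, -6.0]: sum, pairwise products, product
xhat = recover_positions(s) # positions recovered, but only up to ordering
```

Because the measurements are invariant under any permutation of the targets, no measurement-to-target association is needed before filtering, which is the point of the SME formulation.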
An algorithm for tracking highly maneuvering targets is presented in this paper. The algorithm provides optimal state estimates for non-maneuvering targets and robust tracking performance during target maneuvers. This technique does not make any assumptions about the probability of jumps between target state models and requires fewer computations than the algorithm described in reference . Various simulation results are provided to show the performance of the algorithm for maneuvering and non-maneuvering targets and for targets with multiple maneuvers. Simulation results have shown good tracking performance for target maneuvers on the order of 10 g.
The Bayesian solution of the problem of tracking a target in random clutter gives rise to Gaussian mixture distributions, which are composed of an ever-increasing number of components. To implement such a tracking filter, the growth of components must be controlled by approximating the mixture distribution. A popular and economical scheme is the Probabilistic Data Association Filter (PDAF), which reduces the mixture to a single Gaussian component at each time step. However, this approximation may destroy valuable information, especially if several significant, well-spaced components are present.
In this paper, two new algorithms for reducing Gaussian mixture distributions are presented. These techniques preserve the mean and covariance of the mixture, and the final approximation is itself a Gaussian mixture. The reduction is achieved by successively merging pairs of components or groups of components until their number is reduced to some specified limit. Further reduction will then proceed while the approximation to the main features of the original distribution is still good.
The performance of the most economical of these algorithms has been compared with that of the PDAF for the problem of tracking a single target which moves in a plane according to a second-order model. A linear sensor measures target position, and the measurements are corrupted by uniformly distributed clutter. Given a detection probability of unity and perfect knowledge of initial target position and velocity, this problem depends on only two non-dimensional parameters. Monte Carlo simulation has been employed to identify the region of this parameter space where significant performance improvement is obtained over the PDAF.
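The pairwise merge underlying such mixture-reduction schemes can be sketched directly: replacing two weighted Gaussian components by one whose weight, mean, and covariance match the moments of the pair. This is the generic moment-preserving formula with illustrative numbers, not the paper's specific criterion for choosing which pair to merge:

```python
import numpy as np

def merge_pair(w1, m1, P1, w2, m2, P2):
    """Moment-preserving merge of two weighted Gaussian components.
    The result matches the mean and covariance of the two-component mixture."""
    w = w1 + w2
    m = (w1 * m1 + w2 * m2) / w
    d1, d2 = m1 - m, m2 - m
    # Within-component covariance plus the spread between the two means
    P = (w1 * (P1 + np.outer(d1, d1)) + w2 * (P2 + np.outer(d2, d2))) / w
    return w, m, P

w, m, P = merge_pair(0.5, np.array([0.0, 0.0]), np.eye(2),
                     0.5, np.array([2.0, 0.0]), np.eye(2))
# m = [1, 0]; the spread between the means inflates the x-variance to 2
```

Merging pairs this way keeps the overall mixture mean and covariance exact at every step, which is the property the two algorithms above are built on.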
High-level algorithms are needed to take tracking sensor reports as input and produce continuous, labeled, target
tracks as output. These algorithms will aid human track analysts in sorting through ever-increasing numbers of
sensors and targets. The action of such a system is similar to that of computer vision systems; that is, it performs
intelligent reasoning about sensed data to infer the characteristics of a spatially organized set of objects. Some
of the issues that must be resolved in such a system are knowledge representation, control schemes, uncertainty
management, and timeliness. Although progress has been made in establishing a framework for such a system,
further work is necessary before a totally automated track analysis system is realized.