This paper presents a new algorithm for tracking the spectrum of non-stationary signals. In general there is no law relating frequency and time; consequently, frequency-time curves are usually approach-dependent. The algorithm described here extends the well-known Levinson model for estimating the spectra of stationary signals. The signal parameters are estimated by fitting a model with time-varying coefficients, based on an exponential forgetting factor introduced into the autocorrelation function. The first operation is excitation with the input sequence y(n), n = 0, 1, 2, ..., N, to produce a scalar output; the second is a time update that increments the previous value by a scalar. To demonstrate the effectiveness of the algorithm, several numerical examples are considered: a chirp signal in white noise, two sinusoids, and speech signals.
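The recursive structure described above (an exponentially forgotten autocorrelation update feeding a Levinson recursion) can be sketched as follows. The function names, the placement of the forgetting factor, and the window handling are illustrative assumptions, not the paper's exact formulation:

```python
def update_autocorr(r, history, y_n, lam=0.98):
    """Recursively update lagged autocorrelation estimates r[0..p]
    with forgetting factor lam, given the newest sample y_n and a
    list `history` of the most recent samples (newest first)."""
    p = len(r) - 1
    history.insert(0, y_n)
    if len(history) > p + 1:
        history.pop()
    for k in range(min(len(history), p + 1)):
        r[k] = lam * r[k] + y_n * history[k]
    return r

def levinson(r):
    """Classic Levinson-Durbin recursion: AR coefficients a[0..p]
    (a[0] = 1) and prediction error power e from r[0..p]."""
    p = len(r) - 1
    a = [0.0] * (p + 1)
    a[0] = 1.0
    e = r[0]
    for m in range(1, p + 1):
        acc = r[m] + sum(a[j] * r[m - j] for j in range(1, m))
        k = -acc / e                      # reflection coefficient
        new_a = a[:]
        for j in range(1, m):
            new_a[j] = a[j] + k * a[m - j]
        new_a[m] = k
        a = new_a
        e *= (1.0 - k * k)
    return a, e
```

Re-running `levinson` on the refreshed autocorrelation at each sample is what lets the fitted AR spectrum track a non-stationary signal.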
An opto-electronic Gabor detector for transient signals is proposed. Using an acousto-optic modulator as its input device, a liquid crystal SLM as a reconfigurable window, and a two-dimensional CCD detector array, a real-time opto-electronic detection architecture based on the Gabor representation of the signal is described. Some preliminary experimental results are presented.
A nonlinear optical filter which removes transient background clutter from tracked target images through adaptive velocity selection is presented. This pre-processing filter technique is proposed to enhance track maintenance for point source and small extended objects.
Point targets [with size equal to the point spread function (PSF) of the detector system] in star field, earth, and cloud backgrounds are considered. Topics covered include target detection against the background (using two frames of data and allowing the use of simpler sensor systems), target location (estimating the target's position to subpixel accuracy, allowing fewer sensor elements with better sensitivity and lower cost), and track initiation (confirming targets to be passed to a multi-target tracker). Real background sensor data and real targets are used, and optical laboratory results are provided.
Constant false alarm rate (CFAR) detectors have been utilized in radar systems where the clutter environment is partially unknown and/or has varying statistical properties (e.g., power). In such instances, the performance of the optimal detector deteriorates significantly, and a nonparametric or CFAR detector is designed to be insensitive to changes in the density function of the clutter. An effective way to accomplish this is to use local estimates of the threshold corresponding to the unknown (or varying) parameters of the clutter distribution. Recently, order statistic (OS) processors have been shown to perform robustly (in terms of CFAR loss) for inhomogeneous clutter observations, although these processors use order statistics inefficiently for estimating the power of the clutter distribution. For a more efficient threshold estimate and, consequently, a more robust detector, the censored maximum likelihood (CML) and best linear unbiased (BLU) estimates are applied to CFAR detection. In particular, these methods are used to estimate the scale parameter of Weibull-distributed clutter with known shape parameter. The design of these CFAR detectors and their probability-of-detection performance under Lehmann's alternative hypothesis are mathematically analyzed.
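As a point of comparison for the CML and BLU threshold estimates, a minimal order-statistic CFAR sketch is shown below. The scale factor `alpha` would in practice be set from the design false alarm probability and the assumed clutter distribution; its derivation is not reproduced here, and the function names are assumptions:

```python
def os_cfar_threshold(reference_cells, k, alpha):
    """OS-CFAR: the threshold is a scaled k-th order statistic
    (1-indexed) of the reference window surrounding the cell
    under test, making it robust to a few interfering targets
    or clutter-edge samples in the window."""
    xs = sorted(reference_cells)
    return alpha * xs[k - 1]

def os_cfar_detect(cell_under_test, reference_cells, k, alpha):
    """Declare a detection when the cell under test exceeds the
    locally estimated threshold."""
    return cell_under_test > os_cfar_threshold(reference_cells, k, alpha)
```

A CML or BLU scale estimate would replace the single order statistic with a censored combination of several of them, trading some robustness bookkeeping for a lower CFAR loss.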
Government-sponsored blind tests have demonstrated that the applied analysis spectral analytical process (AASAP) is highly robust and efficient in its ability to detect sub-pixel targets in strongly cluttered backgrounds. Multispectral signals or data are processed by AASAP to search for specified targets on a pixel-by-pixel (or IFOV-by-IFOV) basis. Targets are acquired and typed based on their spectral signature alone, and the targets can occupy small fractions of mixed pixels. No scene knowledge other than the signature of the target is needed. The signatures can be derived empirically using the sensor, or they can be modeled using laboratory or field spectral measurements. Signals or data are processed using an intelligent background removal process, and the residuals are processed to extract the signature. The extracted signatures are compared to a library of one or more reference target signatures to determine whether a target is present and to type it. The tests have revealed that low-spectral-contrast targets occupying as little as 15% of an IFOV (or mixed pixel) can be reliably detected with low false alarm rates using five or more spectral bands. The tests demonstrated robustness in highly cluttered and variable backgrounds under variable atmospheric conditions. AASAP is an automated process, and its efficiency and architecture support potential on-board implementations.
This paper describes a robust approach to the acquisition of unresolved infrared (IR) targets in clutter, based on an optimized combination of post-detection and predetection processes. The algorithm uses a priori knowledge of target size and velocity relative to the background to suppress background clutter and enhance the target signature. The predetection algorithm is based on a maximum likelihood ratio detection model in which no statistical assumptions are made about the background. Background clutter is suppressed by background estimation and removal. The target signature is enhanced by applying a space-time filter matched to the target size and velocity after the background has been estimated and removed. The post-detection process further enhances the probability of detection (Pd) and suppresses the probability of false alarm (Pfa) by using an M-out-of-N detection criterion. A performance figure of merit, the signal-to-noise-ratio improvement factor (SIF), is defined. The sensitivity of SIF to the background estimation and removal process is derived and shown for an image edge primitive. Image edge magnitudes are measured in a MWIR image sequence and used to predict SIF, and the predicted SIF is compared to theoretical results.
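The M-out-of-N post-detection criterion lends itself to a very small sketch; the class name and sliding-window bookkeeping below are illustrative assumptions:

```python
from collections import deque

class MOfNDetector:
    """M-out-of-N post-detection: declare a confirmed target when at
    least m of the last n per-frame threshold exceedances are hits.
    This suppresses isolated false alarms (raising effective Pfa
    rejection) while retaining persistent target returns."""
    def __init__(self, m, n):
        self.m = m
        self.window = deque(maxlen=n)   # 1 = hit, 0 = miss

    def update(self, hit):
        """Feed one frame's binary detection result; return the
        confirmed-detection decision for the current window."""
        self.window.append(1 if hit else 0)
        return sum(self.window) >= self.m
```

With m = 2, n = 3, a single spurious exceedance never confirms, but two hits within any three consecutive frames do.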
Conventional infrared (IR) surveillance systems employ clutter rejection filtering techniques operating in a single processing domain (e.g., spatial for scanned arrays, or temporal for staring arrays), providing poor performance in weak-target scenarios. The adaptive filtering technique proposed here uses the information contained in the spatial, temporal, and spectral dimensions to implement frame registration and clutter suppression simultaneously. In this approach, the background clutter covariance is estimated from data samples obtained via a 4D sliding window. Then, from a priori knowledge of the target-clutter cross-correlation function, a filter is designed to minimize the clutter variance while preserving the target. Simulation results of the 4D adaptive filtering procedure, using real IR scanned-array sensor data, amply demonstrate the superiority of this algorithm over commonly employed sequential approaches.
Multi-frame 3D linear filtering is a powerful technique for IR target enhancement in the presence of clutter, with important applications such as the detection of small, dim targets in a long-range infrared search and track (IRST) system. While the benefit of an adaptive 3D filter has been demonstrated on specific clutter scenes in the past, a systematic analysis of its performance behavior has been lacking. This paper presents an analysis of optimal 3D filter gain versus target and background characteristics, as well as the sensitivity of this gain to mismatch. The analysis is based on parametric models of target and background, devised to capture the essential features of each. Practical approximations to the optimal 3D filter are also examined, and their performance penalty evaluated.
This paper provides analytic expressions for the performance of optimal matched filters designed to utilize spatial, temporal, and spectral observations of point targets against cluttered backgrounds. The analysis explicitly treats the situation of bipolar low contrast target signatures typical in advanced infrared systems such as the infrared search and track systems. In these cases, one must include the temporal effects due to target motion across the cluttered background and cannot assume that the target signature is simply additive. The analysis also provides explicit expressions for the effects of frame-to-frame registration errors that impact the temporal performance of the filter. The analytic expressions are given in a four-dimensional Fourier transform setting which provides a concise and easily manipulated format.
The velocity filter (or 3-D matched filter) is known to be a powerful signal processing technique for detecting and tracking weak moving objects in electro-optical image sequences. To date, however, its application has been limited by the enormous amounts of hardware required to implement the large 'filter banks' that are needed to cover the prior uncertainty in apparent target velocity. This paper presents the results of an algorithm and architecture study that explored ways of significantly reducing the real-time hardware required to obtain a specified level of performance with the velocity filter approach. The most effective solution, based on an optimum single-bit velocity filter implemented in a special-purpose bit serial processor, is capable of achieving extremely high filter computation rates on a single semi-custom VLSI chip. A real-time brassboard implementation of this architecture, the Velocity Filter Processor, is currently under development at Space Computer Corporation.
This paper describes adaptive signal processing techniques that utilize spectral and temporal information provided by passive infrared imaging sensors to enhance the detectability of sub-pixel targets in clutter. The approaches are directly applicable to advanced sensors like the DARPA-sponsored MUSIC instrument, which are capable of collecting multi-spectral frame sequences in the thermal infrared region. The performance of several algorithm concepts is demonstrated by processing dual-band frame sequence data taken by the MUSIC sensor. The examples also demonstrate the importance of accurate frame registration prior to multiple-image signal processing.
Current image registration approaches tend to operate independently on each of the frames of data that are to be registered, significantly degrading performance. In this approach, however, all the frames are matched simultaneously to a reference frame, thus utilizing in a global manner the information contained in all of them. It is shown that this approach, optimal in the minimum-variance sense, provides important registration gains over current procedures.
Passive sensors provide only a few discriminants to assist in threat assessment of small targets; tracking the small targets provides additional discriminants. This paper discusses the system considerations for tracking small targets using passive sensors, in particular EO sensors. Tracking helps distinguish good detections from bad ones. Discussed are the requirements to be placed on the sensor system's accuracy with respect to knowledge of the sightline direction. The detection of weak targets sets a requirement for two levels of tracking in order to reduce processor throughput. A system characteristic is the need to track all detections; for low thresholds, this can mean a heavy track burden, so thresholds must be adaptive in order not to saturate the processors. Second-level tracks must develop a range estimate in order to assess threat. Sensor platform maneuvers are required if the targets are moving. The need for accurate pointing, good stability, and a good update rate is shown quantitatively in relation to track accuracy and track association.
A three-dimensional moving target appears as a weak point in an image when observed by an optical system at great distance, making it difficult to detect the target from successive image sequences. In this paper, a new method is proposed. Its basic idea is energy accumulation: the image frames containing the point target are shifted along a hypothesized track and stacked up, and then a decide-in-segment criterion and an auto-increasing threshold method, among other techniques, are employed to increase target detectability and reduce computation. The result obtained by this algorithm is similar to that generated by accumulating image sequences containing a motionless target. Computer-simulated experimental results are given for a signal-to-noise ratio (SNR) of 6 dB in white Gaussian background noise, indicating that the moving point target (MPT) can be detected and tracked accurately at low SNR by the algorithm.
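The shift-and-stack energy accumulation idea can be sketched for a single constant-velocity track hypothesis. The function and parameter names are assumptions, and a real detector would sweep a bank of candidate velocities and apply the thresholding logic described above:

```python
def shift_and_stack(frames, vx, vy):
    """Accumulate frame energy along a hypothesized constant-velocity
    track: frame t is read with an offset of (t*vx, t*vy) before
    summing, so a target moving at (vx, vy) pixels/frame adds
    coherently while background noise averages out."""
    h, w = len(frames[0]), len(frames[0][0])
    acc = [[0.0] * w for _ in range(h)]
    for t, frame in enumerate(frames):
        dy, dx = t * vy, t * vx
        for y in range(h):
            for x in range(w):
                sy, sx = y + dy, x + dx
                if 0 <= sy < h and 0 <= sx < w:
                    acc[y][x] += frame[sy][sx]
    return acc
```

For a matched velocity hypothesis the target's accumulated value grows linearly with the number of frames, which is exactly the behavior one would see for a motionless target under plain frame summation.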
Aerodyne has recently developed an IRST Engagement Model under contract to Lockheed Aeronautical Systems Company (LASC). The model's purpose is to simulate the performance of an IRST system in long-range air-to-air detection and tracking engagements. The hallmark of the model is its end-to-end, first-principles modeling of all major elements which determine system performance. The target aircraft IR signature, the atmospheric cloud and sky background, and associated atmospheric effects are modeled at high fidelity, producing an input image matched to the specific IRST under study. A detailed deterministic model of the IRST accounts for optical and sensor effects, signal processing, and track association typical of first-generation IRSTs. These model elements are coupled with dynamic target and observer (IRST) trajectory models so that an analyst can specify air-to-air engagements at various velocities, ranges, and viewing angles. The analyst can study the effects of varying IRST algorithms, sensor characteristics, optical bandpass, cloud background levels, atmospheric effects, and target performance characteristics, as well as varying the target aircraft itself. The computer model was designed for portability and growth.
A spacecraft is required to perform on-orbit closed-loop pointing at various objects such as other spacecraft, stars, auroral surges, and cloud structures. Simulations of the algorithms of the three onboard pointing subsystems and a scene generator run concurrently on four networked PCs (plus a fifth as a file server), allowing analysis of the end-to-end closed-loop pointing system. The four simulation areas plus the networked system are discussed.
The Surveillance Test Bed (STB) is a program under development for the Strategic Defense Initiative Organization (SDIO). Its most salient features are (1) the integration of high fidelity backgrounds and optical signal processing models with algorithms for sensor tasking, bulk filtering, track/correlation and discrimination and (2) the integration of radar and optical estimates for track and discrimination. Backgrounds include induced environments, such as nuclear events, fragments and debris, and natural environments, such as earth limb, zodiacal light, stars, sun and moon. At the highest level of fidelity, optical emulation hardware combines environmental information with threat information to produce detector samples for signal processing algorithms/hardware under test. Simulations of visible sensors and radars model measurement degradation due to the various environmental effects. The modeled threat is composed of multiple object classes. The number of discrimination classes is further increased by inclusion of fragments, debris and stars. High fidelity measurements will be used to drive bulk filtering algorithms that seek to reject fragments and debris and, in the case of optical sensors, stars. The output of the bulk filters will be used to drive track/correlation algorithms. Track algorithm output will include sequences of measurements that have been degraded by backgrounds, closely spaced objects (CSOs), signal processing errors, bulk filtering errors and miscorrelations; these measurements will be presented as input to the discrimination algorithms. The STB will implement baseline IR track file editing and IR and radar feature extraction and classification algorithms. The baseline will also include data fusion algorithms which will allow the combination of discrimination estimates from multiple sensors, including IR and radar; alternative discrimination algorithms may be substituted for the baseline after STB completion.
The phenomenon of persistence is introduced and potential problems related to target detection and tracking in its presence are discussed. The authors present a set of algorithms suitable for real-time detection and tracking of various combinations of persistence and target types.
Owing to the steady-state characteristics of the Kalman filter, it can only track targets with known maneuver values. In this paper, the authors describe an incremental model for maneuver detection and estimation for use in target tracking with the Kalman filter. The approach is similar to the multiple Kalman filter bank, but with a memory of the maneuver condition instead of using a set of pre-assumed maneuver values.
The tracking lag due to an accelerating or maneuvering target may become significant compared to sensor noise, especially in the case of highly accurate optical or IR line-of-sight measurements. The authors construct a maneuver detector by utilizing an adaptive-gain alpha-beta filter and analytic formulations for the tracking residuals. Maneuver detection is accomplished by examining tracking error residuals and comparing them with expected results. The alpha-beta filter also serves as the tracking function, with gains controlled by the output of the detector and the sensed tracking lags. Numerical simulations showing tracking improvement over fixed-gain alpha-beta and alpha-beta-gamma filters are given.
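The residual-driven gain switching described above can be sketched in one dimension. The residual test, threshold, and high/low gain values below are illustrative assumptions, not the authors' analytic formulations:

```python
def alpha_beta_step(x, v, z, dt, alpha, beta):
    """One alpha-beta filter step: predict position, form the
    tracking residual against measurement z, then correct."""
    x_pred = x + v * dt
    r = z - x_pred                   # tracking residual
    x_new = x_pred + alpha * r
    v_new = v + (beta / dt) * r
    return x_new, v_new, r

def track(zs, dt=1.0, alpha=0.5, beta=0.1, gate=3.0, sigma=1.0,
          alpha_hi=0.9, beta_hi=0.5):
    """Run the filter over measurements zs. A maneuver is declared
    when the predicted residual exceeds gate*sigma (sigma being the
    expected measurement noise), and the gains switch to the higher
    values so the filter follows the lag instead of smoothing it."""
    x, v = zs[0], 0.0
    estimates = []
    for z in zs[1:]:
        a, b = alpha, beta
        r = z - (x + v * dt)         # residual with nominal prediction
        if abs(r) > gate * sigma:
            a, b = alpha_hi, beta_hi  # maneuver detected: raise gains
        x, v, _ = alpha_beta_step(x, v, z, dt, a, b)
        estimates.append(x)
    return estimates
```

Under benign motion the low gains smooth the noise; when the residual exceeds the gate the higher gains pull the estimate toward the measurement, reducing the maneuver-induced lag.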
The symmetric measurement equation (SME) filter approach to track maintenance in multiple target tracking, developed by Kamen, is applied to the case where the target dynamics are modeled in Cartesian coordinates and the measurements are given in polar coordinates. The key idea of the SME filter approach developed in this paper is to define new measurements that are sums of products of range measurements, elevation measurements, and azimuth measurements. In this way the problem of target/measurement association is embedded in the process of target state estimation. For N targets moving in three-dimensional space, the first-order version of the SME filter is an extended Kalman filter of dimension 6N. The performance of the SME filter is investigated via a computer simulation of six targets with crossing trajectories.
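The key construction, measurements that are symmetric in the targets, can be illustrated with elementary symmetric functions of scalar range measurements; the extension to sums of products mixing range, azimuth, and elevation follows the same pattern. The function name is an assumption:

```python
from itertools import combinations
from math import prod

def symmetric_measurements(values):
    """Elementary symmetric functions of the raw measurements:
    e1 = sum, e2 = sum of pairwise products, ..., eN = product.
    These are invariant under any permutation of the measurements,
    which is how the SME filter sidesteps explicit target/measurement
    association: the estimator sees only the symmetric quantities."""
    return [sum(prod(c) for c in combinations(values, k))
            for k in range(1, len(values) + 1)]
```

Because the symmetric measurements are nonlinear in the underlying states, the state estimator built on them is an extended Kalman filter, as the abstract notes.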
This paper considers the problem of tracking a boost vehicle and predicting its future position using a satellite-based passive optical sensor. The portion of the powered-flight under observation is assumed to be relatively short. Two approaches are studied: a profile-based iterative least-squares algorithm and an extended Kalman filter. Their performances are evaluated using simulated test cases.
This paper presents an implementation of a tracking filter for use with angle-only measurement data--such as IR sensor data--with principal application in an air-to-air multiple target tracking environment. The implementation is based upon the modified spherical coordinate (MSC) method. Among the implementation issues discussed are the following: (a) Choice of process noise model. The process noise model must balance the competing needs of filter derivative-state convergence versus the prevention of excessive lags during heavy target maneuvers. A maneuver detection scheme is implemented to achieve this balance; it uses a test on measurement residuals as well as a batch-processor verification of the recursive Kalman filter states. (b) Choice of ownship maneuver for target ranging. The strength and duration of ownship maneuvers impact the rate and level of range convergence, and the orientation of the maneuver relative to target position affects range observability. In a gross sense, the maneuver should attempt to maximize the projection, onto the cross-line-of-sight to the target, of the difference between the ownship position following the maneuver and its projected position had it not maneuvered. (c) The effect of sampling rate, measurement accuracy, and target maneuver upon filter performance. Of primary consideration is the impact of these factors on range estimation accuracy. Expected performance is illustrated through the use of Monte Carlo simulation results.
A global modeling approach for multisensor problems has been investigated. It is also applied to multi-target tracking problems. The key development of this approach is that a decentralized filtering algorithm is used for data fusion and state estimation problems in a multi-target tracking system. Via the proposed mathematical models, the data fusion and track-to-track correlation problems can be solved in a global way.
This paper treats the effect of common-sensor and process noise on track-to-track association logic, which defines individual track-to-measurement association logic. Using a joint probabilistic approach, the processing unifies two fundamental correlation algorithms into one algorithm. The approach reduces computations, track-to-track misassociations, and individual track contaminations. Additionally, the logic improves sensor-to-sensor consistency in target motion and accuracy in sensor measurement and filtering.
Problems in multi-sensor data fusion are addressed for passive (angle-only) sensors; the example used is a constellation of IR sensors on satellites in low-earth orbit, viewing up to several hundred ballistic missile targets. The sensor data used in the methodology of the report is 'post-detection,' with targets resolved on single pixels (it is possible for several targets to be resolved on the same pixel). A 'scan' by a sensor is modeled by the formation of a rectangular focal plane image of lit pixels (bits with value 1), representing the presence of at least one target, and unlit pixels (bits with value 0), representing the absence of a target, at a particular time. Approaches and algorithmic solutions are developed which address the following passive sensor data fusion problems: scan-to-scan target association, and association classification. The ultimate objective is to estimate target states, for use in a larger battle management system. Results indicate that successful scan-to-scan target association is feasible at scan rates >=2 Hz, independent of resolution. Sensor-to-sensor target association is difficult at low resolution; even with high-resolution sensors the performance of a standard two-sensor single scan approach is variable and unpredictable, since it is a function of the relative geometry of sensors and targets. A single-scan approach using the Varad algorithm and three sensors is not as sensitive to this relative geometry, but is usable only for high-resolution sensors. Innovative multi-scan and multi-sensor modifications of the three- sensor Varad algorithm are developed which provide excellent performance for a wide range of sensor resolutions. The multi-sensor multi-scan methodology also provides accurate information on the classification of target associations as correct or incorrect. 
For the scenarios examined with resolution cell sizes ranging from 300 m to 2 km, association errors are less than 5% and essentially no classification errors are made, when sensor data is integrated over a 60 s time period. With higher-resolution sensors, better results are achievable in less time. The results of the data fusion from three or more sensors over such a period of time provide a rich source of information for the estimation of target states. The algorithms are fast (O(n ln n)); for approximately 100 targets, the average processing per scan in the multi-scan three-sensor methodology takes approximately a second of computational time on a Mac II.
This paper describes a new theoretical bound on performance for tracking in a dense multiple target environment with clutter, false alarms, missed detections, target maneuvers, etc. It is a Cramer-Rao bound on the estimation error covariance matrix. In contrast to the recent bound reported elsewhere, the bound described in this paper requires no Monte Carlo simulation. The new bound is based on Kamen's formulation of the multiple target tracking (MTT) problem as a nonlinear estimation problem. Kamen's model of MTT eliminates the combinatorial aspect of the problem, and replaces it with an analytical problem, which allows the use of the Cramer-Rao bound.
A fundamental problem in multi-target tracking is the data association problem of partitioning the observations into tracks and false alarms so that an accurate estimate of the true tracks can be recovered. Here, this problem is formulated as a multi-dimensional assignment problem: gating techniques introduce sparsity into the problem, and filtering techniques generate tracks, which are then used to score each assignment of a collection of observations to its corresponding filtered track. Problem complexity is further reduced by decomposing the problem into disjoint components using graph-theoretic methods. A recursive Lagrangian relaxation algorithm is then developed to obtain high-quality suboptimal solutions in real time. The algorithms are, moreover, applicable to a large class of sparse multi-dimensional assignment problems arising in general multi-target and multi-sensor tracking. Results of extensive numerical testing are presented for a case study to demonstrate the speed, robustness, and exceptional quality of the solutions.
This paper analyzes the performance of a version of the multiple hypothesis tracking concept applied to the group-to-object tracking problem for a pair of passive, scanning, space-based sensors. Group-to-object tracking is the process of tracking clusters of unresolved or nearly resolved objects as a group and then, when individual objects become clearly resolved, tracking each object separately. An n-scan-back MHT algorithm, based on the work of Reid, is used to manage the scan-to-scan association process. It uses the A* search algorithm to find the best hypothesis and its statistically equivalent neighbors, based on their likelihood scores. The performance and cost of the algorithm are analyzed as a function of key algorithm parameters.
A new tracking system has been designed for spot targets in infrared image sequences. An MLE method for estimating the target position and a differential method for estimating the target displacement were investigated. An image sequence filter has been introduced into the system to improve the estimation of the target image function in real time. Monte Carlo experiments were performed to evaluate the system, showing that it works stably with high accuracy.
A neural data association process, which may occur at a low level in the human visual system during motion perception, is described. Principles of the neural process are then applied to the data association (DA) problem arising in radar target tracking. After reviewing radar tracking operations and problems with current DA algorithms, a new biologically based model is presented.
A three-component system for tracking multiple moving objects is presented. A neural network is used to perform frame-to-frame data association. A Hough Transform system is used to perform multiple-frame data association and track correction. An estimation filter system is used to provide updated track estimates. The tracking ability of this integrated system is tested with realistic simulated flight trajectories. The system response to simulated measurement noise and estimation errors is detailed, and the interaction of the three system components to correct errors is illustrated. Optical processing is used in the neural net and Hough Transform systems.
A blue force platform (own-ship) contains a sensor suite from which a local track file is developed. In addition, using side information from other blue sensors, own-ship develops a remote track file that represents blue forces tracked by red. The origin of the remote track file in the local reference frame (grid reference) is not known by own-ship. To determine whether own-ship has been targeted by red forces, own-ship requires the probability that it is in the remote track file; in addition, an estimate of the grid reference is required. The tracks are assumed to consist of a sequence of independent measurements with Gaussian errors. The likelihood function for the local and remote tracks, conditioned on the actual object trajectories, grid reference, number of objects, and the association between objects and tracks, is derived. Unfortunately, the likelihood function is independent of the number of objects, which leads to a situation where the likelihood is maximized when all tracks correspond to distinct objects. This situation is avoided by using the minimum description length (MDL) principle, which includes a term that penalizes an overparameterization of the model. Using MDL, an algorithm is presented for estimating the grid reference and for computing the probability that own-ship is tracked by red forces. A Monte Carlo performance analysis of the algorithm is presented.
We abandon, as a futile endeavor, the computation of a meaningful initial orbital element set based on angles-only data. Rather, for the ballistic missile initial orbit determination problem in particular, the concept of "launch folders" is extended. This allows one to decouple the observational data from the initial orbit determination problem per se; the observational data are used only to select among the possible orbital element sets in the group of folders. Monte Carlo simulations using up to 7200 orbital element sets are described herein. The results are compared to the true orbital element set and to the one a good radar would have produced if co-located with the optical sensor. The simplest version of the new method routinely outperforms the radar initial orbital element set by a factor of two in future miss distance. In addition, not only can one produce a differentially corrected orbital element set via this approach, after only two measurements of direction, but one can also calculate an updated, meaningful, six-dimensional covariance array for it. This technique represents a significant advance in initial orbit determination for this problem, and the concept can easily be extended to minor planets and artificial satellites.
The thrust of this paper is to present a new approach to multi-target tracking for the mid-course stage of the Strategic Defense Initiative (SDI). This approach is based upon a continuum representation of a cluster of flying objects. We assume that the velocities of the flying objects can be embedded into a smooth velocity field, an assumption justified by the impossibility of encounters between the flying objects in a high-density cluster. The problem is therefore reduced to the identification of a moving continuum from consecutive time-frame observations. In contradistinction to previous approaches, here each target is considered the center of a small continuous neighborhood subjected to a local affine transformation, and therefore the target trajectories do not mix; their mixing in the plane of sensor view is only apparent. The approach is illustrated by an example.
A simple, practical method for generating a hexagonal sampling lattice for an E-O scanning sensor is described. Matched filtering techniques are developed for hexagonally sampled data. An efficient approximate matched filter is described which uses integer weights and takes advantage of the structure of the data. Relative to this filter, the number of processing operations required on a rectangular sampling lattice increases by a factor of six for an ideal matched filter and by 25 percent for an approximate filter with integer weights.
Scoring methods are described for evaluating the performance of a multiple target tracking (MTT) algorithm fairly, without undue bias towards any particular algorithm type. The methods were initially developed by individuals and further developed and adapted by the members of the SDI panels on tracking. Ambiguous track-to-target truth association is the fundamental difficulty in MTT performance evaluation; the methods use a global nearest neighbor assignment algorithm to associate tracks to targets uniquely. With the track-to-target associations, the methods employ measures of effectiveness for track purity, data association, state estimation accuracy, and credibility of the filter-calculated covariance.
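The unique track-to-target association step can be sketched as a brute-force global nearest neighbor assignment over a cost matrix (e.g., track-to-truth distances). Real scoring implementations would use a polynomial-time method such as the Jonker-Volgenant or auction algorithm; the exhaustive search below is only practical for small problems and the function name is an assumption:

```python
from itertools import permutations

def gnn_assign(cost):
    """Global nearest neighbor assignment: find the permutation
    mapping track i -> target perm[i] that minimizes total cost,
    so each track is uniquely associated with one target."""
    n = len(cost)
    best_total, best_perm = float("inf"), None
    for perm in permutations(range(n)):
        total = sum(cost[i][perm[i]] for i in range(n))
        if total < best_total:
            best_total, best_perm = total, perm
    return list(best_perm), best_total
```

The global criterion matters: a greedy per-track nearest neighbor can trap a later track into a poor pairing, which is exactly the bias these scoring methods are designed to avoid.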
Sensor systems being conceived and fabricated today have high processor content and tend to be software intensive. Recent advances in hardware technology have made possible the implementation of complex high-performance multisensor algorithms. Hardware capability alone is not sufficient to ensure the successful implementation of these software intensive systems. In addition, it is necessary to carefully structure the processing system to ensure that high-performance multisensor algorithms can be implemented. Traditionally, implementability has meant throughput and memory capacity. This paper extends implementability to include the ability of a processing system architecture to support efficient integration and test. This is particularly important when the implicit requirements of cost and schedule are considered. To illustrate processor architecture impact on integration and test, an example from the LOWKATER program is examined. This was an extremely complex multisensor processor containing a digital laser radar receiver, a passive sensor target detector, and a digital controller for the beam pointing system. Architectural alternatives are presented and the selected architecture is examined in detail.
A weighted-difference signal-processing algorithm for detecting ground targets using dual-band IR data was investigated. Three variations of the algorithm were evaluated: (1) simple difference; (2) minimum noise; and (3) maximum SNR. The theoretical performance was compared to measured performance for two scenes collected by the NASA TIMS sensor over a rural area near Adelaide, Australia, and over a wooded area near the Redstone Arsenal. The theoretical and measured results agreed extremely well. For a given correlation coefficient and color ratio, the amount of signal-to-noise-ratio gain can be predicted. However, target input SNRs and color ratios can vary considerably. For the targets and scenes evaluated here, the typical gains achieved ranged from a few dB of loss (for targets without color) to a maximum of approximately 20 dB.
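A minimal weighted-difference sketch is shown below, using the least-squares weight that minimizes residual clutter variance. This is an assumed but standard choice; the paper's three variants (simple difference, minimum noise, maximum SNR) differ precisely in how the weight is selected, and the function name is an assumption:

```python
def weighted_difference(band1, band2):
    """Dual-band weighted difference d[i] = band1[i] - w*band2[i],
    with w = cov(band1, band2)/var(band2). For highly correlated
    clutter this weight cancels most of the common background,
    leaving targets whose band-to-band color ratio differs from
    the clutter's."""
    n = len(band1)
    m1 = sum(band1) / n
    m2 = sum(band2) / n
    cov = sum((a - m1) * (b - m2) for a, b in zip(band1, band2)) / n
    var2 = sum((b - m2) ** 2 for b in band2) / n
    w = cov / var2
    return [a - w * b for a, b in zip(band1, band2)]
```

Note the limitation the abstract points out: a target with no color (the same band-to-band ratio as the clutter) is cancelled along with the background, producing the few-dB losses observed.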