A technique is described for recovering positional and radiometric information on unresolved objects that are so closely spaced that their individual blur functions overlap. Emphasis is on point sources. A Bayesian method has been formulated and applied with real data to resolving 'clumps' of stars. The method is able to provide error bars on the individual pulse positions and amplitudes from a single data set rather than from the deviations observed after measuring many independent sets of data. The Bayesian technique is advantageous for estimating the number of discrete objects in a given clump using the rules of probability theory, without the need for contrived penalty factors. Through the way it formulates the model, the Bayesian approach naturally includes a factor that reflects the reduction in the number of degrees of freedom for a model with a greater number of sources. As a result, the algorithm is able to assign the highest probability to the model with the correct number of sources in the clump even though a model with more sources has smaller residuals. The technique is applied to real visible CCD observations of the star cluster NGC 6819 and is shown to be internally consistent in its source counts.
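The Occam-factor behavior described in this abstract can be sketched numerically. The snippet below is not the paper's algorithm: it substitutes the Bayesian Information Criterion (BIC) for the full evidence computation, and the blur width, source positions, amplitudes, and noise level are all invented. It shows the same qualitative effect, though: the 3-source model always has the smallest residuals, yet the degrees-of-freedom penalty selects the correct 2-source model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 1-D "clump": two overlapping blur functions plus noise.
# Blur width, positions, amplitudes, and noise level are all invented.
x = np.linspace(0.0, 100.0, 10001)
psf = lambda c: np.exp(-0.5 * ((x - c) / 8.0) ** 2)
data = 1.0 * psf(45.0) + 0.8 * psf(55.0) + rng.normal(0.0, 0.02, x.size)

# Candidate models: 1, 2, or 3 sources at hypothesized centers.  The 3-source
# model nests the true pair, so its residuals can only be smaller.
candidates = {1: [50.0], 2: [45.0, 55.0], 3: [45.0, 55.0, 50.0]}

n = x.size
bic = {}
for k, centers in candidates.items():
    basis = np.column_stack([psf(c) for c in centers])
    amps, *_ = np.linalg.lstsq(basis, data, rcond=None)
    rss = np.sum((data - basis @ amps) ** 2)
    # BIC = n ln(RSS/n) + k ln n: the k ln n term plays the role of the
    # Occam factor, penalizing the extra degrees of freedom.
    bic[k] = n * np.log(rss / n) + k * np.log(n)

best_k = min(bic, key=bic.get)
```

Despite the 3-source fit having the lowest residual sum of squares, the penalized score is minimized at the true source count.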
Access to the requested content is limited to institutions that have purchased or subscribe to SPIE eBooks.
You are receiving this notice because your organization may not have SPIE eBooks access.*
*Shibboleth/Open Athens users─please
sign in
to access your institution's subscriptions.
To obtain this item, you may purchase the complete book in print or electronic format on
SPIE.org.
We present an integrated, theoretically self-consistent approach to the optimal processing of non-Gaussian data for weak-signal image processing in multichannel correlated clutter backgrounds. Our approach combines linear predictive filtering with locally optimal detection theory to perform both intra- and interchannel whitening and soft editing within the context of Neyman-Pearson likelihood ratio processing. Whitening coefficients are estimated from a multichannel formulation of the Yule-Walker equations. Soft editing is performed by way of a nonlinear operator computed from the multichannel joint density of whitened residuals, which, in the Gaussian limit, is shown to reduce to a channel-dependent conditional mean subtraction. We assume the signal to be deterministic and assume further that non-Gaussian departures in the data are limited in both space and time. Processing results are presented for both simulated and real data and are compared with standard Fourier-based approaches.
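A single-channel sketch of the Yule-Walker whitening step follows. The paper's formulation is multichannel; this 1-D version, with an assumed AR(2) clutter model and invented coefficients, only illustrates the mechanics of estimating prediction coefficients from sample autocorrelations and whitening via the prediction residual.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated correlated clutter: an AR(2) process (coefficients invented).
a_true = np.array([0.75, -0.5])
e = rng.normal(0.0, 1.0, 5000)
x = np.zeros_like(e)
for t in range(2, x.size):
    x[t] = a_true[0] * x[t - 1] + a_true[1] * x[t - 2] + e[t]

# Yule-Walker: solve R a = r, with R the Toeplitz autocorrelation matrix.
p = 2
r = np.array([np.dot(x[k:], x[:x.size - k]) / x.size for k in range(p + 1)])
R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])
a_hat = np.linalg.solve(R, r[1:])

# Whitening = one-step linear prediction residual.
resid = x[p:] - a_hat[0] * x[p - 1:-1] - a_hat[1] * x[p - 2:-2]

lag1 = lambda y: np.corrcoef(y[:-1], y[1:])[0, 1]
rho_raw, rho_white = lag1(x), lag1(resid)
```

The lag-one correlation of the residual is near zero, while the raw clutter remains strongly correlated.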
This paper examines a class of signal processing techniques known as Minimum Variance Deconvolution (MVD) and compares it with the commonly used matched-filtering and Wiener-filtering deconvolution processing in point-source acquisition signal processing applications for electro-optical sensors. A step-by-step development of the batch and recursive MVD algorithms, including comparisons with the competing methods, is presented.
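Since the MVD development itself is not reproduced here, the sketch below illustrates only the Wiener-filtering baseline that the paper compares against, applied to a blurred 1-D point source. The PSF, noise level, and noise-to-signal ratio are all assumed values for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

n = 256
t = np.arange(n)
# Known system blur (PSF), defined circularly so FFT convolution is exact.
d = np.minimum(t, n - t)
h = np.exp(-0.5 * (d / 4.0) ** 2)
h /= h.sum()

src = np.zeros(n)
src[100] = 1.0                           # point source at sample 100
blurred = np.real(np.fft.ifft(np.fft.fft(src) * np.fft.fft(h)))
noisy = blurred + rng.normal(0.0, 0.002, n)

# Wiener deconvolution: G = H* / (|H|^2 + NSR), with an assumed flat
# noise-to-signal ratio.
H = np.fft.fft(h)
G = np.conj(H) / (np.abs(H) ** 2 + 1e-2)
restored = np.real(np.fft.ifft(np.fft.fft(noisy) * G))
peak = int(np.argmax(restored))
```

The restored signal sharpens the blur back toward the true source location while the NSR term limits noise amplification.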
We conduct a comparative study of the performance of a variety of spatial filtering techniques for detecting small targets against infrared terrain backgrounds. We consider parametric and nonparametric filters as well as an example of a robust estimation technique applied to a parametric filter. In addition, we consider the effects of a clustering algorithm on filtering performance. Among the filtering algorithms we consider are matched filters, Laplacian filters, quadratic filters, and the Robinson filter. Several filter dimensions are considered, as are the effects of a guard band. These algorithms are tested against the Lincoln Laboratory Infrared Measurement Sensor database, which comprises high resolution dual band (3.5 to 5.2 μm and 8 to 12 μm) data taken at a variety of sites under a broad spectrum of atmospheric conditions and times of day. Target insertion effects are modeled carefully, incorporating the straddling of targets across adjacent pixels as well as the effects of diffractive blurring. We find that many of the filters we have tested show similar performance and that all are clutter limited.
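A minimal matched-filter detection sketch in the spirit of this study, for a single blurred point target inserted into white noise. The 3x3 template, amplitude, and image size are invented, and white noise is a best case: as the abstract notes, real performance is clutter limited.

```python
import numpy as np
from scipy.signal import correlate2d

rng = np.random.default_rng(3)

# White-noise background with one inserted blurred point target.
img = rng.normal(0.0, 1.0, (64, 64))
template = np.outer([0.5, 1.0, 0.5], [0.5, 1.0, 0.5])   # assumed 3x3 blur shape
r0, c0 = 30, 41
img[r0 - 1:r0 + 2, c0 - 1:c0 + 2] += 4.0 * template

# For white noise the optimum spatial filter is the matched filter:
# correlate the scene with the known target shape and threshold the peak.
score = correlate2d(img, template, mode='same')
peak = np.unravel_index(np.argmax(score), score.shape)
```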
In this paper, we describe a new approach to radar target discrimination. Specifically, we apply it to the problem of exo-atmospheric object discrimination from UHF radar returns. The method uses wavelet transforms together with pattern recognition techniques such as feature spaces, feature vectors, and neural-net classifiers. Feature vectors for each object are constructed from the wavelet transforms of the input data samples. The feature vectors are based on the energies at each scale of the wavelet transform and therefore effectively circumvent the problem of noncoherence due to target and ionospheric effects. This is an important consideration when coherent signal processing is not feasible. The feature vectors are input to an unsupervised learning neural network for classification of the objects. In unsupervised learning, the network output is not forced toward a desired response for each input pattern but is allowed to learn proximity to past input patterns. Limited results from simulated radar cross-section (RCS) data indicate that most objects can be correctly classified. The results also show that the overall scheme is quite immune to fair amounts of Gaussian as well as uniformly distributed noise. Further efforts are under way to test the methodology against real object data as well as more extensive simulations.
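The scale-energy feature construction can be sketched with a plain Haar transform; the abstract does not state which wavelet is used, so everything below is an assumed stand-in. Because each feature is an energy, it is insensitive to sample-to-sample sign and phase flips, which is the claimed robustness to noncoherence.

```python
import numpy as np

def haar_scale_energies(sig, levels=4):
    """Feature vector of detail energies at each scale of a 1-D Haar DWT."""
    approx = np.asarray(sig, dtype=float)
    feats = []
    for _ in range(levels):
        even, odd = approx[0::2], approx[1::2]
        detail = (even - odd) / np.sqrt(2.0)     # detail coefficients
        approx = (even + odd) / np.sqrt(2.0)     # carry approximation down
        feats.append(float(np.sum(detail ** 2)))
    return np.array(feats)

rng = np.random.default_rng(4)
n = 256
smooth = np.sin(2 * np.pi * 2 * np.arange(n) / n)   # slowly varying return
spiky = rng.normal(0.0, 1.0, n)                     # rapidly fluctuating return

f_smooth = haar_scale_energies(smooth)
f_spiky = haar_scale_energies(spiky)
```

The two signature types separate cleanly in this feature space: the fluctuating return concentrates energy at the finest scale, the smooth one does not.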
In radar problems involving weak signal detection, conventional space-time processing cannot be used to separate the target from the clutter when the spatial and Doppler spectra of the target and clutter overlap. In such problems the concept of the locally optimum detector (LOD) is useful in coming up with a decision rule to discriminate between the two hypotheses of signal present or signal absent. The clutter, which may be correlated, can arise from either a non-Gaussian or Gaussian random process. For correlated multivariate K-distributed clutter it is shown in this paper that performance improvements can be obtained with an LOD receiver compared to the conventional Gaussian linear receiver.
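The LOD idea can be sketched in a tractable heavy-tailed case. The paper treats correlated K-distributed clutter, whose score nonlinearity has no simple closed form, so the example below substitutes i.i.d. Laplacian noise, for which the locally optimum nonlinearity g(x) = -f'(x)/f(x) reduces to the sign function, and compares it with the conventional linear receiver at a matched false-alarm rate. All scenario numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(5)

# Weak known signal in i.i.d. Laplacian (heavy-tailed) noise.
n, trials, amp = 64, 4000, 0.25
s = np.sin(2 * np.pi * np.arange(n) / 16.0)

noise_only = rng.laplace(0.0, 1.0, (trials, n))
with_sig = rng.laplace(0.0, 1.0, (trials, n)) + amp * s

# LOD passes each sample through g(x) = -f'(x)/f(x) before correlating;
# for the Laplacian density g is the sign function.
lin_stat = lambda x: x @ s            # conventional Gaussian linear receiver
lod_stat = lambda x: np.sign(x) @ s   # locally optimum (sign) receiver

# Match the false-alarm rate (1%) and compare detection probabilities.
thr_lin = np.quantile(lin_stat(noise_only), 0.99)
thr_lod = np.quantile(lod_stat(noise_only), 0.99)
pd_lin = float(np.mean(lin_stat(with_sig) > thr_lin))
pd_lod = float(np.mean(lod_stat(with_sig) > thr_lod))
```

For this heavy-tailed noise and weak signal the LOD receiver detects noticeably more often than the linear receiver at the same false-alarm rate.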
In this paper we extend the capabilities of the Wave Process in three major areas; this is a continuation of work that was reported earlier. The Wave Process can now adapt its behavior to detect a point target moving with an arbitrary velocity while still rejecting stationary background and clutter. It can also adapt to different target velocities simultaneously occurring at different locations on the focal plane. This is achieved through neuromorphic methods, without recourse to clocks, programmability, or other aspects of the digital processing paradigm. We also present a single planar circuit that performs the functions of the positive, negative, and wave-sum planes, with a more efficient analog VLSI implementation for on-focal-plane integration. Finally, we develop a more thorough understanding of Wave Process performance through numerical simulations of the ideal modeling equations and SPICE simulations of the ideal hardware representations for stationary sources and moving targets.
This paper extends the maximum likelihood concept as applied to the adaptive detection of sub-pixel targets with unknown spectral signatures. The clutter is modeled stochastically with a spatial-spectral covariance matrix. The target model is primarily stochastic and partially deterministic. Within any given spectral band the spatial target signature is deterministic. For the sub-pixel target application, a system point spread function (PSF) is used. The PSF is allowed to vary spectrally, due to the dependency of a sensor's diffractive PSF on the spectral wavelength. The spectral target signature is completely stochastic and must be determined at each pixel using maximum likelihood estimation techniques. Based on these assumptions, an optimal maximum likelihood processor is derived. Encouraging performance results are presented from real IR data. Detection probabilities are shown in many cases to improve significantly when compared to spatial-only detection processes.
Extensive measurements of targets and backgrounds were made in the field by an infrared Fourier transform spectrometer. These measurements were made to provide statistically valid estimates of target to background spectral contrast and background spatial and spectral statistics to support the use of multispectral sensing techniques for detecting military targets in clutter. The details of the spectrometer, the targets, the backgrounds, and the measurements are given.
Thermal infrared multi-spectral field measurements of test panels, military vehicles, and backgrounds were extensively analyzed to assess the potential of multi-spectral processing for detecting low-contrast ground targets in vegetation clutter. The measurements clearly show the existence of exploitable color due to fine-scale variations in target-background spectral contrast, and they establish environment limits on coherent multi-band clutter suppression based on background spectral correlation. Typical variations in key multi-spectral performance parameters, and their implications for waveband selection, sensor design, and robust target detection performance, are presented and discussed.
A single pixel hyperspectral signature model was developed to support detailed infrared phenomenological analysis, optimal spectral band selection, and target detector algorithm development. Model predictions of infrared hyperspectral signatures of targets and backgrounds were compared to measurements made in the field by an infrared Fourier transform spectrometer. The model was successful in predicting the actual spectral features present in the measured data. Details of the model, an example of its accuracy, and the implications for infrared phenomenological analysis and multispectral target detection algorithms are given.
Long term integration, defined as integration along paths through multiple resolvable volumes, can significantly increase moving target detection sensitivity in clutter. Physically, this improvement comes primarily from the ability to smooth over and thus reduce the clutter spatial variations that are responsible for its "spiky" statistical behavior. A secondary improvement comes from an increased ability to smooth over aspect-dependent target fluctuations. However, since target motion is not known, increased complexity results from the need to implement multiple hypothesized integration paths. Implementation complexity depends on integration duration, spatial resolution, bounds on target speed and maneuverability, and the subsequent path mismatch losses. The current analysis focuses on point defense using a predetermined set of constant radial velocity paths that do not cross fixed beam patterns, i.e., simple "range walks". The integration across resolvable volumes is performed noncoherently, although coherent integration can take place within resolvable volumes; coherent integration across resolvable volumes is not considered. The K-density spatial variation clutter model is used, in which the pulse-to-pulse temporal fluctuation is a correlated complex Gaussian random process whose "local" mean varies independently from scan to scan according to a gamma density. Although any form of coherent processing can be included, an MTI filter is used explicitly. Although propagation effects are not considered, long term integration should provide an additional detection sensitivity improvement through an increased ability to smooth over these fluctuations. Starting with conventional integration in a single resolvable volume, detection performance is determined using a novel approximation that involves finding the equivalent number of statistically independent returns before square law detection.
This allows the results of Marcum and Swerling to be modified and used as conditional probabilities in a numerical integration. The approximation also makes the long term noncoherent integration problem numerically tractable using density cumulants in a generalized Laguerre series. In a tradeoff where the number of pulses coherently integrated per dwell, the scan revisit rate, and the number of dwells noncoherently integrated all vary under a fixed radar resource constraint, results show that transmitting the shortest dwells possible, including MTI fill pulses, yields optimum results. Although this depends on the scenario, for close-in point defense in spiky sea clutter it implies that the clutter and target smoothing advantages offered by long term noncoherent integration outweigh the greater SIR buildup offered by increased coherent integration within a resolvable volume. A final comparison with this scenario shows that long term integration increases SIR sensitivity by about 10 dB over conventional cumulative detection, allowing a shipboard horizon scanning system to detect -30 dBsm subsonic sea-skimmers with Pd = 0.9 and Pf = 10^-6 at 10 km range in sea state 4.
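A toy Monte Carlo illustrating the basic noncoherent-integration gain this analysis builds on: square-law detection of a target whose amplitude fluctuates independently look to look (Swerling-2 style). All scenario numbers are invented and far simpler than the correlated K-distributed, MTI-filtered case treated above.

```python
import numpy as np

rng = np.random.default_rng(6)

def pd_after_integration(m, snr=1.0, trials=20000, pfa=1e-2):
    """Monte Carlo Pd for square-law detection after noncoherently summing
    m looks; the target amplitude fluctuates independently look to look."""
    cgauss = lambda shape: (rng.normal(size=shape)
                            + 1j * rng.normal(size=shape)) / np.sqrt(2.0)
    noise = cgauss((trials, m))                       # unit-power noise
    tgt = np.sqrt(snr) * cgauss((trials, m))          # fluctuating target
    h0 = np.sum(np.abs(noise) ** 2, axis=1)           # noise-only statistic
    h1 = np.sum(np.abs(noise + tgt) ** 2, axis=1)     # target-present statistic
    thr = np.quantile(h0, 1.0 - pfa)                  # fix the false-alarm rate
    return float(np.mean(h1 > thr))

pd_1 = pd_after_integration(1)
pd_10 = pd_after_integration(10)
```

At a fixed false-alarm rate, summing ten looks of the same per-look SNR raises the detection probability substantially over a single look.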
A detection algorithm was developed using the Maximum Likelihood Adaptive Neural System, a neural network that adaptively estimates the probability density functions (pdfs) of all classes of objects in the data set. This algorithm was used to detect downed aircraft in a heavy-clutter environment in SAR images. The portion of the image under study contains hundreds of thousands of pixels; the pixel statistics are estimated, and the pixel having the lowest likelihood is labeled the target pixel. This is an unsupervised learning approach to the target detection problem because no learning data on the background or the target is used to detect the target. The approach relies on an accurate estimate of the image likelihood function in order to provide a good characterization of the scene. This approach was applied to several images collected in a variety of heavily wooded areas under different environmental conditions with excellent results. This approach was also used to provide insight into the scene phenomenology by associating specific basis functions in each likelihood characterization with particular attributes of the image.
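A minimal stand-in for the lowest-likelihood detection rule: instead of the adaptive mixture pdf estimated by the neural network, a single Gaussian background density is fitted from the image itself, and the least-likely pixel is flagged. The image values and anomaly location are invented.

```python
import numpy as np

rng = np.random.default_rng(7)

# Clutter-only log-intensity image with one injected anomalous pixel.
img = rng.normal(5.0, 1.0, (50, 50))
img[17, 33] = 12.0

# Unsupervised rule: fit the background density from the image itself and
# flag the pixel with the lowest likelihood under that density.
mu, sigma = img.mean(), img.std()
loglik = -0.5 * ((img - mu) / sigma) ** 2     # constant terms omitted
target_pixel = np.unravel_index(np.argmin(loglik), img.shape)
```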
This paper deals with optimum point target detection in a single-frame, multicolor image, such as a multispectral infrared or polarimetric synthetic aperture radar picture. Criteria for optimum filtering here include either maximum output signal-to-noise ratio or a (local, adaptive) Gaussian hypothesis test to distinguish between clutter-alone versus target-plus-clutter. The multicovariance filter makes full use of all the joint variability of the problem, in both space and frequency, in a way that generalizes both the traditional spatial matched filter and also techniques involving scalar ratios between frequency bands. This full generalization involves possibly very large matrix blocks, which describe statistical correlations in both space and frequency, not just scalar correlation coefficients between two bands at a time. Some simple conceptual models and examples are discussed which help reduce the complexity of what is potentially a very large linear algebra problem.
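The core of the multicovariance filter is a whitened matched filter w = C^{-1} s applied to the stacked space-frequency vector. A small sketch with a randomly generated (assumed) clutter covariance shows it never does worse than the plain spatial matched filter in output SNR, which follows from the Cauchy-Schwarz inequality.

```python
import numpy as np

rng = np.random.default_rng(8)

# Stack a small spatial neighborhood in two bands into one vector, so C is a
# joint space-frequency covariance (values here are random, for illustration).
dim = 8
A = rng.normal(size=(dim, dim))
C = A @ A.T + 0.1 * np.eye(dim)          # correlated clutter covariance
s = rng.normal(size=dim)                 # known, stacked target signature

w_spatial = s                            # plain spatial matched filter
w_multicov = np.linalg.solve(C, s)       # multicovariance filter w = C^{-1} s

def output_snr(w):
    """Deflection ratio |w.s|^2 / (w' C w) of a linear filter w."""
    return float((w @ s) ** 2 / (w @ C @ w))

snr_spatial = output_snr(w_spatial)
snr_multicov = output_snr(w_multicov)
```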
Temporal frame integration performance in white noise is investigated as a function of the sensor point spread function (PSF), the uncompensated sensor jitter level and the velocity bin quantization used in the algorithm implementation. The analysis is provided for both recursive and non-recursive frame integration implementations. The results show that for a given sensor PSF, jitter level and velocity bin quantization, a limit exists on how many frames can be effectively integrated assuming constant velocity hypotheses for the target. It is also shown that to obtain maximum performance from multi-frame integration, knowledge of the residual rms jitter level needs to be included as part of the optimal space-time filter. A simple nonlinear method for reducing the frame integration loss when uncompensated sensor jitter is present in the data is also presented.
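Non-recursive frame integration under quantized constant-velocity hypotheses can be sketched as shift-and-add. This toy omits the PSF and jitter effects the paper analyzes; the velocity bins are whole pixels per frame, and the frame count, target amplitude, and geometry are invented.

```python
import numpy as np

rng = np.random.default_rng(9)

n_frames, width = 16, 128
true_vel, x0, amp = 2, 10, 1.5            # pixels/frame, start pixel, amplitude
frames = rng.normal(0.0, 1.0, (n_frames, width))
for k in range(n_frames):
    frames[k, x0 + true_vel * k] += amp   # dim target drifting across frames

# Non-recursive integration: shift each frame back by the hypothesized motion
# (quantized to whole-pixel velocity bins), sum, and score the peak.
scores = {v: float(np.max(sum(np.roll(frames[k], -v * k)
                              for k in range(n_frames))))
          for v in range(-4, 5)}
best_vel = max(scores, key=scores.get)
```

Only the correctly matched velocity bin aligns the target energy, so its integrated peak dominates the mismatched hypotheses.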
Jon A. Magnuson, Mitchell Troy, Mark C. Gibney, Kenneth R. Krall, Jon W. Tindall, Bradley A. Flanders, Michael A. Kovacich, David D. McIntyre, William E. Lutjens, et al.
The Midcourse Space Experiment program will launch a satellite with several optical surveillance sensors onboard that will observe targets launched separately in dedicated and cooperative target programs. The satellite is scheduled to be launched in 1994 and the targets will be observed in several missions over the ensuing eighteen months. The Early Midcourse Target Experiments Team is developing ground based software that will process data to collect target signature phenomenology and demonstrate key surveillance system functions of the IR sensors during the early midcourse phase of a ballistic missile trajectory. Satellite sensor data will be transmitted to the ground and hosted at the Early Midcourse Data Analysis Center (EMDAC). The Early Midcourse Data Reduction and Analysis Workstation (EMDRAW) is a testbed for the algorithm chain of software modules which process the data from end to end, from Time Dependent Processing through object detection and tracking to discrimination. This paper will present the EMDRAW testbed and the baseline algorithm chain.
The Litton Data Systems Vector Neural Network (VNN) is a unique multi-scan integration algorithm currently in development. The target of interest is a low-flying cruise missile. Current tactical radar cannot detect and track the missile in ground clutter at tactically useful ranges. The VNN solves this problem by integrating the energy from multiple frames to effectively increase the target's signal-to-noise ratio. The implementation plan is addressing the APG-63 radar. Real-time results will be available by March 1994.
Classically, sensor signal processing and data processing (i.e., tracking) have been performed separately with very little interaction between the two functions. Furthermore, the signal processing and tracking algorithms are often designed separately. This paper discusses some concepts for integrating the signal processing and tracking functions for a phased array radar. Since phased array radars provide a rapid beam steering capability, proper control of the radar beam has the potential for significantly improving the tracking of multiple maneuvering targets. However, when the signal processing is accomplished separately from the tracking, optimizing the detection thresholds for targets with fluctuating radar cross sections, resolving multiple targets, and reducing the errors due to multipath and glint must be accomplished over a single radar dwell period. Integrating the signal processing with the tracking will allow many of these issues to be addressed over multiple radar dwells. The issues associated with integrating the signal processing and tracking functions are discussed with respect to tracking and data association, revisit time and waveform energy calculations, and waveform selection. The waveform selection is discussed relative to four specific examples that include a fluctuating radar cross section from an extended target, two closely spaced targets, a splitting target, and a target in the presence of radar multipath.
A scenario-based model has been developed to predict performance of infrared imaging sensors, including optimal and suboptimal processing gains from filtering and tracking. The geometry-based driver allows easy setup of physically meaningful scenarios, including a 3D extended target model (thermal emission and reflected earth, sun, and sky radiance), clutter background, and MODTRAN-based atmospherics. The sensor model accounts for optics, detector, scanning, platform jitter, pattern and sensor noise, and focal plane sampling. Integrated filter and tracker models allow for end-to-end trades and assessing the relative impact of filter and tracker processing. The filter model is a Fourier-based ESNR model with a range of filter and registration options. The tracker model is likelihood-based, not simulation or Monte Carlo, allowing quick identification of dominant effects on performance. Log-likelihood evolves with measurement updates and spreading loss from plant noise, and its statistics characterize optimal tracker performance. Log-likelihood field statistics reveal the effects of suboptimal processing, including covariance misestimation and peak strength thresholding. Various physically meaningful outputs include minimum time to confirm track, ROC curves, and noise exceedance plots. Trade studies generated from this model are presented, illustrating dependencies on scenario, sensor, and signal processing.
Sensor resolution is crucial for the success of tracking in a dense multiple target environment. The probability of resolution (PR) and the probability of correct data association (PDA) are computed as a function of (1) average object separation, (2) sensor resolution, and (3) one-sigma prediction error, for sensor measurements of dimension n = 1, 2, 3, …. The values of PR and PDA are plotted versus the average object separation normalized by the sensor resolution, parameterized by the one-sigma prediction error, for n = 1, 2, 3, …. By inspection of these curves, it is obvious that PR is less than PDA for n = 1, 2, 3, … for any values of average object separation and sensor resolution, for almost any practical value of one-sigma prediction error of interest. This means that sensor resolution is a more important issue than data association in most practical applications. Nevertheless, 99 percent of the literature on multiple target tracking has ignored the issue of resolution.
Gating is the process of reducing misassociations of new detections with a track. Optimization of gate size reduces the chance of miscorrelations, while allowing sufficient size for retention of maneuvering targets. A method is described by which gate size is optimized considering the following tracker and target characteristics: track data rate, measurement error statistics and target acceleration.
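Gate sizing of this kind is commonly formulated as a chi-square threshold on the normalized innovation. The sketch below is a standard construction, not necessarily the paper's exact optimization: measurement noise and a maneuver-inflated prediction error are folded into the innovation covariance, and the gate size follows from the desired retention probability. All numeric values are invented.

```python
import numpy as np
from scipy.stats import chi2

def gate_threshold(p_gate, dim):
    """Gate size g so that a true-target innovation is retained with
    probability p_gate (normalized distance is chi-square with dim dof)."""
    return float(chi2.ppf(p_gate, df=dim))

def in_gate(z, z_pred, S, g):
    nu = z - z_pred                                  # innovation
    return float(nu @ np.linalg.solve(S, nu)) <= g   # normalized distance test

g2 = gate_threshold(0.99, dim=2)          # 2-D position measurement, 99% gate

# Innovation covariance = measurement noise + prediction error, the latter
# inflated for possible target acceleration over the revisit interval
# (1 m sensor noise, 2 m inflated prediction error; numbers invented).
S = np.diag([1.0 ** 2 + 2.0 ** 2, 1.0 ** 2 + 2.0 ** 2])
accepted = in_gate(np.array([3.0, 1.0]), np.zeros(2), S, g2)
```

Raising the data rate or tightening the measurement statistics shrinks S, and with it the physical gate volume, directly reducing the miscorrelation chance.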
This paper presents a novel Kalman filter for track maintenance in multitarget tracking using thresholded sensor data at high target/clutter densities and low detection levels. The filter is robust against tracking errors induced by crossing tracks, clutter and missed detections and the computational complexity of the filter scales well with problem size. There are two key features that differentiate this approach from earlier work. First, in order to enhance tracking of close tracks, the filter explicitly models the error correlations that occur between such target pairs. These error correlations arise due to the measurement to track association ambiguity present when target separations are comparable to the measurement errors in the sensors. Second, in order to reduce the computational load, the filter exploits techniques from statistical field theory to simplify the combinatorial complexity of measurement to track association. This is accomplished by developing a mean-field approximation to the summation over all associations.
In a multi-target multi-measurement environment, knowledge of the measurement-to-track assignments is typically unavailable to the tracking algorithm. In this paper, a strictly probabilistic approach to the measurement-to-track assignment problem is taken. Measurements are not assigned to tracks as in traditional multi-hypothesis tracking (MHT) algorithms; instead, the probability that each measurement belongs to each track is estimated using a maximum likelihood algorithm derived by the method of Expectation-Maximization. These measurement-to-track probability estimates are intrinsic to the multi-target tracker called the probabilistic multi-hypothesis tracking (PMHT) algorithm. Unlike MHT algorithms, the PMHT algorithm does not maintain explicit hypothesis lists. The PMHT algorithm is computationally practical because it requires neither enumeration of measurement-to-track assignments nor pruning.
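The E-step/M-step structure behind the PMHT measurement-to-track probabilities can be sketched on a static 1-D toy problem: two fixed "tracks" with no dynamics, known measurement variance, and invented numbers. The E-step computes soft assignment probabilities rather than enumerating hypotheses; the M-step re-estimates each track from the responsibility-weighted measurements.

```python
import numpy as np

rng = np.random.default_rng(10)

# Two static "tracks" at unknown positions; measurements arrive unlabeled.
truth = np.array([0.0, 5.0])
meas = np.concatenate([rng.normal(pos, 0.7, 60) for pos in truth])

est = np.array([1.0, 4.0])               # crude initial track states
var = 0.7 ** 2
for _ in range(30):
    # E-step: soft measurement-to-track probabilities (no hard assignment,
    # no hypothesis enumeration or pruning).
    lik = np.exp(-0.5 * (meas[:, None] - est[None, :]) ** 2 / var)
    w = lik / lik.sum(axis=1, keepdims=True)
    # M-step: update each track as the responsibility-weighted mean.
    est = (w * meas[:, None]).sum(axis=0) / w.sum(axis=0)

est = np.sort(est)
```

The per-iteration cost is linear in (measurements × tracks), which is the source of the computational practicality claimed above.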
The future need to detect and track low-observable targets against clutter backgrounds means that the track processor will be required to cope with much higher false detection rates than can be handled by traditional `zero-scan' tracking algorithms. We have developed a Bi-Level MHT algorithm which is capable of tracking in such a demanding environment. This paper provides an update to this previously reported algorithm, and extends previously reported single-target performance results to the case of two crossing targets. Specifically, we present Monte Carlo simulation results characterizing the ability of the algorithm to hold onto two target tracks as they cross, under a range of false detection densities. We will also assess the loss in performance due to target interaction, as well as the gain in performance obtained from propagating multiple hypotheses. Finally, we will give an indication of computational complexity by measuring various operation counts as a function of false detection density.
In this paper, it is shown that multiple target tracking without the ghosting problem can be achieved by implementing three SME filters in parallel, each of which is defined in terms of 2N functionals. The three SME filters are defined so that the first one tracks the (x,y) coordinate positions and velocities, the second one tracks the (x,z) coordinate positions and velocities, and the third filter tracks the (y,z) coordinate positions and velocities. A novel strategy is proposed for combining the outputs of the three SME filters to yield the estimates of the positions and velocities of the targets in 3D space. The performance of the resulting parallel SME filter is investigated via computer simulations.
The measurement that is `closest' to the predicted target measurement is known as the `nearest neighbor' measurement in target tracking. A common method currently in wide use for tracking in clutter is the so-called nearest neighbor filter, which uses only the nearest neighbor measurement as if it were the true one. This paper presents a technique for predicting the performance of the nearest neighbor filter without recourse to expensive Monte Carlo simulations. This technique can quantify the dynamic process of tracking divergence as well as the steady state performance. The technique is based on a general approach to the performance prediction of algorithms with both continuous and discrete uncertainties developed recently by the authors.
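For reference, the filter being analyzed performs a gated pick-the-closest Kalman update; a minimal sketch (the gate size, models, and all numbers are illustrative assumptions, not the paper's setup):

```python
import numpy as np

def nn_filter_update(x_pred, P_pred, measurements, H, R, gate=9.0):
    """Nearest-neighbor measurement update (illustrative sketch).

    Picks the measurement with the smallest normalized innovation squared
    (Mahalanobis distance) inside the gate and applies a standard Kalman
    update with it; returns the prediction unchanged if nothing gates.
    """
    S = H @ P_pred @ H.T + R                      # innovation covariance
    S_inv = np.linalg.inv(S)
    best, best_d2 = None, gate
    for z in measurements:
        v = z - H @ x_pred                        # innovation
        d2 = float(v @ S_inv @ v)
        if d2 < best_d2:
            best, best_d2 = v, d2
    if best is None:                              # no measurement gated
        return x_pred, P_pred
    K = P_pred @ H.T @ S_inv                      # Kalman gain
    x_upd = x_pred + K @ best
    P_upd = (np.eye(len(x_pred)) - K @ H) @ P_pred
    return x_upd, P_upd
```

The hard choice of a single measurement is exactly what makes the filter's error behavior (and possible divergence in clutter) worth predicting analytically.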
This paper deals with the global track initiation problem, and our purpose is to apply an efficient method called the Multiple Hypothesis Filter (MHF) to solve it for radar applications. This original measurement-oriented approach has been implemented in an object-oriented way and applied to simulated and real data. The paper presents some results of MHF performance evaluation, based on the determination of various track initiation criteria (initiation time, initiation lapse, false track rate, ...).
Since the advent of target tracking systems employing a diverse mixture of sensors, there has been increasing recognition by air defense system planners and other military system analysts of the need to integrate these tracks so that a clear air picture can be obtained in a command center. A popular methodology to achieve this goal is to perform track-to-track fusion, which performs track-to-track association as well as kinematic state vector fusion. This paper seeks to answer analytically the extent of improvement achievable by means of kinematic state vector fusion when the tracks are obtained from dissimilar sensors (e.g., Radar/ESM/IRST/IFF). It is well known that evaluation of the performance of state vector fusion algorithms at steady state must take into account the effects of cross-correlation between eligible tracks introduced by the input noise, which, unfortunately, is often neglected because of added computational complexity. In this paper, an expression for the steady-state cross-covariance matrix for a 2D state vector track-to-track fusion is obtained. This matrix is shown to be a function of the parameters of the Kalman filters associated with the candidate tracks being fused. Conditions for positive definiteness of the cross-covariance matrix have been derived and the effect of positive definiteness on performance of track-to-track fusion is also discussed.
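The fusion step that consumes such a cross-covariance is the standard Bar-Shalom/Campo combination of two correlated track estimates; a minimal sketch (the numbers in the usage below are illustrative, and this is the textbook rule rather than the paper's derivation):

```python
import numpy as np

def fuse_tracks(xi, Pi, xj, Pj, Pij):
    """Fuse two track estimates accounting for their cross-covariance.

    xi, Pi  : estimate and covariance from track i
    xj, Pj  : estimate and covariance from track j
    Pij     : cross-covariance between the two track errors (Pji = Pij.T)
    """
    Pji = Pij.T
    D = Pi + Pj - Pij - Pji                       # covariance of (xj - xi)
    W = (Pi - Pij) @ np.linalg.inv(D)             # fusion gain
    xf = xi + W @ (xj - xi)                       # fused state
    Pf = Pi - W @ (Pi - Pji)                      # fused covariance
    return xf, Pf
```

Setting Pij = 0 recovers the naive independent-tracks rule; a positive definite cross-covariance shrinks the gain, which is exactly why neglecting it overstates the fusion benefit.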
A fusion algorithm is presented for a multisensor tracking system, in which the local trackers are N-scan data association filters. Previously, a fusion algorithm was given for the case where the local trackers are JPDA filters. Here, a fusion algorithm is presented for the more general case of local N-scan data association filters, of which the JPDA is a special case (N equals 0). The fusion equations consist of a simultaneous updating of the global hypothesis probabilities, and conditional global target state estimates. Two communication schemes between the local trackers and global processor are considered. A unidirectional communication scheme is examined in which the local trackers send the updated hypothesis probabilities and conditional target state estimates to the global processor; the local nodes then continue to track without knowledge of the global estimates. A bidirectional communication scheme is also examined, in which the global processor additionally feeds the fused global estimates back to the local trackers.
Multisensor tracking and data fusion deals with combining data from various sources to arrive at an accurate assessment of the situation. Difficulties in performing multisensor tracking and fusion include not only ambiguous data, but also disparate data sources. In this paper, a software package called FUSEDAT which deals with tracking and data association with multiple sensors is described. FUSEDAT is implemented in the MATLAB environment and is portable to different platforms that support MATLAB. This software package provides a simple yet flexible environment for multitarget multisensor tracking and fusion. It is intended for rapid system prototyping and performance evaluation.
An algorithm is presented for the on-line relative alignment of two 3D sensors (where a 3D sensor is one that measures range, bearing, and elevation) using common targets that are tracked by both sensors. The target data reported by the sensors are usually not time coincident and, consequently, the estimates from the tracking filters for the sensors will be at different times. Since the alignment algorithm requires time-coincident target data from the sensors, a one-step predictor is used to time translate the track estimates from one of the sensors to the times of the track estimates from the other sensor. A one-step fixed-lag smoothing algorithm is then used to improve the accuracy of the predicted estimate by processing the measurement one step ahead. The time-coincident track estimates are passed to an alignment algorithm that estimates the alignment errors and then uses these estimates to compensate for the effects of the alignment errors in the multi-sensor data. For illustrative purposes, simulations will be used to compare the performance of an alignment algorithm based on the one-step predictor and one-step fixed-lag smoother to one based solely on the one-step predictor.
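The time-translation step can be sketched for a constant-velocity state (the process-noise model and the parameter q are illustrative assumptions, not the paper's):

```python
import numpy as np

def predict_to_time(x, P, t_from, t_to, q=0.1):
    """Translate a constant-velocity track estimate to another sensor's
    report time (a minimal one-step-predictor sketch; the state is
    [position, velocity])."""
    dt = t_to - t_from
    F = np.array([[1.0, dt], [0.0, 1.0]])         # CV transition matrix
    Q = q * np.array([[dt**3 / 3, dt**2 / 2],     # white-accel process noise
                      [dt**2 / 2, dt]])
    return F @ x, F @ P @ F.T + Q
```

The covariance inflation by Q is what the fixed-lag smoother subsequently works to reduce by incorporating the next measurement.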
The R&D group at Paramax Systems Canada is working on a demonstration model of a multi-sensor data fusion (MSDF) implementation for the Canadian Patrol Frigate. The conditions are made very realistic by the use of the Software Test and Development Facility situated on the premises in Montreal. The development is done on a SUN SPARC-10. The inputs are read from a Shipboard Integrated Processing and Display System bus via a monitor node and an interface card. The program consists of multiple separate UNIX processes communicating via the Inter Process Communication protocol. For evaluation purposes, two databases are updated corresponding to the MSDF and the usual tracking. The program fuses the tracks provided by the sensors as if they were contacts. The paper describes the overall MSDF design as well as the tracking and identification algorithms used in the project. The necessary compromises that were made are presented with explanations. A discussion of the expected results is included. The paper ends with the improvements that are already foreseen.
One technique for multisensor tracking forms sensor level tracks using measurements received from the individual sensors. Then, the sensor level tracks are combined into a central level trackfile by performing multisensor track-to-track association and track fusion. Due to differences in sensor resolution, detection capability and coverage, there may be targets for which a track is formed by one sensor but not by the other. Also, tracks formed on the same target by multiple sensors may differ due to multisensor misalignment (or bias) error. This paper addresses these problems by developing a method to perform multisensor track-to-track association under the conditions of intersensor bias and missing track data. An augmentation to the association matrix is developed to account for the fact that each sensor may not contain a full set of tracks for all targets in the field of view. An iterative approach is used to estimate and correct for the bias error. Monte Carlo simulation results are presented to illustrate the methods and close correspondence is found between these results and the theoretical probability of correct association.
Within an Air Traffic Control (ATC) context many applications exist for off-line trajectory reconstruction, such as evaluation of tracking or navigation systems and accident/incident investigations. Since the introduction of Jump-Linear models and associated filtering techniques enabled significant improvements in tracking small targets, with useful applications to ATC, one may expect that parallel developments in smoothing for Jump-Linear systems will enable similarly useful applications to aircraft trajectory reconstruction. The aim of this paper is to evaluate the validity of this expectation.
The design of effective pointing-and-tracking systems is particularly difficult when the target is capable of evasive maneuvers. Traditional approaches employed to detect a change in maneuver acceleration rely on observing changes in the residual error process, which introduces a time lag between the onset of a maneuver and its detection. This problem is exacerbated when only passive bearing measurements are available because measurement localization errors are infinite in one dimension. Earlier studies using active (range and bearing) measurement models have suggested the use of an augmenting imaging sensor to determine target orientation, from which the likely direction of a maneuver acceleration can be inferred. This paper extends these earlier results to the case of passive (bearing only) measurements, again using an augmenting imaging sensor to estimate target orientation. This sequence of orientation measurements is used to infer the onset and direction of a maneuver acceleration. As in these earlier studies, orientation measurements provide significant value in localizing and tracking the target during a maneuver.
A framework of multiresolutional target tracking is established in this paper. The wavelet transform is employed in constructing multiresolutional data and model structures. Multiresolutional tracking is performed over the multiresolutional data and model structures in a top-down fashion. The main advantages of multiresolutional target tracking include: computational efficiency, performance robustness and algorithm flexibility.
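A multiresolutional data structure of the kind described can be built with a wavelet transform; a minimal sketch using the Haar wavelet (an illustrative choice of wavelet, not necessarily the paper's):

```python
import numpy as np

def haar_decompose(signal, levels):
    """Multiresolutional decomposition of a measurement sequence with the
    Haar wavelet. Returns the coarsest approximation and the per-level
    detail coefficients (finest first)."""
    approx = np.asarray(signal, dtype=float)
    details = []
    for _ in range(levels):
        a = (approx[0::2] + approx[1::2]) / np.sqrt(2)   # lowpass half-band
        d = (approx[0::2] - approx[1::2]) / np.sqrt(2)   # highpass half-band
        details.append(d)
        approx = a
    return approx, details

def haar_reconstruct(approx, details):
    """Inverse transform: rebuild the signal from coarse to fine,
    mirroring the top-down processing order."""
    a = approx
    for d in reversed(details):
        out = np.empty(2 * len(a))
        out[0::2] = (a + d) / np.sqrt(2)
        out[1::2] = (a - d) / np.sqrt(2)
        a = out
    return a
```

Coarse levels give cheap, robust track updates; finer levels refine them, which is the top-down flow the framework exploits.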
This paper presents an algorithm to initiate tracks of a ballistic missile in the initial exoatmospheric phase, using line of sight measurements from one or more moving platforms (typically satellites). The major feature of this problem is the poor target motion observability which results in a very ill-conditioned estimation problem.
The advent of parallel computing technology has opened up opportunities for new approaches to tracking systems. This paper describes a multiple-model system which is showing promising results when applied to the problem of end-to-end tracking of ballistic missiles.
A new exact recursive filter is derived for nonlinear estimation problems. The new nonlinear theory includes the Kalman filter as a special case. This filter is practical to implement in real-time applications, and it has a computational complexity that is comparable to the Kalman filter. The measurements are made in discrete time, but the random process to be estimated evolves in continuous time.
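In the linear-Gaussian special case, the continuous-time-process / discrete-time-measurement structure reduces to the continuous-discrete Kalman filter; a sketch of that structure (Euler integration of the moment equations is an illustrative simplification, and this is the special case rather than the paper's exact nonlinear filter):

```python
import numpy as np

def cd_kalman_step(x, P, F, Qc, H, R, z, dt, substeps=100):
    """One cycle of a continuous-discrete Kalman filter: the state evolves
    in continuous time (dx = F x dt + noise with spectral density Qc) and
    a measurement z arrives at the end of the interval dt."""
    # Propagate mean and covariance by Euler integration of the moment
    # ODEs:  xdot = F x,   Pdot = F P + P F' + Qc.
    h = dt / substeps
    for _ in range(substeps):
        x = x + h * (F @ x)
        P = P + h * (F @ P + P @ F.T + Qc)
    # Discrete-time measurement update (standard Kalman form).
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P
```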
Automatic target tracking systems which are in a loop with imaging sensors (FLIR, TV) are being more and more frequently integrated as components in military weapon projects. At present, there is no agreed international standard for the assessment of such automatic tracking systems. Current procedures involve costly field trials or lengthy laboratory analysis. This research, which has been conducted by the trinational (F, G, UK) Joint Trials Tracker Subgroup of the Armoured Vehicles Electronic Working Group, was aimed at developing and proving the procedures and instrumentation required to produce a laboratory-based assessment procedure for automatic trackers or tracking systems.
In this paper we present a likelihood technique for determining candidate target detections to pass to a tracker over successive temporal intervals. In a representative situation sensor data are available from each interval as matched-filter output sampled at discrete position-velocity state hypotheses. A likelihood ratio for an arbitrary target hypothesis from the continuous state domain can be constructed from the sampled filter output, and we seek local maxima in this likelihood-ratio field as the candidate detections. We obtain a readily implemented algorithm which closely follows this optimal prescription by limiting the sample points in the likelihood construction to the immediate vicinity of a discrete local maximum in the filter output.
We have applied a feed-forward neural network to the task of resolving closely-spaced objects (CSO). Traditional algorithmic methods are computationally expensive or numerically unstable, and techniques based on ad hoc rules are too subjective. Our approach relies on the principle that a sufficiently complex neural network can approximate an arbitrary function to an arbitrary degree of accuracy. We train a neural network to approximate the multi-dimensional function that maps from detector signal space to CSO parameter space, using an aggressive Hessian-based training algorithm and training set examples synthesized from the known inverse function. We find two important empirical results: we can simultaneously identify when the training set size is sufficient to adequately represent the mapping function, and when the network has achieved optimum generalization capability, for a given degree of network complexity. Thus we can predict the network and training set sizes necessary to achieve a given mission performance. Finally, we show how such a network can be used to provide sub-pixel resolution capabilities for missions observing both single objects and CSOs, as part of a real-time 2D sensor processor.
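Synthesizing training examples from the known forward (detector) function might look like the following 1-D sketch (the pixel count, PSF width, and parameter ranges are all illustrative assumptions, not the mission's values):

```python
import numpy as np

def synthesize_cso_examples(n, n_pix=16, psf_sigma=1.5, seed=0):
    """Synthesize training examples for a CSO-resolving network: two point
    sources blurred by a Gaussian PSF on a 1-D detector.

    Returns (signals, params): detector outputs (n, n_pix) and the
    generating [pos1, amp1, pos2, amp2] rows (n, 4) that the network
    would be trained to recover from the signals.
    """
    rng = np.random.default_rng(seed)
    pix = np.arange(n_pix)
    params = np.column_stack([
        rng.uniform(4, 8, n),        # source 1 position (pixels)
        rng.uniform(0.5, 1.5, n),    # source 1 amplitude
        rng.uniform(8, 12, n),       # source 2 position
        rng.uniform(0.5, 1.5, n),    # source 2 amplitude
    ])
    signals = np.empty((n, n_pix))
    for i, (p1, a1, p2, a2) in enumerate(params):
        blur = lambda p, a: a * np.exp(-0.5 * ((pix - p) / psf_sigma) ** 2)
        signals[i] = blur(p1, a1) + blur(p2, a2)   # overlapping blurs
    return signals, params
```

Because the forward model is known exactly, arbitrarily large labeled training sets can be generated, which is what makes the training-set-size experiments described above possible.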
This paper analyzes a new method to detect targets. The new method, called `super noncoherent integration' (SNCI), can improve overall detection performance by typically 5 dB to 10 dB relative to conventional noncoherent integration. A simple back-of-the-envelope formula is derived which quantifies the performance improvement of SNCI. Conventional noncoherent integration (CNCI) uses only amplitude measurements to distinguish targets from noise or clutter. In contrast, SNCI uses amplitude data in addition to monopulse data, quadrature monopulse data, and range and Doppler data over a sequence of N transmitted radar waveforms.
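The CNCI baseline that SNCI is compared against can be sketched as follows (the steady target in unit-variance complex noise and the threshold value are assumptions for illustration; SNCI's extra monopulse/range/Doppler data are not modeled here):

```python
import numpy as np

def noncoherent_statistic(amplitudes):
    """Conventional noncoherent integration: sum the squared amplitudes
    of N returns into a single detection statistic."""
    a = np.asarray(amplitudes, dtype=float)
    return float((a ** 2).sum())

def detection_rate(snr, n_pulses, threshold, trials=20000, seed=1):
    """Monte Carlo estimate of the probability that the integrated
    statistic of a steady target in unit-variance complex Gaussian noise
    exceeds the threshold (snr = 0 gives the false-alarm rate)."""
    rng = np.random.default_rng(seed)
    s = np.sqrt(snr)                                   # signal amplitude
    noise = (rng.normal(size=(trials, n_pulses)) +
             1j * rng.normal(size=(trials, n_pulses))) / np.sqrt(2)
    amp = np.abs(s + noise)                            # envelope per pulse
    stat = (amp ** 2).sum(axis=1)                      # CNCI statistic
    return float((stat > threshold).mean())
```

Running `detection_rate` at snr = 0 versus snr > 0 shows the separation between noise-only and target-present statistics that integration buys.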
This paper derives a likelihood-ratio detector for bivariate time-series data having target- and correlated non-target-bearing components with randomly distributed, single-cell spike noise in one or both channels. After defining the detection problem explicitly, the detector is constructed as an analytic expression. A specific implementation of the approach is presented using a bivariate first-order autoregressive model for the correlation structure of the data. A computer model for the detector is constructed, and results using simulated data verify the usefulness of the approach in removing strong spike noise without damage to the target signal.
If members of a suite of sensors from which fusion is to be carried out are not co-located, it is unreasonable to assume that they share a common resolution cell grid; this is generally ignored in the data fusion community. In this paper we explore the effects of such `noncoincidence', and we find that what at first seems to be a problem can in fact be exploited. The idea is that a target is known to be confined to an intersection of overlapping resolution cells, and this overlap is generally small.
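The confinement-to-an-intersection idea can be sketched with axis-aligned cells (real noncoincident sensors would generally give rotated cells; axis-aligned boxes are an illustrative simplification):

```python
def cell_intersection(cell_a, cell_b):
    """Intersect two axis-aligned resolution cells given as
    (xmin, ymin, xmax, ymax) tuples.

    Returns (box, area): the intersection box and its area, or
    (None, 0.0) if the cells are disjoint. The intersection is the
    reduced region the target is confined to.
    """
    xmin = max(cell_a[0], cell_b[0])
    ymin = max(cell_a[1], cell_b[1])
    xmax = min(cell_a[2], cell_b[2])
    ymax = min(cell_a[3], cell_b[3])
    if xmax <= xmin or ymax <= ymin:
        return None, 0.0                    # no overlap: no common target
    return (xmin, ymin, xmax, ymax), (xmax - xmin) * (ymax - ymin)
```

Since the intersection area is never larger than either cell, and typically much smaller, noncoincident grids localize the target better than either sensor alone.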
Analytic expressions are derived based on tracker characteristics that determine the maximum scan frame time required to track a maneuvering target. The tracker characteristics chosen are: filter gain, measurement correlation gate size, target acceleration and measurement error variance.
This paper deals with the fusing of data from radar and imaging sensors at the dynamic level by generating a common state and observation model that relates the measurements to the common state vector. A key point which is taken into account in the paper is the fact that a radar provides center-of-reflection measurements that may not correspond very well to the true target center, and as the target aspect changes, the center-of-reflection may vary about the true target center. For small targets at long range, this effect may be negligible, but for semi-extended or extended objects, the mismatch must be considered. In the approach developed in the paper, the radar center-of-reflection measurements are modeled in terms of random perturbations of unknown constant off-sets from the true target center in spherical coordinates. Estimates of these off-sets are used in an extended Kalman filter approach to target tracking using radar and imaging sensor data. The performance of the resulting tracking filter is evaluated via computer simulations.
A sensor-level hard fusion system is currently being developed for the Canadian Patrol Frigate (CPF) within the simulation environment provided by the Combat System Test and Support Facility at Paramax. A number of operational criteria have been selected to evaluate and to compare the performance of the CPF's Command and Control System versus the data fusion system. The criteria operate on tracking and on identification in terms of time efficiency and precision. Since the implementation is done on a simulation environment, the target's ground truth is available (off-line in the current implementation). Some criteria are computed and displayed in real-time and the criteria needing access to ground truth are evaluated off-line. A subset of the performance evaluation criteria are presented and discussed. Implementation considerations are also presented.
A distributed parameter estimation algorithm is presented for a general nonlinear measurement model with additive Gaussian noise. We show that the Bayes-closed estimation algorithm developed by Kulhavy, when extended to the multisensor case, leads to a linear fusion rule, regardless of the form of the local a posteriori densities. Specifically, the Kulhavy algorithm generates a set of reduced sufficient statistics representing the local sensor densities, which are simply added and subtracted at the global processor to obtain optimum fusion. We discuss various approximations to the Bayes-closed algorithm which lead to a practical parameter estimator for the nonlinear measurement model, and apply such an approximate technique to the bearings-only tracking problem. The performance of the distributed tracker is compared to an alternative algorithm based on the extended Kalman filter (EKF) implemented in modified polar coordinates. It is shown that the Bayes-closed estimator does not diverge in the sense of an ordinary EKF, and hence the Bayes-closed technique can be employed in both a unidirectional and bidirectional transmission mode.
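In the Gaussian case the add-and-subtract structure of such a linear fusion rule takes the familiar information-filter form; a sketch (this is the standard identity, used only to illustrate the structure, not the Bayes-closed algorithm itself):

```python
import numpy as np

def fuse_information(local_infos, local_info_states, prior_info, prior_info_state):
    """Linear fusion in Gaussian information form: each local posterior is
    summarized by an information matrix Y = P^-1 and information state
    y = P^-1 x. The global processor adds the local statistics and
    subtracts the shared prior (n - 1) times so it is not double-counted.
    """
    n = len(local_infos)
    Y = sum(local_infos) - (n - 1) * prior_info
    y = sum(local_info_states) - (n - 1) * prior_info_state
    P = np.linalg.inv(Y)                    # fused covariance
    return P @ y, P                         # fused state and covariance
```

The fusion is a pure addition/subtraction of sufficient statistics: no measurement is revisited, which is the property the abstract highlights.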
Since phased array radars have the ability to perform adaptive sampling of the target trajectory by radar beam positioning, proper control of the radar has the potential for significantly improving many aspects associated with the tracking of multiple maneuvering targets. When supported by additional sensors, the sampling of the phased array radar can be reduced significantly. However, controlling the revisit times of the phased array radar becomes more difficult. The technique proposed in this paper uses the Interacting Multiple Model algorithm to track maneuvering targets and control the radar revisit time and pointing when the radar is supported by a Precision Electronic Support Measures (PESM) sensor. Algorithms for tracking with multiple sensors, computing the radar revisit time, and pointing the radar are presented in this paper. Performance comparisons are given with the radar using adaptive data rates and the PESM providing measurements at regular intervals and intermittently.
An algorithm for the automatic formation of tracks is developed for maneuvering targets in cluttered environments. This track formation algorithm consists of Integrated Probabilistic Data Association Filters (IPDAFs) in an Interacting Multiple Model (IMM) configuration and is referred to as the IMM-IPDAF algorithm. The IMM portion of the IMM-IPDAF consists of several filters based on different dynamical models to handle target maneuvers. Each of the filters is an IPDAF, which deals with the problem of track existence in the presence of clutter. Although the primary purpose of this paper is the track formation problem, the IMM-IPDAF can also be used for the maintenance of existing tracks and the termination of tracks for targets that disappear. For illustrative purposes, simulations are used to compare the performance of the IMM-IPDAF algorithm with other track formation algorithms.
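The track-existence mechanism inside an IPDAF can be sketched as a scalar recursion: existence is predicted through a two-state Markov chain and then updated by the scan's detection and gating evidence. The function name and the scalar `delta` (which aggregates that evidence; e.g. delta = P_D * P_G when no measurement falls in the gate) are illustrative, and this is a minimal sketch rather than the full IMM-IPDAF recursion.

```python
def ipdaf_existence_update(p_exist, delta, p11=0.98, p21=0.02):
    """One scan of the IPDA track-existence probability recursion.

    Prediction uses a two-state Markov chain (exists / does not exist)
    with transition probabilities p11 and p21; the update scales the
    predicted probability by (1 - delta), where delta summarizes the
    scan's detection and gating statistics.
    """
    pred = p11 * p_exist + p21 * (1.0 - p_exist)   # Markov prediction
    return (1.0 - delta) * pred / (1.0 - delta * pred)
```

Thresholding this probability gives the confirmation and termination decisions the abstract mentions: existence drifts down when scans pass without gated detections and rises when they occur.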
The detection and tracking of a moving point target in a space-based background with low SNR is analyzed in this paper. An efficient method for detecting low-speed targets is presented. The method uses multi-frame accumulation as the main operation to improve the output SNR, and it introduces candidate target records for target matching and recognition. The method lends itself to hardware implementation and meets the demands of real-time processing and low power consumption.
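The accumulation step can be sketched directly: summing K registered frames grows a persistent target linearly while zero-mean noise grows only as sqrt(K), so the output SNR improves by roughly sqrt(K). Below is a minimal version with hypothetical names and a simple global threshold standing in for the paper's candidate-record matching logic.

```python
import numpy as np

def accumulate_and_detect(frames, k_sigma=5.0):
    """Sum K frames, then threshold at mean + k_sigma * std.

    For a static or slowly moving point target, the summed target
    amplitude grows as K while zero-mean noise grows as sqrt(K),
    so accumulation raises the output SNR by about sqrt(K).
    """
    acc = np.sum(np.asarray(frames, dtype=float), axis=0)
    thr = acc.mean() + k_sigma * acc.std()
    ys, xs = np.nonzero(acc > thr)
    return list(zip(ys.tolist(), xs.tolist()))
```

Because the operations are a running sum and a threshold, the step maps naturally onto the simple hardware the abstract has in mind.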
This paper deals with the detection of dim point targets in infrared images. Dim point target detection has long been a difficult problem in information processing. Researchers have proposed many effective methods; rather than reviewing them, this paper introduces a new one. Since the difference method has obtained good results in 1-D signal processing, this paper applies it to 2-D signal processing, namely the detection of dim point targets in infrared images of low SNR.
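As an illustration of what a 2-D difference operator does for a sub-pixel target, the sketch below subtracts the 4-neighbour mean (a discrete Laplacian), which cancels slowly varying background while leaving a single-pixel target intact. This is a generic sketch of difference filtering, not the paper's specific operator, and all names are illustrative.

```python
import numpy as np

def difference_detect(img, k_sigma=4.0):
    """2-D second-difference (discrete Laplacian) point-target enhancer.

    A dim point target occupies roughly one pixel, so subtracting the
    mean of its four neighbours cancels slowly varying background and
    sensor shading while preserving the target's amplitude.
    """
    d = img.astype(float)
    resp = d - 0.25 * (np.roll(d, 1, 0) + np.roll(d, -1, 0)
                       + np.roll(d, 1, 1) + np.roll(d, -1, 1))
    thr = resp.mean() + k_sigma * resp.std()
    return resp, np.argwhere(resp > thr)
```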
The Fourier synthesis method of waveform generation for ultra-wideband (UWB) radar overcomes several disadvantages of traditional impulse generation. In this method, a signal is generated in the frequency domain by summing the relatively low-power harmonics of the desired signal instead of generating it with a single high-power source in the time domain. In this paper, waveform generation is extended beyond simple baseband periodic pulses. A method for generating complex amplitude-coded baseband waveforms with accurate control of pulse shape and pulse repetition interval is described. Such waveforms allow pulse compression and coherent integration of UWB signals, further reducing the need for the very large power sources that a conventional impulse radar implementation may require. The paper also presents a UWB radar concept that incorporates frequency-domain waveform generation. Signal processing issues for target detection are also addressed.
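The core idea can be sketched in a few lines: summing n equal-amplitude harmonics of a fundamental f0 produces a periodic pulse train (a sampled Dirichlet kernel) whose peak amplitude is n times that of any single component. This is a minimal illustration under equal-amplitude, zero-phase assumptions, not the paper's amplitude-coded waveform generator.

```python
import numpy as np

def fourier_synthesize(n_harmonics, f0, t):
    """Periodic baseband pulse train from n low-power harmonics of f0.

    Equal-amplitude cosines add coherently at t = k / f0, producing a
    narrow pulse of peak amplitude n from components of amplitude 1.
    The pulse repetition interval is 1/f0 and the occupied bandwidth
    is roughly n * f0; shaping the harmonic amplitudes and phases is
    what gives control over pulse shape and coding.
    """
    k = np.arange(1, n_harmonics + 1)[:, None]
    return np.cos(2.0 * np.pi * k * f0 * t).sum(axis=0)
```

This is why the method sidesteps a single high-power impulse source: each harmonic generator only needs 1/n of the peak amplitude.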
In this paper, an approach to the automatic detection of vehicles at long range using sequences of thermal infrared images is presented. The vehicles in the sequences can be either moving or stationary, and the sensor can also be mounted on a moving platform. The target area in the images is very small, typically less than 10 pixels on target. The proposed method consists of two independent parts. The first part searches for possible targets in individual images and then merges the results over a subsequence of images. The second part of the algorithm specifically focuses on finding moving objects in the scene.
Element Quality Analysis (EQA) is the evaluation of the accuracy with which an orbital element set describes the orbit of a satellite. This paper proposes practical methods for evaluating element quality and describes the benefits of improved EQA. The paper first discusses the need for and applications of EQA. Past and current operational methods for EQA are then considered. The main portion of the paper describes alternative methods for EQA.
A computer aided analysis method is presented to predict the expected number of false tracks. Performance prediction of multiple target tracking is challenging because most aspects of performance do not readily lend themselves to analysis. In this paper a Markov chain analysis is used to predict the expected number of false tracks.
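As a toy version of such an analysis, the sketch below models a tentative false track as a Markov chain over the number of gated clutter hits it has collected, with confirmation as an absorbing state; the expected number of confirmed false tracks is then the initiator count times the absorption probability. The names, the m-hits-to-confirm logic, and the one-clutter-hit-per-scan model are illustrative simplifications, not the paper's chain.

```python
import numpy as np

def expected_false_tracks(n_init, p_hit, m_confirm, n_scans):
    """Expected confirmed false tracks via a Markov chain.

    State = number of clutter hits a tentative track has collected,
    capped at m_confirm (an absorbing 'confirmed' state).  On each
    scan a clutter point falls in the gate with probability p_hit.
    """
    s = m_confirm + 1
    T = np.zeros((s, s))
    for i in range(m_confirm):
        T[i, i] = 1.0 - p_hit        # no gated clutter this scan
        T[i, i + 1] = p_hit          # one more hit
    T[m_confirm, m_confirm] = 1.0    # confirmed: absorbing
    dist = np.zeros(s)
    dist[0] = 1.0                    # every track starts with 0 hits
    dist = dist @ np.linalg.matrix_power(T, n_scans)
    return n_init * dist[m_confirm]
```

The appeal of the Markov formulation is that richer confirmation logic (M-of-N windows, deletion states) only changes the transition matrix, not the analysis.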
The choice of gate size depends on the type of data association algorithm used and the optimization criteria. A well-known prior analysis presented a practical gate size for tracking approaches that sequentially select the most probable hypothesis. This paper revisits that analysis and presents an alternate approach. The intent of the gate sizing in this paper is to design the largest gate that excludes any observation whose hypothesis probability is less than the null hypothesis probability. A study of the two approaches to sizing a gate reveals a dilemma and inconsistencies that, under further scrutiny, are resolved for reasonable conditions by decomposing the null hypothesis into two hypotheses.
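One widely cited form of this criterion (e.g. in Blackman's treatment) sets the gate threshold where the detection-hypothesis likelihood equals the null alternative. The sketch below states that formula under the usual Gaussian-innovation and uniform-clutter assumptions; it illustrates the general idea rather than this paper's decomposed-null analysis.

```python
import numpy as np

def gate_threshold(p_d, clutter_density, S):
    """Chi-square gate threshold G for maximum-gate sizing.

    Under Gaussian innovations with covariance S and uniform clutter
    of density beta, a measurement at normalized distance d^2 <= G has
    hypothesis probability at least that of the null (no valid
    measurement) hypothesis, where
        G = 2 ln( P_D / ((1 - P_D) * beta * sqrt(det(2*pi*S))) ).
    """
    S = np.asarray(S, dtype=float)
    den = (1.0 - p_d) * clutter_density * np.sqrt(np.linalg.det(2.0 * np.pi * S))
    return 2.0 * np.log(p_d / den)
```

Note the qualitative behavior the formula encodes: denser clutter or a lower detection probability shrinks the largest defensible gate.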