Commercial availability of very high-resolution synthetic aperture radar (SAR) imagery will enable development of automatic target recognition (ATR) algorithms that exploit its rich information content. This availability also permits exploration of both empirical and first-principles approaches for predicting ATR performance. This paper describes a recent collection of high-resolution SAR imagery. It details the operating conditions represented by the data and provides recommended experiments designed to challenge ATR algorithms and performance prediction. This set of information, along with the imagery, is contained in a Problem Set that will be made available to the community. The imagery is from a Deputy Under Secretary of Defense (DUSD) for Science and Technology (S&T) sponsored collection using the Sandia National Laboratory and General Atomics Lynx sensor. The Lynx is now available as a commercial off-the-shelf (COTS) sensor. It was designed for use in medium-altitude UAVs and manned platforms. It operates at Ku-band in stripmap, spotlight, and ground moving target indicator modes. Imagery in this collection was acquired at 4-inch resolution and was then also reprocessed to 1-foot resolution. The collection included several military vehicles with significant variation in target, sensor, and background conditions. Defined experiments in the Problem Set present ATR algorithm development challenges by defining development (training) sets with limited representation of operating conditions and test sets that explore an algorithm's ability to extend to more complex operating conditions. These challenges are critical to military employment of ATR because the real world contains much more variability than it will ever be possible to address explicitly in an algorithm. For example, neither the storage of nor the search through an exhaustive library of templates is achievable for any realistic application.
Thus, advanced developments that allow robust performance in denied conditions will accelerate the transition of ATR to the field. Additional experiments in the Problem Set present challenges in ATR performance prediction. Here, the development imagery provides empirical data to support development of prediction approaches. Test imagery provides an opportunity to validate the prediction technique's ability to, for example, interpolate or extrapolate performance.
Fully polarimetric, high-resolution W-band target signature data have been collected on seven high-fidelity 1/16th-scale model main battle tanks. Data have been collected at several different elevation angles and target poses. Additionally, targets have been measured both on 1/16th-scale simulated ground terrain and in free space. ISAR images were formed from these data for use in several different target identification algorithms, which operate on the data in both linear and circular polarization bases. The results of the inter-comparisons of the data using different algorithms are presented. Where possible, the data have been compared with existing W-band full-scale field measurements. The data are taken using a 1.55 THz compact range designed to model W-band. The 1.55 THz transceiver uses two high-stability optically pumped far-infrared lasers, microwave/laser Schottky-diode sideband generation for frequency sweep, and a pair of Schottky-diode receivers for coherent integration.
Model-based approaches to automatic target recognition (ATR) generally infer the class and pose of objects in imagery by exploiting theoretical models of the formed images. Recently, we have performed an evaluation of several statistical models for synthetic aperture radar (SAR) and have conducted experiments with ATR algorithms derived from these models. In particular, a one-parameter complex Gaussian model, classically used to model diffuse scattering, was shown to deliver higher recognition rates than a one-parameter quarter-power normal model on actual SAR data. However, an extended, two-parameter quarter-power model was consistently a better fit to the data than a corresponding two-parameter Gaussian model. In this paper, we apply Rician, gamma, and K distribution models, which are two-parameter extensions of the complex Gaussian and quarter-power normal models, to ATR from SAR magnitude imagery. We consider maximum-likelihood estimation of unknown model parameters and apply the resulting training and testing algorithms to actual SAR data. We show that the K distribution model performs better than the Rician and gamma models for both large and small sample sizes. The one-parameter complex Gaussian model performed slightly better than the K model overall. For small sample sizes, this is likely due to the relative stability in estimating only one model parameter. For large sample sizes this is likely due to a lack of persistence in specular reflections over the large angular intervals required to obtain large samples.
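To make the two-parameter fitting concrete, the sketch below simulates K-distributed intensity as gamma-distributed texture modulating exponential speckle and recovers the shape parameter with a simple method-of-moments estimate. It is an illustration with assumed parameters (shape 3, unit mean), not the maximum-likelihood procedure evaluated in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_k_intensity(nu, mu, n, rng):
    # K-distributed intensity: exponential speckle modulated by a
    # gamma-distributed texture with shape nu and mean mu
    texture = rng.gamma(shape=nu, scale=mu / nu, size=n)
    speckle = rng.exponential(scale=1.0, size=n)
    return texture * speckle

def estimate_nu(intensity):
    # Method of moments: E[I^2] / E[I]^2 = 2 * (1 + 1/nu)
    m1 = intensity.mean()
    m2 = (intensity ** 2).mean()
    return 1.0 / (m2 / (2.0 * m1 ** 2) - 1.0)

I = simulate_k_intensity(nu=3.0, mu=1.0, n=200_000, rng=rng)
nu_hat = estimate_nu(I)   # close to 3 for a sample this large
```

For small samples the moment ratio becomes unstable, which mirrors the paper's observation that one-parameter models can win when parameter estimates are noisy.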
This paper details a model-building technique to construct geometric target models from radar data collected in a controlled environment. An algorithm to construct three-dimensional target models from a complex radar return, expressed as discrete sets of scattering center coordinates with associated amplitudes, is explained in detail. The model is a three-dimensional extension of proven radar scattering models that treat the radar return as a sum of complex exponentials. A Fourier transform converts this to impulses in the frequency domain, where the relative phase difference between scattering centers is a wrapped phase term. If the viewing sphere is sampled densely enough, the phase can be unambiguously unwrapped. The minimum sampling interval is explicitly determined as a function of the extent of the target in wavelengths. A least-squares solution determines the coordinates of each scattering center. Properties of the collection geometry allow the minimum sampling density of the viewing sphere to be relaxed, but at the cost of testing competing hypotheses to determine which one best fits the phase data. The complex radar return of a random object, created by sampling a 1-degree slice of the viewing sphere, is used to validate the model-building algorithm. All coordinates of the random object are extracted perfectly. It is hoped that this algorithm can build three-dimensional scattering center models valid over the entire viewing sphere, with each target represented as a discrete set of scattering centers. A rectangular window function associated with each scattering center would model persistence across the viewing sphere.
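A toy version of the unwrapping step can be sketched for two point scatterers. The wavelength, cross-range offset, and the factor-of-two oversampling are all assumed for illustration; the sampling criterion (aspect step below the wavelength divided by four times the target extent, so the relative phase never jumps by more than pi between samples) follows the reasoning summarized above.

```python
import numpy as np

wavelength = 0.03        # 3 cm, assumed for illustration
extent = 1.7             # cross-range offset between two scatterers (m), assumed

# Minimum sampling criterion as a function of target extent in wavelengths
dtheta_max = wavelength / (4.0 * extent)
theta = np.arange(-0.05, 0.05, 0.5 * dtheta_max)   # sample at twice that density

phi_true = (4.0 * np.pi / wavelength) * extent * np.sin(theta)
phi_wrapped = np.angle(np.exp(1j * phi_true))      # what the radar measures
phi_unwrapped = np.unwrap(phi_wrapped)             # unambiguous at this density

# Least-squares fit of phi = (4*pi/lambda) * d * sin(theta) recovers the offset
slope, _ = np.polyfit(np.sin(theta), phi_unwrapped, 1)
d_est = slope * wavelength / (4.0 * np.pi)
```

Sampling more coarsely than `dtheta_max` makes `np.unwrap` pick the wrong branch, which is exactly where competing-hypothesis testing would be needed.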
In this paper we address the problem of modeling the electromagnetic scattering from targets at high frequencies as scattering centers. Scattering-center models are low-dimensional parametric models of specific interest for automatic target recognition (ATR). The main difficulty in scattering-center parameter estimation is the aspect dependence of the range and amplitude of a scattering center. In this paper we use scattering centers whose amplitude-aspect dependence is modeled as a polynomial of some degree, and whose range-aspect dependence is modeled as a second-order polynomial. The Cramér-Rao bound is derived for this problem, and estimation errors for some simple cases are illustrated. An iterative estimator for the parameters of this scattering-center model is also presented, and some simple examples illustrate its performance.
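Once the model order is fixed, fitting the aspect polynomials for a single scattering center reduces to least squares, as in this sketch. The ground-truth coefficients and noise level are assumed for illustration; the paper's iterative estimator addresses the full nonlinear signal model.

```python
import numpy as np

rng = np.random.default_rng(1)
theta = np.linspace(-0.1, 0.1, 201)        # aspect angle (rad)

# Assumed ground truth for one scattering center, following the model in the
# text: polynomial amplitude-aspect and second-order range-aspect dependence
amp_coef = np.array([4.0, -1.5, 2.0])      # amplitude: 4 t^2 - 1.5 t + 2
range_coef = np.array([30.0, 0.5, 10.0])   # range walk: 30 t^2 + 0.5 t + 10 (m)

amp_obs = np.polyval(amp_coef, theta) + 0.001 * rng.standard_normal(theta.size)
range_obs = np.polyval(range_coef, theta) + 0.001 * rng.standard_normal(theta.size)

# With the model order fixed, each aspect dependence is a linear LS fit
amp_fit = np.polyfit(theta, amp_obs, 2)
range_fit = np.polyfit(theta, range_obs, 2)
```

The growth of the fitted-coefficient variance as the aspect window shrinks is the kind of behavior the Cramér-Rao analysis in the paper quantifies.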
This paper presents a new paradigm for feature extraction and segmentation of SAR imagery. Most existing segmentation algorithms explore features based on variations in image intensity, contrast, and texture, mimicking human SAR scene analysts. Like medical ultrasound imaging, CT imaging, and magnetic resonance imaging, the imaging modality of SAR is not consistent with the natural ability of human vision; that is why trained experts are needed to analyze medical images as well as SAR images. In ATR applications, SAR imagery will be processed and segmented by automatic computer algorithms without human analysts in the loop. Therefore, in order to fully utilize the capability of SAR as an advanced surveillance instrument, we need to develop a feature space that is based on the physics of the SAR imaging modality, not on human visual perception. After defining the feature space, we can process the SAR sensor data in the image domain or even before image formation. In this research, we focus on establishing a new SAR image segmentation paradigm based on discrete frame theory. We show the framework of the paradigm on a limited feature space covering SAR attributes such as targets and shadows. After setting up the feature space, we develop a discrete frame to transform SAR sensor data into a feature space representation. The feature space representation consists of transform coefficients that indicate the location and strength of the features. Those transform coefficients can be further manipulated by classification algorithms for ATR exploitation.
Many existing automatic target recognition (ATR) schemes do not fully exploit the information contained in the local spatial structure of targets. However, discussions with image analysts make it clear that the shape of a target alone provides a great deal of information about its nature. A human analyst will identify the component shapes that make up an observed target, such as a long, thin barrel, corner structures, and straight parallel and perpendicular edges. Spatial relationships between these morphological components can be used to recognize targets. This approach is robust to different target configurations and articulations because the analyst has built up a mental model of how the morphological components of a target interact. In contrast, established model-based ATR approaches can characterize the target for only one particular choice of configuration and articulation, and the model must be exercised repeatedly to investigate the parameter space, which can be time-consuming. For this reason, the feasibility of identifying morphological features and using models of their spatial relationships to perform ATR has been investigated. Methods for extracting spatial information from SAR images of targets despite the associated speckle noise have been investigated, and a means for incorporating this spatial information into a classification scheme has been developed. It has been shown that significant ATR performance can be achieved on SAR images of real targets based only on localized spatial structure.
The monostatic VV and HH-polarized radar signatures of several targets and trees have been measured at foliage penetration frequencies (VHF/UHF) by using 1/35th scale models and an indoor radar range operating at X-band. An array of high-fidelity scale model ground vehicles and test objects as well as scaled ground terrain and trees have been fabricated for the study. Radar measurement accuracy has been confirmed by comparing the signature of a test object with a method of moments radar cross section prediction code. In addition to acquiring signatures of targets located on a smooth, dielectric ground plane, data have also been acquired with targets located in simulated wooded terrain that included scaled tree trunks and tree branches. In order to assure the correct backscattering behavior, all dielectric properties of live tree wood and moist soil were scaled properly to match the complex dielectric constant of the full-scale materials. The impact of the surrounding tree clutter on the VHF/UHF radar signatures of ground vehicles was assessed. Data were processed into high-resolution, polar-formatted ISAR imagery and signature comparisons are made between targets in open-field and forested scenarios.
The U.S. Army Research Laboratory has investigated the relative performance of three different target detection paradigms applied to foliage penetration (FOPEN) synthetic aperture radar (SAR) data. The three detectors - a quadratic polynomial discriminator (QPD), Bayesian neural network (BNN) and a support vector machine (SVM) - utilize a common collection of statistics (feature values) calculated from the fully polarimetric FOPEN data. We describe the parametric variations required as part of the algorithm optimizations, and we present the relative performance of the detectors in terms of probability of false alarm (Pfa) and probability of detection (Pd).
We explore the characteristics of chaos for wideband radar imaging. Chaos can be generated via nonlinear functions that produce statistically independent samples with invariant probability density functions. By feeding this type of chaos to the input of a voltage-controlled oscillator, a stochastic frequency-modulated (FM) signal with fractal features is generated. The FM signal is an ergodic and stationary process with initial random phase, and its power spectral density is typically broadband. We show that the time autocorrelation associated with the FM signal provides high range resolution at zero Doppler and dies out rapidly for increasing Doppler shifts. Furthermore, we show that a set of realizations of the signal can be processed into a set of ambiguity surfaces that, when averaged, yield a low self-noise pedestal.
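A minimal sketch of the idea, assuming a logistic-map chaos source and an idealized VCO (the center frequency and deviation are arbitrary choices): the resulting waveform has a sharp zero-lag autocorrelation peak and a low pedestal elsewhere.

```python
import numpy as np

def logistic(n, x0=0.3):
    # Logistic map at r = 4: a chaotic source with delta-correlated samples
    # and an invariant (arcsine) density on (0, 1)
    x = np.empty(n)
    x[0] = x0
    for k in range(1, n):
        x[k] = 4.0 * x[k - 1] * (1.0 - x[k - 1])
    return x

n = 4096
u = logistic(n)

# Idealized VCO: instantaneous frequency follows the chaotic drive,
# phase is its running integral (f0 and df are assumed values)
f0, df = 0.05, 0.2
phase = 2.0 * np.pi * np.cumsum(f0 + df * (u - 0.5))
s = np.exp(1j * phase)

# Normalized autocorrelation: unity at zero lag, rapid decay elsewhere
ac = np.abs(np.correlate(s, s, mode='full')) / n
mid = ac.size // 2
peak = ac[mid]
pedestal = ac[mid + 50:].max()
```

Averaging such surfaces over independent realizations would lower the pedestal further, as the abstract describes for the full ambiguity function.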
A number of aspects of ultra-wideband radar target detection analysis and algorithm development are addressed. The first portion of the paper describes a bi-modal technique for modeling ultra-wideband radar clutter. This technique was developed based on an analysis of ultra-wideband radar phenomenology. Synthetic image samples that were generated by this modeling process are presented. This sample set is characterized by a number of physical parameters. The second portion of this paper describes an approach to developing a class of filters, known as rank-order filters, for ultra-wideband radar target detection applications. The development of a new rank-order filter denoted as a discontinuity filter is presented. Comparative target detection results are presented as a function of data model parameters. The comparative results include discontinuity filter performance versus the performance of median filtering and CFAR filtering.
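The discontinuity filter itself is not reproduced here, but a plain median-based rank-order detector illustrates the filter family, assuming Rayleigh clutter and an injected point target (the paper's clutter model is bi-modal and its filter is more elaborate):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1000
x = rng.rayleigh(scale=1.0, size=n)   # stand-in clutter (assumed distribution)
x[500] = 15.0                         # injected point target

def median_detector(x, win=31, k=5.0):
    # Rank-order (median) background estimate with the cell under test excluded;
    # declare a detection when the cell exceeds k times the local median
    pad = win // 2
    det = np.zeros(x.size, dtype=bool)
    for i in range(pad, x.size - pad):
        ref = np.concatenate([x[i - pad:i], x[i + 1:i + pad + 1]])
        det[i] = x[i] > k * np.median(ref)
    return det

hits = np.flatnonzero(median_detector(x))
```

The median background estimate is what makes rank-order filters robust to the target's own energy leaking into the reference window, the property the comparative study above exploits.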
The ability to resolve synthetic aperture radar (SAR) images to finer resolutions than the system bandwidths classically allow is a tantalizing prospect. Superresolution seemingly offers something for nothing, or at least something better than the system was designed for, if only we process the data enough, or in the right way. Over the years this has proved to be a rather popular area of investigation, generating a wide variety of algorithms and corresponding claims of performance. Nevertheless, the literature on the fundamental underlying principles of superresolution as applied to SAR has been rather anemic. This paper addresses the following questions: What exactly is superresolution, and what is not really superresolution but more aptly described as image enhancement? Is true superresolution possible, and to what degree? What constrains superresolution? And, very importantly, how should we objectively test whether an image is in fact superresolved? Whereas superresolution concepts offer the potential of resolution beyond the classical limit, this great promise has not generally been realized; that is not to say that many reported algorithms have no useful effect on images. True superresolution is defined herein as the recovery of the true scene spectrum, which allows more accurate scene rendering. The analytical basis for superresolution theory is outlined, and the application to SAR is then investigated as an operator inversion problem, which is generally ill posed. Noise inherent in radar data tends to severely inhibit significant enhancement of image resolution. A criterion for judging superresolution processing of an image is presented.
We present an evaluation of the impact of a recently proposed feature-enhanced synthetic aperture radar (SAR) imaging technique on automatic target recognition (ATR) performance. We run recognition experiments using conventional and feature-enhanced SAR images of military targets, in three different classifiers. The first classifier is template-based. The second classifier makes a decision through a likelihood test, based on Gaussian models for reflectivities. The third classifier is based on extracted locations of the dominant target scatterers. The experimental results demonstrate that feature-enhanced SAR imaging can improve the recognition performance, especially in scenarios involving reduced data quality or quantity.
This study investigates the information content of polarimetric SAR imagery for use within automatic target detection and classification algorithms. Key questions such as the stability of polarimetric information and its relationship with image resolution are addressed. Using relatively simple polarimetric features, such as the percentage of pure odd and even bounce scattering events, we show how it is possible to identify the differences between two classes of military vehicle. The use of the radial power spectral density is proposed as a measure of the spatial distribution of these odd and even bounce scattering events, again enabling the two classes to be distinguished.
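The odd/even-bounce split can be illustrated with the Pauli combinations of the HH and VV channels on a toy chip. The data and the dihedral-like half of the scene are fabricated for the example; real polarimetric chips and the radial power spectral density measure are what the study itself uses.

```python
import numpy as np

rng = np.random.default_rng(3)
# Toy complex HH and VV channels; the right half of the chip is given a
# dihedral-like (HH = -VV) response, the left half a surface-like (HH = VV) one
hh = rng.standard_normal((64, 64)) + 1j * rng.standard_normal((64, 64))
vv = hh.copy()
vv[:, 32:] *= -1

odd = np.abs(hh + vv) ** 2    # Pauli odd-bounce (surface/trihedral) power
even = np.abs(hh - vv) ** 2   # Pauli even-bounce (dihedral) power

pct_even = 100.0 * np.mean(even > odd)   # percentage of even-bounce pixels
```

Per-class percentages of this kind, and the spatial distribution of the even-bounce pixels, are the features the study uses to separate the two vehicle classes.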
It is known that high-resolution synthetic aperture radar (SAR) imaging can be cast as a spectral analysis problem, and consequently a number of sophisticated spectral estimation methods have been applied to SAR imaging. These methods include the classical Capon method and the closely related Amplitude and Phase Estimation (APES) algorithm. In this paper, we show how Capon and APES can be extended to deal with spectral analysis of periodically gapped (PG) data, i.e., data where samples are missing in a periodic fashion. This problem is highly relevant for SAR imaging with angular diversity since in that case the measured phase-history data matrix contains missing columns. Our extension of Capon and APES is based on a transform that maps a one-dimensional (1D) periodically gapped time-series into a uniformly sampled two-dimensional (2D) data set. We show that the stationarity properties of the 1D signal are left unchanged by the transformation, and as a result the conventional 2D Capon and APES methods can be applied to the transformed data. An associated inverse transform is used to extract the 1D spectral estimate from the 2D one. The new method is computationally and conceptually non-intricate and it does not involve any interpolation of the missing data. Despite its striking simplicity, numerical results indicate that the new method can be a promising tool for SAR imaging with angular diversity as well as for time-series analysis. In SAR applications, the new method may be particularly suitable for accurate imaging of a small region of interest.
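The core transform can be sketched directly: a periodically gapped 1D tone reshapes into a uniformly sampled 2D tone whose two frequencies encode the original one. Sizes and the tone frequency are assumed; the paper applies 2D Capon/APES where this sketch uses a plain zero-padded 2D FFT.

```python
import numpy as np

P, B, K = 16, 10, 32           # period, samples kept per period, periods (assumed)
omega = 2.0 * np.pi * 0.21     # frequency of a unit-amplitude complex tone

n = np.arange(P * K)
x_full = np.exp(1j * omega * n)
keep = (n % P) < B             # the last P - B samples of each period are missing

# Map the gapped 1D series to a uniformly sampled 2D array: row = period
# index k, column = offset b. A tone exp(j w (k P + b)) becomes a 2D tone
# with frequencies (w P mod 2pi, w), so 2D spectral methods apply directly.
X2 = x_full[keep].reshape(K, B)

F = np.fft.fft2(X2, s=(256, 256))
r, c = np.unravel_index(np.argmax(np.abs(F)), F.shape)
w_row = 2.0 * np.pi * r / 256   # estimates (omega * P) mod 2pi
w_col = 2.0 * np.pi * c / 256   # estimates omega
```

No interpolation of the missing samples is involved, which is the property the abstract highlights.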
This paper describes the DARPA Moving Target Feature Phenomenology (MTFP) data collection conducted at the China Lake Naval Weapons Center's Junction Ranch in July 2001. The collection featured both X-band and Ku-band radars positioned on top of Junction Ranch's Parrot Peak. The test included seven targets used in eleven configurations, with vehicle motion consisting of circular, straight-line, and 90-degree turning motion. Data were collected at 10-degree and 17-degree depression angles. Key parameters in the collection were polarization, vehicle speed, and road roughness. The collection also included a canonical target positioned at Junction Ranch's tilt-deck turntable. The canonical target included rotating wheels (military truck tire and civilian pick-up truck tire) and a flat plate with variably positioned corner reflectors. The canonical target was also used to simulate a rotating antenna and a vibrating plate. The target vehicles were instrumented with ARDS pods for differential GPS and roll, pitch, and yaw measurements. Target motion was also documented using a video camera slaved to the X-band radar antenna and by a video camera operated near the target site.
High-Range Resolution (HRR) radar modes have become increasingly important in the past few years due to the ability to form focused range profiles of moving targets with enhanced target-to-clutter ratios via Doppler filtering and/or clutter cancellation. To date, much research has been performed on using HRR radar profiles of both moving and stationary ground targets for Automatic Target Recognition (ATR) and Feature-Aided Tracking (FAT) applications. However, little work evaluating the correlation between moving and stationary HRR profiles has been reported. This paper presents analytical comparisons between HRR profiles generated from a moving vehicle and profiles formed from Synthetic Aperture Radar (SAR) images of the identical stationary vehicle. The moving-target HRR profiles are formed by integrating range-Doppler target images detected from clutter-suppressed phase history data. The stationary-target HRR profiles are formed from SAR imagery target chips by segmenting the target from clutter and reversing the image formation process. The purpose of this research is to identify which features, such as profile peaks, peak intensities, and electrical length, are common to profiles of the same target type and class at the same imaging geometry.
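As a deliberately simplified stand-in for reversing the image formation chain, a range profile can be emulated from a stationary-target chip by collapsing the cross-range dimension non-coherently. The chip here is synthetic; the paper works from segmented phase history.

```python
import numpy as np

# Synthetic complex SAR chip (range x cross-range) with two point scatterers
chip = np.zeros((64, 64), dtype=complex)
chip[20, 30] = 1.0
chip[41, 12] = 0.7

# Collapse cross-range by non-coherent integration to emulate a range profile
profile = np.sqrt((np.abs(chip) ** 2).sum(axis=1))

peaks = sorted(np.argsort(profile)[-2:].tolist())   # range bins of the scatterers
```

Features such as peak locations and the span between first and last peak (electrical length) can then be read directly off `profile` and compared against moving-target profiles.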
We propose a new method for superresolution, feature-enhanced reconstruction of high range-resolution (HRR) radar profiles. We pose the problem of the formation of the HRR profiles from phase history data as an optimization problem. Resolution and feature enhancements are achieved by imposing non-quadratic regularizing constraints on the solution of the optimization problem. We present experimental results on synthetic and measured data, and compare the proposed method to currently available techniques. This analysis shows the ability of the proposed method to preserve high-resolution features such as the locations and amplitudes of the dominant scatterers in the HRR profile. This suggests that the technique may potentially help improve the performance of HRR target recognition systems.
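The flavor of the method can be sketched with an l1-regularized criterion solved by ISTA, a standard stand-in for the paper's own non-quadratic optimization; the operator, problem size, and scatterer placement are all assumed for the example.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 64
k = np.arange(n)
A = np.exp(-2j * np.pi * np.outer(k, k) / n) / np.sqrt(n)   # unitary DFT operator

x_true = np.zeros(n)
x_true[20], x_true[45] = 1.0, 0.6      # two dominant scatterers (assumed)
y = A @ x_true + 0.01 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

# ISTA for the non-quadratic criterion  min ||y - A x||^2 / 2 + lam ||x||_1
lam, step = 0.05, 1.0                  # step <= 1 / ||A||^2 = 1 for unitary A
x = np.zeros(n, dtype=complex)
for _ in range(100):
    g = x + step * (A.conj().T @ (y - A @ x))
    x = np.maximum(np.abs(g) - step * lam, 0.0) * np.exp(1j * np.angle(g))

found = np.flatnonzero(np.abs(x) > 0.2)   # dominant-scatterer locations survive
```

The non-quadratic penalty suppresses the noise floor while keeping the locations and (slightly shrunk) amplitudes of the dominant scatterers, which is the feature-preservation behavior the paper reports.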
Standard ISAR image focusing techniques utilize the magnitude and position of range-domain target signatures to achieve translational motion compensation. Alternatively, range can be determined by analyzing the slope of the unwrapped phase function associated with the frequency-domain signature of a moving target. The phase-slope-based method uses the phase of the target's echo transfer function to calculate a focal quality indicator while avoiding two-dimensional Fourier processing. Simple phase averaging of targets having low signal-to-noise ratio does not lead to a useful estimate of the signal phase; a maximum-likelihood phase gradient estimation method is therefore utilized to determine the phase slope of the radar signature. The presence of an absolute minimum without local minima guarantees that the estimated motion parameters are an accurate representation of the target's motion and allows the use of a simple search procedure. Results show that parameter estimates for motion compensation obtained in this manner converge to a unique solution, thus providing focused ISAR imagery of the target.
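The phase-gradient idea can be sketched for a single noisy tone: rather than averaging the wrapped per-sample phases, average the one-lag conjugate products and take a single angle. The parameters are assumed; the paper's maximum-likelihood machinery is more general.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 2048
slope_true = 0.0123                 # normalized phase slope (assumed range walk)
s = np.exp(2j * np.pi * slope_true * np.arange(n))
s = s + 0.3 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

# Naive averaging of np.angle(s) fails at low SNR because of phase wrapping.
# Averaging the one-lag conjugate products first, then taking one angle,
# yields a robust phase-gradient (slope) estimate.
slope_est = np.angle(np.sum(s[1:] * np.conj(s[:-1]))) / (2.0 * np.pi)
```

Because the noisy products are summed coherently before the angle is taken, no explicit unwrapping is needed, which is what makes phase-slope estimation usable at low SNR.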
Airborne radars tasked with tracking moving targets face the challenge of surveillance over large geographic areas where military vehicles will be interspersed with civilian traffic. There is a major need to develop robust, efficient, and reliable identification and tracking techniques to identify selected targets, and to maintain tracks for selected, critical targets, in dense target environments. Traditionally, tracking and identification have been considered separately - one may identify a target, and then track it kinematically to sustain the identification. The difficulty with separate identification and tracking is that neither is sufficient by itself to satisfy the demands of the other. Identification techniques applied to moving targets require some degree of evidence accrual, which requires the kinematic tracker to have a high degree of fidelity. Conversely, kinematic association could be aided considerably with high confidence single-look identification, but high confidence identification only builds up with several looks. What is needed is to incorporate the distinctive target signature information into the tracker, so that identification and tracking, or signature comparison and tracking, perform together as a unit. Kinematic moving target indication (MTI) trackers receive reports in the form of ground coordinates and Doppler (range rate) and attempt to maintain track of the moving targets by associating the reports to target tracks. In situations where different targets exhibit similar kinematics, the association logic used for track-report association may not yield the correct pairings. In such complex and challenging environments, the additional information arising from distinctive target signatures can be used to aid the tracking algorithms. 
The approach to moving target identification and feature-aided tracking (FAT) described here combines accumulated target classification information, obtained from high range resolution (HRR) radar profiles, with kinematic association scores to yield improved classification and improved association. When the target under track is one of a set for which classification templates are available, the association between reports and tracks can be aided by the similarity in identification between the report and the track. When such information is not available, and classification therefore cannot be performed, the association is instead aided by the similarity in radar signature between the report and the track. In this paper, we describe the basics of moving target classification and signature comparison, and show how this information may be incorporated into kinematic tracking.
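A minimal sketch of how such a combined association cost might be formed (our toy illustration, not the paper's algorithm; the function names and the 1 minus normalized-correlation signature measure are assumptions):

```python
import numpy as np

def kinematic_cost(report_pos, track_pos, track_cov):
    # Negative log-likelihood of the report under the track's predicted
    # Gaussian state: Mahalanobis distance plus a normalization term.
    diff = report_pos - track_pos
    d2 = diff @ np.linalg.inv(track_cov) @ diff
    return 0.5 * (d2 + np.log(np.linalg.det(2 * np.pi * track_cov)))

def signature_cost(report_sig, track_sig):
    # Dissimilarity of HRR range profiles: 1 minus normalized correlation.
    r = report_sig / np.linalg.norm(report_sig)
    t = track_sig / np.linalg.norm(track_sig)
    return 1.0 - float(r @ t)

def association_cost(report_pos, report_sig, track_pos, track_cov, track_sig, w=1.0):
    # Combined report-to-track cost; lower means a better pairing.  The
    # weight w trades kinematic evidence against signature evidence.
    return (kinematic_cost(report_pos, track_pos, track_cov)
            + w * signature_cost(report_sig, track_sig))
```

An assignment algorithm (e.g., Munkres) would then minimize the total cost over all report-track pairings.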
Three-dimensional image reconstruction of moving targets from one-dimensional radar information has traditionally been challenging. Range and Doppler (range rate) measurements of prominent radar scattering centers are rich with information for image reconstruction. Target kinematics and structural rigidity produce invariant parameters that support mathematical reconstruction solutions. Relating two frames of reference, one based on the radar source and the other on target motion, generates innovative range and Doppler equations. Solutions of these equations provide the basis for determining the three-dimensional locations of scattering centers and for target image reconstruction.
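As a hedged illustration of the geometry the abstract alludes to (the notation below is ours, not the paper's): for a scatterer fixed at body coordinates $\mathbf{p}_n$ on a rigid target with translational position $\mathbf{R}(t)$ and rotation matrix $\mathbf{A}(t)$, the observed range and range rate (Doppler) are

```latex
% Illustrative notation only; the paper's own formulation may differ.
r_n(t) = \bigl\lVert \mathbf{R}(t) + \mathbf{A}(t)\,\mathbf{p}_n \bigr\rVert,
\qquad
\dot{r}_n(t) = \frac{\bigl(\mathbf{R}(t) + \mathbf{A}(t)\,\mathbf{p}_n\bigr)
  \cdot \bigl(\dot{\mathbf{R}}(t) + \dot{\mathbf{A}}(t)\,\mathbf{p}_n\bigr)}{r_n(t)}
```

Rigidity keeps $\lVert \mathbf{p}_n \rVert$ and the inter-scatterer distances time-invariant, which is the kind of invariant parameter that constrains the solution for the three-dimensional scattering-center locations.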
Tracking and identification algorithms have been developed to track moving targets using high range resolution (HRR) radar. Likewise, algorithms exist to link moving target indicator (MTI) hits with synthetic aperture radar (SAR) images to follow targets in a move-stop-move scenario. Each of these algorithms has limitations in its ability to maintain a consistent track of an object in a move-stop-maneuver scenario: (1) there is only spatial information with which to link MTI hits to SAR data, and (2) an HRR track-and-ID algorithm cannot capture stationary targets. Fusing these modes can provide track consistency. When multiple targets exist, as in a group tracking scenario, spatial information alone is insufficient to link moving and stationary returns. Incorporating moving-target HRR classification and stationary-target identification information would enhance the MTI-SAR linking algorithm for multiple targets. While HRR is better suited than MTI for linking object track information to spatially stationary SAR information, the difficulty with relying solely on HRR information is that a target executing a turning maneuver may be temporarily stopped as a static rotator. This paper discusses an information-theoretic approach for identification of targets in 1-D HRR and 2-D SAR modes and its incorporation into a feature-aided tracking and ID algorithm for a target that moves through stationary, moving, and maneuvering dynamics. Results are presented for a group of highly maneuvering targets that travel in one direction, turn, and travel in another direction, a scenario in which typical kinematic tracking algorithms based on HRR information break down.
The Extended Maximum Average Correlation Height (EMACH) filter and the Polynomial Distance Classifier Correlation Filter (PDCCF) are applied to the Moving and Stationary Target Acquisition and Recognition (MSTAR) database for detection and classification. Filter performance is evaluated for a ten-class problem. The generalization capabilities are examined by conducting tests for targets differing by serial numbers, in-plane rotation, and depression angle. For comparison, results were also obtained using the Maximum Average Correlation Height (MACH) filter, Distance Classifier Correlation Filter (DCCF), and Optimal Trade-off Synthetic Discriminant Function (OTSDF) filter.
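As a rough, hedged sketch of the frequency-domain construction behind MACH-family filters (simplified; the EMACH and PDCCF variants add further terms and transforms not shown here):

```python
import numpy as np

def mach_filter(train_images):
    # Frequency-domain MACH-style filter: the mean training spectrum divided
    # by the average power spectral density of the training set (regularized).
    X = np.fft.fft2(train_images, axes=(-2, -1))
    m = X.mean(axis=0)
    d = (np.abs(X) ** 2).mean(axis=0) + 1e-9
    return m / d

def peak_correlation(h, image):
    # Correlate in the frequency domain and return the peak height; the
    # class filter producing the tallest peak wins.
    G = np.conj(h) * np.fft.fft2(image)
    return float(np.abs(np.fft.ifft2(G)).max())
```

Loosely speaking, the PDCCF extends this scheme by filtering point-nonlinear transforms of the input and classifying on a distance measure rather than on peak height alone.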
We develop and evaluate robust model-based approaches to SAR ground vehicle combat ID by fusing multiple looks of the same vehicle collected at different angles. We compare single-look performance with our baseline decision-level multi-look fusion approach and with a more refined hypothesis-level method. Evaluation of the multi-look approaches indicates significant target identification performance benefits. In this presentation, we discuss both hypothesis-level fusion, where we accumulate evidence not only over target type but also over target pose, thereby ensuring consistent interpretation across all the images; and feature-level fusion, where we accumulate evidence over parts of the model, thereby correctly accounting for model region visibility across the multiple views. Finally, we present the performance tradeoffs of the different multi-look approaches evaluated so far and discuss their benefits and limitations. The performance assessment is based on extensive analysis using multi-look SAR imagery covering a broad range of vehicle types and operating conditions.
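A toy sketch of hypothesis-level fusion in the spirit described above, accumulating evidence jointly over target type and pose (our illustration; the array shapes and pose-bin alignment are assumptions, not the paper's implementation):

```python
import numpy as np

def fuse_looks(loglik_per_look, pose_offsets):
    # Each look supplies an (n_types, n_poses) array of log-likelihoods.
    # pose_offsets give each look's known relative aspect (in pose bins),
    # so evidence accumulates against one consistent target pose rather
    # than being fused over type alone.
    n_types, n_poses = loglik_per_look[0].shape
    total = np.zeros((n_types, n_poses))
    for ll, off in zip(loglik_per_look, pose_offsets):
        total += np.roll(ll, -off, axis=1)  # align this look to the common pose frame
    t, p = np.unravel_index(np.argmax(total), total.shape)
    return int(t), int(p), total
```

The joint (type, pose) maximum enforces a single interpretation across all looks, which is what distinguishes hypothesis-level fusion from simple decision-level voting.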
The focus of this paper is optimizing the recognition of vehicles in Synthetic Aperture Radar (SAR) imagery using multiple SAR recognizers at different look angles. Because SAR scattering center locations vary with target azimuth, recognition results at different azimuths are effectively independent, even for small azimuth deltas. Extensive experimental recognition results are presented in terms of receiver operating characteristic (ROC) curves to show the effects of multiple look angles on recognition performance for MSTAR vehicle targets with configuration variants, articulation, and occlusion.
The term Content-Based appears often in applications for which MPEG-7 is expected to play a significant role. MPEG-7 standardizes descriptors of multimedia content; while compression is not its primary focus, these descriptors can be used to reconstruct a rough representation of an original multimedia source. In contrast, current image and video compression standards such as JPEG and MPEG are not designed to encode at the very low bit-rates that can be achieved with MPEG-7 descriptors. In this paper we show that content-based mechanisms can be introduced into compression algorithms to improve the scalability and functionality of current compression methods such as JPEG and MPEG. This is the fundamental idea behind Content-Based Compression (CBC). Our definition of CBC is a compression method that effectively encodes a sufficient description of the content of an image or video to ensure that the recipient can reconstruct it to some degree of accuracy. The degree of accuracy can be, for example, the classification error rate of the encoded objects, since in MPEG-7 the classification error rate measures the performance of the content descriptors. We argue that the major difference between a content-based compression algorithm and conventional block-based or object-based compression algorithms is that content-based compression replaces the quantizer with a more sophisticated classifier, or with a quantizer that minimizes classification error. Compared to conventional image and video compression methods such as JPEG and MPEG, our results show that content-based compression achieves more efficient image and video coding by suppressing the background while leaving the objects of interest nearly intact.
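A deliberately tiny sketch of the core idea of replacing a uniform quantizer with a content-driven one (our illustration only; in practice the mask would come from an MPEG-7-style classifier and quantization would act on transform coefficients, not raw pixels):

```python
import numpy as np

def cbc_encode(image, object_mask, fg_step=4, bg_step=64):
    # Content-driven quantization: pixels the classifier flags as "object"
    # (object_mask True) keep a fine quantization step; background pixels
    # get a coarse step, suppressing them in the reconstruction.
    step = np.where(object_mask, fg_step, bg_step)
    return np.round(image / step).astype(np.int64), step

def cbc_decode(codes, step):
    # Dequantize: the object region is recovered nearly intact, the
    # background only coarsely.
    return codes * step
```

The coarse background codes compress far better under any entropy coder, which is where the bit-rate savings come from.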
Forming images of aircraft using passive radar systems that exploit illuminators of opportunity, such as commercial television and FM radio transmitters, involves reconstructing an image from sparse samples of its Fourier transform. For a given flight path, a single receiver-transmitter pair produces one arc of data in Fourier space. Since the resulting Fourier sampling patterns bear a superficial resemblance to those found in radio astronomy, we consider using deconvolution techniques borrowed from radio astronomy, namely the CLEAN algorithm, to form images from passive radar data. Deconvolution techniques such as CLEAN work best on images that are well modeled as a set of distinct point scatterers, and hence are well suited to high-frequency imaging of man-made targets, where the current on the scatterer surface tends to collect at particular points. At the lower frequencies of interest in passive radar, however, the scattering is more distributed. In addition, the complex-valued nature of radar imaging presents a complication not present in radio astronomy, where the underlying images are real-valued. These effects conspire to present a great challenge to the CLEAN algorithm, indicating the need to explore more sophisticated techniques.
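For reference, the basic CLEAN iteration is easy to state (a minimal sketch; real implementations add stopping criteria, loop-gain tuning, and restoration with an idealized clean beam):

```python
import numpy as np

def clean(dirty_image, dirty_beam, gamma=0.1, n_iter=200):
    # Basic CLEAN: find the brightest residual pixel, subtract a scaled,
    # shifted copy of the dirty beam there, and record a point component.
    # dirty_beam is assumed to have its peak at index (0, 0) and the same
    # shape as the image; complex-valued data is handled via magnitudes.
    residual = dirty_image.astype(complex).copy()
    components = np.zeros_like(residual)
    for _ in range(n_iter):
        idx = np.unravel_index(np.argmax(np.abs(residual)), residual.shape)
        amp = gamma * residual[idx]  # loop gain gamma damps the subtraction
        components[idx] += amp
        residual -= amp * np.roll(dirty_beam, idx, axis=(0, 1))
    return components, residual
```

The point-source model baked into the subtraction step is precisely why, as noted above, CLEAN struggles when the true scattering is distributed rather than point-like.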
Eastman Kodak Company conducts image quality monitoring of U.S. Government-operated Synthetic Aperture Radar (SAR) sensors. Our quality assurance methodology uses automated metrics in parallel with human analyst scoring of image quality factors to track quality trends in an image chain. A key feature of the program is that analysis is performed periodically on images selected from actual mission data. Historically, tasking the sensors to fly over calibrated test sites on such a regular basis has failed because of contention for collection resources from higher-priority jobs. In addition, detected, 8-bit NITF data is often the only image product that is distributed. The scarcity of high radar cross-section (RCS) individual point scatterers, as well as the lack of complex data, challenges the ability to estimate a key image quality parameter, the impulse response function (IPR). This paper discusses a method to isolate and aggregate signatures of multiple low signal-to-noise ratio IPRs in detected mission imagery. Measurements of -3 dB and -15 dB IPR widths in range and azimuth have been obtained, along with estimates of far sidelobe levels.
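A simplified sketch of measuring an IPR width from a 1-D cut through a detected impulse response (our illustration, with linear interpolation between samples; the paper's isolation and aggregation of multiple low-SNR IPRs is not shown):

```python
import numpy as np

def ipr_width(profile, level_db=-3.0, spacing=1.0):
    # Width of the main lobe at `level_db` below the peak, measured on a
    # 1-D cut through a detected impulse response; the crossing points on
    # each side of the peak are found by linear interpolation.
    mag = np.maximum(np.abs(profile), 1e-12)  # avoid log of zero at nulls
    p_db = 20.0 * np.log10(mag / mag.max())
    k = int(np.argmax(p_db))
    i = k
    while i > 0 and p_db[i] > level_db:      # walk left off the main lobe
        i -= 1
    left = i + (level_db - p_db[i]) / (p_db[i + 1] - p_db[i])
    j = k
    while j < len(p_db) - 1 and p_db[j] > level_db:  # walk right
        j += 1
    right = j - (level_db - p_db[j]) / (p_db[j - 1] - p_db[j])
    return (right - left) * spacing
```

For an ideal (unweighted) sinc response this recovers the textbook -3 dB mainlobe width of about 0.886 times the Rayleigh resolution.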
Existing compression algorithms, primarily designed for visible electro-optical (EO) imagery, do not work well for Synthetic Aperture Radar (SAR) data. The best compression ratios achieved to date are less than 10:1 with minimal degradation to the phase data. Previously, phase data was discarded and only magnitude data was saved for analysis. Now that the importance of phase has been recognized for Interferometric Synthetic Aperture Radar (IFSAR), Coherent Change Detection (CCD), and polarimetry, requirements exist to preserve, transmit, and archive both components. Bandwidth and storage limitations on existing and future platforms make compression of this data a top priority. This paper presents results obtained using a new compression algorithm designed specifically to compress SAR imagery while preserving both magnitude and phase information at compression ratios of 20:1 and better.
The AFRL COMPASE Center has developed and applied a disciplined methodology for the evaluation of recognition systems. This paper explores an element of that methodology related to the confusion matrix as a tabulation of experiment outcomes and its corresponding summary performance measures. To this end, the paper introduces terminology and the confusion matrix structure for experiment results. It provides several examples, drawn from current Air Force programs, of summary performance measures and their relationship to the confusion matrix. Finally, it considers the advantages and disadvantages of these summary performance measures and points to effective strategies for selecting such measures.
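The tabulation and two common summary measures can be sketched as follows (illustrative only; the specific measures discussed in the paper may differ):

```python
import numpy as np

def confusion_matrix(truth, declared, n_classes):
    # Rows index ground truth; columns index the recognizer's declaration.
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, d in zip(truth, declared):
        cm[t, d] += 1
    return cm

def summary_measures(cm):
    # Two common summaries derived from the matrix.
    pcc = np.trace(cm) / cm.sum()            # probability of correct classification
    recall = cm.diagonal() / cm.sum(axis=1)  # per-true-class correct fraction
    return pcc, recall
```

Any such scalar summary discards information in the full matrix, which is the central trade-off the paper examines.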
Procrustes Analysis (least-squares mapping) is typically used as a method of comparing the shapes of two objects. This method relies on matching corresponding points (landmarks) from the data associated with each object. Typically, landmarks are physically meaningful locations (e.g., the end of a nose) whose relationship to the whole object is known. Corresponding landmarks would be the same physical location on two different individuals, so Procrustes analysis is a reasonable method of measuring relative shape. In automatic target recognition, however, the correspondence of landmarks is unknown; the description of an object's shape then depends on the labeling of landmarks, an undesirable characteristic. In an attempt to circumvent the labeling problem (without exhaustively computing the factorial number of correspondences), this paper presents a label-invariant method of shape analysis, using measurements related to those used in Procrustes Analysis. The label-invariant approach yields near-optimal results. A relation exists between Procrustes Analysis and the label-invariant measurements; however, the relationship is not one-to-one. The goal is to further understand the implications of the nearly optimal results, and to refine these intermediate results into a measure of shape that is efficient and one-to-one with the Procrustes metric.
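For concreteness, ordinary (label-dependent) Procrustes analysis can be sketched as follows; note that permuting the landmark labels changes the result, which is exactly the sensitivity the paper seeks to remove:

```python
import numpy as np

def procrustes_distance(A, B):
    # Least-squares Procrustes fit between two landmark sets with KNOWN
    # row-by-row correspondences: remove translation and scale, then find
    # the orthogonal transform (rotation/reflection) minimizing the residual.
    A = A - A.mean(axis=0)
    B = B - B.mean(axis=0)
    A = A / np.linalg.norm(A)
    B = B / np.linalg.norm(B)
    s = np.linalg.svd(A.T @ B, compute_uv=False)
    # Residual shape distance after the optimal orthogonal alignment.
    return np.sqrt(max(0.0, 2.0 - 2.0 * s.sum()))
```

Because rows are matched by index, relabeling landmarks changes the distance; a label-invariant measure must return the same value for every row permutation.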