First, it is demonstrated in the continuous domain that least squares phase unwrapping can be expressed as a Poisson equation; solvers for the Poisson problem can therefore be employed naturally for 2D phase unwrapping. Error propagation in least squares phase unwrapping is then investigated by means of the Green's function, exploiting the equivalence between the least squares and Green's function methods. Because a multigrid algorithm for solving the weighted least squares equations provides stable solutions with fast convergence, it is combined here with the Branch Cut algorithm into a new synthetic algorithm, in which the Branch Cut step supplies the weighting factors and the initial values. The new synthetic algorithm combines the advantages of the Branch Cut and Weighted Multigrid algorithms, and results on real X-SAR data of Mount Etna, Italy, confirm its stable and fast performance.
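To make the Poisson formulation concrete, the sketch below implements the standard unweighted least-squares unwrapper, which solves the discrete Poisson equation with a DCT; the weighted multigrid solver and the Branch Cut seeding described above are not reproduced, and the function names and interface are illustrative only.

```python
import numpy as np
from scipy.fft import dctn, idctn

def wrap(p):
    """Wrap values into (-pi, pi]."""
    return np.angle(np.exp(1j * p))

def unwrap_ls_dct(psi):
    """Unweighted least-squares phase unwrapping (Poisson / DCT solver).

    psi : 2-D array of wrapped phase in radians.
    Returns the least-squares unwrapped phase (up to an additive constant).
    """
    M, N = psi.shape
    # Wrapped forward differences (zero gradient assumed at the borders).
    dx = np.zeros_like(psi)
    dy = np.zeros_like(psi)
    dx[:, :-1] = wrap(np.diff(psi, axis=1))
    dy[:-1, :] = wrap(np.diff(psi, axis=0))
    # Divergence of the wrapped gradient = right-hand side of Poisson's equation.
    rho = dx.copy()
    rho[:, 1:] -= dx[:, :-1]
    rho += dy
    rho[1:, :] -= dy[:-1, :]
    # Solve the discrete Poisson equation with Neumann boundaries via the DCT.
    rho_hat = dctn(rho, type=2, norm='ortho')
    i = np.arange(M)[:, None]
    j = np.arange(N)[None, :]
    denom = 2.0 * (np.cos(np.pi * i / M) + np.cos(np.pi * j / N) - 2.0)
    denom[0, 0] = 1.0            # avoid division by zero; the DC term is arbitrary
    phi_hat = rho_hat / denom
    phi_hat[0, 0] = 0.0
    return idctn(phi_hat, type=2, norm='ortho')
```

The wrapped gradients form the source term rho, and dividing the DCT coefficients by the eigenvalues of the discrete Laplacian inverts the Poisson equation in one step.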
Recent advances in the areas of phase history processing, interferometric SAR (IFSAR) processing algorithms, and radargrammetric adjustment have made it possible to extract extremely accurate Digital Elevation Model (DEM) information from SAR images. Results of tests using recent improvements by the authors in the phase unwrapping and interferogram conditioning steps show that it might be possible to obtain good elevation accuracy from noisy interferograms resulting from foliage or extreme terrain. Results of ERS-1/ERS-2 Tandem data are presented.
In this paper, we discuss the registration of two interfering SAR images based on least squares theory. A new measure, the sum of squared phase differences between the two SAR images, is proposed and minimized to find the optimal registration parameters. In one previous study, images from two satellite passes were registered by computing a statistical correlation function between the two images over discrete pixel offsets and then interpolating the correlation function to find its extremum. Another approach interpolates the second image on a subpixel basis, evaluates the average fluctuation function of the phase difference image, adjusts the registration parameters according to the change in the average fluctuation function, and interpolates the second image again; the process is repeated until the average fluctuation function reaches its minimum. In our approach, registration is performed according to least squares theory by computing the sum of squared phase differences between the two SAR images; the optimal registration parameters are found where this sum is minimized. The least squares registration achieves subpixel accuracy and also provides an explicit estimate of that accuracy. In our test, the registration accuracy is 0.05 to 0.07 pixel.
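As a rough illustration of the criterion, the sketch below evaluates the sum of squared wrapped phase differences over a grid of candidate subpixel offsets and returns the minimizing offset; the spline interpolation, brute-force search strategy, and 0.05-pixel step are assumptions rather than the authors' implementation.

```python
import numpy as np
from scipy.ndimage import shift

def phase_cost(master, slave, dx, dy):
    """Sum of squared wrapped phase differences for one candidate subpixel offset.

    master, slave : complex SLC image chips of the same shape.
    dx, dy        : candidate subpixel shift applied to the slave image.
    """
    # Shift real and imaginary parts separately (cubic spline interpolation).
    s = shift(slave.real, (dy, dx), order=3) + 1j * shift(slave.imag, (dy, dx), order=3)
    dphi = np.angle(master * np.conj(s))        # wrapped interferometric phase
    return np.sum(dphi ** 2)

def register(master, slave, span=1.0, step=0.05):
    """Brute-force search for the offset minimising the phase-difference cost."""
    offsets = np.arange(-span, span + step, step)
    best = min(((phase_cost(master, slave, dx, dy), dx, dy)
                for dx in offsets for dy in offsets), key=lambda t: t[0])
    return best[1], best[2]       # (dx, dy) at the cost minimum
```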
In the next few years, there will be a substantial increase in the number of commercial space-based and airborne Synthetic Aperture Radar (SAR) systems and three-dimensional SAR systems (interferometric SAR, IFSAR). This will provide affordable new types of data that can complement other sensor systems (e.g., Landsat, SPOT) and, in some cases, remedy serious data collection deficiencies. The availability of these data has generated strong interest in developing a commercial market for derived products. This paper describes a methodology for developing such products and presents results from applying it.
The Extended Fractal (EF) feature has been shown to lower the false alarm rate for the focus of attention (FOA) stage of a synthetic aperture radar (SAR) automatic target recognition (ATR) system. The feature is both contrast and size sensitive, and thus, can discriminate between targets and many types of cultural clutter at the earliest stages of the ATR. In this paper we modify the EF feature so that one can 'tune' the size sensitivity to the specific targets of interest. We show how to optimize the EF feature using target chip data from the public MSTAR database. We demonstrate improvements in performance for FOA algorithms that include the new feature by comparing the receiver operating characteristic (ROC) curves for all possible combinations of FOA algorithms incorporating EF, two-parameter CFAR, and variance features. Finally, we perform timing experiments on the fused detector to demonstrate the feasibility for implementation of the detector in a real system.
This paper describes a technique recently developed for target detection and false alarm reduction for the Predator unmanned aerial vehicle (UAV) tactical endurance synthetic aperture radar (TESAR) automatic target recognition (ATR) system. The approach does not attempt to label various objects in the SAR image (i.e., buildings, trees, roads); instead, it finds target-like characteristics in the image and compares their statistical/spatial relationship to larger structures in the scene. To do this, the approach merges the output of multiple CFAR (constant false alarm rate) surfaces through a sequence of mathematical morphology tests. The output is further tested by a 'smart' clustering procedure, which performs an object-size test. With the use of these CFAR surfaces, a methodical sequence of morphological tests finds and retains large structures in the scene and eliminates cues that fall within these structures. The presence of supporting shadow downrange from the sensor is also used to eliminate objects with heights not typical of targets. Finally, a fast procedure performs a size test on elongated streaks; this allows long objects to be clustered as a single object while ensuring no performance degradation in target-proximity scenarios. Application of this false alarm mitigator/detector to the Predator's SAR ATR algorithm suite reduced the number of cues yielded by its baseline detector by an order of magnitude. This performance was consistent in scenes having natural and/or cultural clutter.
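For reference, a minimal two-parameter CFAR surface of the kind merged by the morphology tests can be sketched as below; the stencil and guard sizes, threshold, and the box-filter approximation of the guard-banded background window are assumptions, and the morphological fusion itself is not shown.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def two_parameter_cfar(img, stencil=41, guard=21, k=3.5):
    """Two-parameter CFAR surface: (pixel - local mean) / local std.

    Background statistics are taken over a square stencil with an inner guard
    region excluded, approximated here by differencing two box filters.
    """
    big_n, small_n = stencil ** 2, guard ** 2
    big_sum = uniform_filter(img, stencil) * big_n
    small_sum = uniform_filter(img, guard) * small_n
    big_sq = uniform_filter(img ** 2, stencil) * big_n
    small_sq = uniform_filter(img ** 2, guard) * small_n
    n = big_n - small_n
    mean = (big_sum - small_sum) / n
    var = np.maximum((big_sq - small_sq) / n - mean ** 2, 1e-12)
    cfar = (img - mean) / np.sqrt(var)
    return cfar > k               # boolean detection surface at threshold k
```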
In this paper, we establish a data model for the feature extraction of point scatterers in the presence of motion through resolution cell (MTRC) errors and unknown noise. The data model is a sum of two-dimensional sinusoidal signals with quadratic phase errors, caused by 'range walk' and 'variable range rate', respectively. Based on this data model, we propose a parametric RELAX-based algorithm to extract the target features when MTRC errors are present in radar imaging. The algorithm minimizes a complicated nonlinear least-squares (NLS) cost function and proceeds alternately, letting only the parameters and errors of one scatterer vary while freezing all others at their most recently determined values. The Cramer-Rao bounds (CRBs) for the parameters of the data model are also derived. We compare the performance of the proposed algorithm with the CRBs by simulation; the results show that the mean squared errors of the parameter estimates obtained by the algorithm approach the corresponding CRBs. We then apply the algorithm to simulated radar data with MTRC errors. The proposed algorithm generates a 'focused' point image with higher resolution, which confirms the algorithm and the data model.
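A minimal one-dimensional RELAX iteration, which alternately re-estimates one sinusoid while freezing the others, is sketched below; the paper's two-dimensional model with quadratic (MTRC) phase terms and the CRB derivation are not included, and the zero-padding factor is arbitrary.

```python
import numpy as np

def relax_1d(y, K, n_iter=20, nfft=4096):
    """RELAX-style estimation of K complex sinusoids in the 1-D sequence y.

    Frequencies are taken from the peak of a zero-padded periodogram of the
    residual, amplitudes from the matched filter at that frequency.
    """
    N = len(y)
    n = np.arange(N)
    freqs = np.zeros(K)
    amps = np.zeros(K, dtype=complex)

    def estimate_one(residual):
        spec = np.fft.fft(residual, nfft) / N
        k = np.argmax(np.abs(spec))
        f = k / nfft                                   # cycles per sample
        a = np.mean(residual * np.exp(-2j * np.pi * f * n))
        return f, a

    for _ in range(n_iter):
        for k in range(K):
            # Subtract the current estimates of all other sinusoids.
            others = sum(amps[j] * np.exp(2j * np.pi * freqs[j] * n)
                         for j in range(K) if j != k and amps[j] != 0)
            freqs[k], amps[k] = estimate_one(y - others)
    return freqs, amps
```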
SAR/ISAR image processing involves a two-dimensional Fourier transform that may produce significant high intensity sidelobes which obscure low intensity scatterers in the image. Spatially variant sidelobe apodization is a technique that reduces sidelobe levels in a final Fourier image while maintaining the image resolution that would be obtained using the rectangular window. In this paper, a generalization of this technique based on the use of different parametric windows is proposed. Low sidelobe levels are obtained at the expense of increasing the complexity of the sidelobe apodization algorithm. Similar resolution and lower sidelobe levels were obtained using a one-dimensional example when compared to the spatially variant apodization technique. The method was also tested and results are shown when using this new sidelobe apodization technique with a two dimensional ISAR image.
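For context, the baseline (non-parametric) spatially variant apodization that the proposed generalization extends can be sketched in one dimension as below, assuming Nyquist-sampled complex data and per-sample cosine-on-pedestal weights applied to the I and Q channels separately.

```python
import numpy as np

def sva_1d(x):
    """Spatially variant apodization on a 1-D Nyquist-sampled complex cut.

    Each output sample uses its own weight in [0, 0.5], which either leaves
    the sample unchanged, applies Hanning-like smoothing, or nulls it.
    """
    def channel(u):
        y = u.copy()
        s = np.zeros_like(u)
        s[1:-1] = u[:-2] + u[2:]                 # sum of the two neighbours
        with np.errstate(divide='ignore', invalid='ignore'):
            w = np.where(s != 0, -u / s, 0.0)
        w = np.clip(w, 0.0, 0.5)                 # admissible cosine-on-pedestal weights
        y[1:-1] = u[1:-1] + w[1:-1] * s[1:-1]
        return y
    return channel(x.real) + 1j * channel(x.imag)
```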
In this paper, we present a detailed description of a non-parametric two-dimensional (2-D) procedure to extrapolate a signal, denoted Adaptive Weighted Norm Extrapolation (AWNE), and we propose its application to SAR image formation. The benefits of the AWNE procedure are shown when it is applied to the MSTAR target image database. Once the phase history is recovered, the AWNE method is applied to a subaperture or to the full set of frequency samples to extrapolate them to a larger aperture; the inverse DFT is then applied to obtain the new complex SAR image. The 2-D AWNE procedure proves superior to its one-dimensional version by reducing undesirable effects such as sidelobe interference and variability in the energy of the extrapolated data from row to row and from column to column. To assess the performance of AWNE in enhancing prominent scatterers, reducing speckle, and suppressing clutter, we compare the super-resolved images to images formed with the traditional Fourier technique starting from the same frequency samples. Both images are also compared with images formed from less data, to assess the quality of the extrapolation and to quantify the ability to recover lost resolution. We quantify performance with the help of a target mask produced by a CFAR detector, using metrics such as a peak-location blob-matching count and a mean minimum peak distance. Another focus of our experiments is the illustration of the potential advantages of going beyond the traditional limits of resolution by extrapolating the full aperture of phase history to a larger size. We quantify performance by visual comparison and by the use of a geometric constellation of prominent point scatterers extracted from the images.
A robust semi-parametric algorithm, referred to as SPAR (Semi-PARametric), is presented for feature extraction and complex image formation of targets consisting of both trihedrals and dihedrals via synthetic aperture radar (SAR). The algorithm is based on a flexible data model that represents each target scatterer as a two-dimensional complex sinusoid with arbitrary unknown amplitude and constant phase in cross-range and with constant amplitude and phase in range. Owing to this flexible data model, the proposed algorithm can effectively mitigate artifacts in the SAR images by dealing with one corner reflector, such as a dihedral or trihedral, at a time. Another advantage of SPAR is that it can be used to obtain the initial conditions needed by other parametric target feature extraction methods, reducing the total amount of computation required. Both numerical and experimental examples are provided to demonstrate the performance of the proposed algorithm.
We present algorithms for feature extraction from complex SAR imagery. The features parameterize an attributed scattering center model that describes both frequency and aspect dependence of scattering centers on the target. The scattering attributes extend the widely-used point scattering model, and characterize physical properties of the scattering object. We present two feature extraction algorithms, an approximate maximum likelihood method that relies on minimization of a nonlinear cost function, and a computationally faster method that avoids the nonlinear minimization step. We present results of applying both algorithms on synthetic model data, on XPatch scattering predictions of the SLICY test target, and on measured X-band SAR imagery.
In conventional synthetic aperture radar (SAR) systems, the image of a moving target is usually smeared and mislocated. In this paper, a dual-speed SAR imaging approach, in which the radar platform flies at two different speeds during the radar observation time, is proposed to resolve these two problems, especially the mislocation problem. We also propose several practical approaches to the realization of the dual-speed radar platform. Some simulation results are given.
Our goal is to study the properties of a random FM process correlator as applied to radar imaging. The driver of the transmitted FM signal is a Gaussian, bandlimited random process, while the initial phase of the process is uniformly distributed; the FM process is thus wide-sense stationary. For wideband modulation, both the autocorrelation and the spectrum of the FM process are approximately Gaussian, as inferred from Woodward's adiabatic principle. Since the half-power bandwidth of the process is linearly proportional to the modulation index λ, the range-delay resolution is inversely proportional to λ. We use Monte Carlo simulations to illustrate the stationary nature of the correlator output. Radar imaging of rotating targets is implemented using a microwave tomography algorithm, which requires data collection over a finite number of viewing angles. We demonstrate that the self-noise power in this type of imagery is controlled by the number of samples processed and the number of signal realizations included in calculating the autocorrelations.
Algorithms widely used for Doppler parameter estimation in SAR processing show some limitations when the platform parameters are time-varying. In this paper an alternative technique based on the Wigner distribution is proposed to reconstruct the full Doppler history of an emerging point target and then to estimate the Doppler rate in short time intervals (sub-apertures). Moreover, the Doppler centroid is estimated from the instantaneous frequency corresponding to antenna boresight. Methods for the sub-aperture focusing of emerging point targets are also presented. Results obtained with satellite and airborne SAR signals are reported.
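A minimal sketch of the idea, assuming a pseudo Wigner-Ville distribution computed over a sliding lag window and a least-squares fit of the instantaneous-frequency track, is given below; the window length, the simple peak-picking rule, and the neglected WVD aliasing limit are simplifications, not the paper's estimator.

```python
import numpy as np

def pseudo_wvd(x, win_len=65):
    """Pseudo Wigner-Ville distribution of the complex azimuth signal x.

    win_len must be odd. Returns a (len(x), win_len) time-frequency map; the
    peak along frequency at each slow-time sample approximates the
    instantaneous Doppler frequency.
    """
    N = len(x)
    half = win_len // 2
    xp = np.pad(x, half)
    tau = np.arange(-half, half + 1)
    wvd = np.zeros((N, win_len))
    for m in range(N):
        kernel = xp[m + half + tau] * np.conj(xp[m + half - tau])
        wvd[m] = np.abs(np.fft.fftshift(np.fft.fft(kernel)))
    return wvd

def doppler_rate(x, prf, win_len=65):
    """Least-squares slope of the instantaneous frequency over a sub-aperture."""
    wvd = pseudo_wvd(x, win_len)
    # Factor of 2 because the WVD lag kernel oscillates at twice the signal frequency.
    f_inst = (np.argmax(wvd, axis=1) - win_len // 2) * prf / (2 * win_len)
    t = np.arange(len(x)) / prf
    return np.polyfit(t, f_inst, 1)[0]     # Doppler rate in Hz/s
```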
When a space-borne SAR operates on a low-stability platform, the Doppler parameters (Doppler centroid and Doppler frequency rate) vary during the azimuth illumination time, in addition to their variation as a function of range distance. That is to say, the azimuth reference function is two-dimensionally variant (in range and azimuth), and the conventional clutter-lock and autofocus methods for estimating the Doppler parameters are not sufficient for imaging. A model-based algorithm is proposed to compensate exactly the error introduced by platform attitude instability; the efficiency of the algorithm is tested with computer simulation.
In squint-mode SAR, the antenna beam is pointed at a squint angle away from the side-looking direction. This imaging mode can obtain images of the same scene at multiple resolutions as the squint angle varies. However, the imaging mode has to overcome three problems: (1) compensating the large range walk and range curvature; (2) developing a mapping technique that works even when the size, shape, and orientation of a resolution cell vary; and (3) fusing the different-resolution images to obtain the best imaging results and the most information about the imaged terrain. The paper analyzes these problems and presents an imaging scheme that includes an imaging algorithm, a geometry correction method, and a fusion method for images of different resolutions.
This paper presents a new technique for FOPEN SAR (foliage penetration synthetic aperture radar) image formation of ultra-wideband UHF radar data. Planar Subarray Processing (PSAP) has successfully demonstrated the capability of forming multi-resolution images for X- and Ka-band radar systems under MITRE IR&D and the DARPA IBC program. We have extended the PSAP algorithm to provide the capability to form stripmap, multi-resolution images for ultra-wideband UHF radar systems. PSAP processing can accommodate very large SAR integration angles and the resulting very large range migration, as well as long coherent integration times and wide swath coverage. Major PSAP algorithm features include: multiple SAR sub-arrays that provide different look angles at the same image area, enabling man-made target responses to be distinguished from other targets and clutter by their angle-dependent specular characteristics; the capability to provide a full-resolution image in these and other selected areas without paying the processing penalty of full resolution where it is not required; and the capability to include angle-dependent motion compensation within the image formation process.
This paper is concerned with multidimensional signal processing and image formation with FOliage PENetrating (FOPEN) airborne radar data which were collected by a Navy P-3 ultra wideband (UWB) radar in 1995 [Raw]. A commonly-used assumption for the processing of the P-3 data is that the beamwidth angle of the radar is limited to 35 degrees [Bes], [Goo]; provided that this assumption is valid, the PRF of the P-3 SAR system yields alias-free data in the slow-time Doppler domain. However, controlled measurements with the P-3 radar have indicated a beamwidth which exceeds 35 degrees [Raw]. In this paper, we examine a method for processing of the P-3 data in which the incorrect assumption that its radar beamwidth angle is limited to 35 degrees is not imposed. In this approach, a SAR processing scheme which enables the user to extract the SAR signature of a specific target area (digital spotlighting) is used to ensure that the resultant reconstructed SAR image is not aliased [S94], [S95], [S99]. The images which are formed via this method with 8192 pulses are shown to be superior in quality to the images which are formed via the conventional P-3 processor with 16386 pulses which was developed at the MIT Lincoln Laboratory [Bes]. In the presentation, we also introduce a method for converting the P-3 deramped data into its alias-free baseband echoed data; the signature of the Radio Frequency Interference (RFI) signals in the two-dimensional spectral domain of the resultant data is examined.
This paper presents a case study in using parallel processing technology for large-scale production of Foliage Penetration (FOPEN) Synthetic Aperture Radar (SAR) imagery. The initial version of the FOPEN SAR image formation software ran on a Unix workstation. The research-grade parallel image formation software was transitioned into a full-scale remote processing facility resulting in a significant improvement in processing speed. The primary goal of this effort was to increase the production rate of calibrated, well-focused SAR imagery, but an important secondary objective was to gain insight into the capabilities and limitations of high performance parallel platforms. This paper discusses lessons that were learned in transitioning and utilizing the research-grade image formation code in a 'turn key' production setting, and discusses configuration control and image quality metrics.
It is well known that radar scattering from an illuminated object is often dependent on target-sensor orientation. In typical synthetic aperture radar (SAR) imagery, such aspect dependence is lost during image formation. We apply a sequence of directional filters to the SAR imagery to generate a sequence of images which recover the directional dependence over a corresponding sequence of subapertures. The scattering statistics associated with geometrically distinct target-sensor orientations are then used to design a hidden Markov model (HMM) for the target class. This approach explicitly incorporates the sensor motion into the model and accounts for the fact that the orientation of the target is assumed to be unknown. Performance is quantified by considering the detection of tactical targets concealed in foliage.
A method of moments (MoM) analysis is developed for electromagnetic scattering from a dielectric body of revolution (BoR) embedded in a layered medium (the half-space problem constituting a special case). The layered-medium parameters can be lossy and dispersive, of interest for simulating the ground. To make such an analysis tractable for wideband applications, we have employed the method of complex images to evaluate the Sommerfeld integrals characteristic of the dyadic layered-medium Green's function. Scattering results from tree trunks are presented, where tree trunks are well represented as BoRs sitting atop a dielectric half-space. In addition, we use our rigorous MoM algorithm to examine scattering from multiple bodies. In this second study, the MoM matrix equations are derived for a BoR and two flat plate conducting targets. To simplify the analysis, the targets are situated in free space. An electric field integral equation (EFIE) formulation is employed in which the submatrices of the MoM matrix are uncoupled, and the current on each body is solved directly. The currents on each body are then recalculated within an outer iterative loop. This iterative solution procedure is shown to preserve the simplicity and attractiveness of an isolated BoR.
A feature-based approach is taken to reduce the occurrence of false alarms in foliage-penetrating, ultra-wideband, synthetic aperture radar data. A set of 'generic' features is defined based on target size, shape, and pixel intensity. A second set of features is defined that contains the generic features combined with features based on scattering phenomenology. Each set is combined using a quadratic polynomial discriminant (QPD), and performance is characterized by generating a receiver operating characteristic (ROC) curve. Results show that the feature set containing phenomenological features improves performance against both broadside and end-on targets; the improvement against end-on targets is especially pronounced.
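A minimal sketch of a quadratic polynomial discriminant of the kind used to combine the feature sets is shown below, assuming a least-squares fit of the expanded (linear, square, and cross-term) features to ±1 labels; the actual feature definitions and training procedure are not specified here.

```python
import numpy as np

def quadratic_expand(X):
    """Append all squares and pairwise products to the feature matrix X of shape (n, d)."""
    n, d = X.shape
    cross = [X[:, i] * X[:, j] for i in range(d) for j in range(i, d)]
    return np.column_stack([np.ones(n), X] + cross)

def fit_qpd(X, y):
    """Least-squares quadratic polynomial discriminant; y is +1 (target) / -1 (clutter)."""
    Z = quadratic_expand(X)
    w, *_ = np.linalg.lstsq(Z, y, rcond=None)
    return w

def qpd_score(w, X):
    """Discriminant score; sweeping a threshold on this score traces a ROC curve."""
    return quadratic_expand(X) @ w
```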
The paper describes a new method to detect man-made objects hidden under foliage or camouflage. The method is based on change detection and thus on multiple revisits of the same area. It uses SAR image data provided by the low-frequency, ultra-wideband CARABAS SAR system, which operates in the 20-90 MHz frequency range. Experimental results show a drastic reduction in false-alarm rate compared to methods based on single-pass SAR images. Small- to medium-sized trucks are consistently detected with a false-alarm rate on the order of 0.1 to 1 per km2. This level of false-alarm rate is quite sufficient for most military or civilian applications of interest.
An ATR algorithm based on Fisher templates was applied to FOPEN SAR data, yielding very good classification results. Along with applying ATR to the single-polarization case, we also investigated the effects of superresolution and of combining polarizations on ATR classification performance. For the multiple-polarization cases, we compare the results to the average single-polarization case (74% classification rate without superresolution and 82% with superresolution). (1) Combining polarizations via the PWF did not aid ATR; in fact, it had an adverse effect on the classification rates. (2) Combining via voting showed a marginal improvement for data with superresolution (85%) and a detriment for data without superresolution (67%). (3) For combining in data space, the results were on par for the case with superresolution (82%); for the case without superresolution, combining all three polarizations and combining HH and VV gave a significant improvement, with results on par with superresolution (83%), but combining HH/HV and VV/HV remained on par with the case without superresolution. It is noted, however, that combining in data space is a computationally expensive operation. (4) Combining in feature space was on par for the case with superresolution (83%) and for the case without superresolution (75%). Based on these investigations, we can summarize the results as follows: (1) superresolution improves the classification rates for the single-polarization case, (2) the use of superresolution together with multiple (two or three) polarizations has no advantage over using superresolution alone, and (3) combining all three polarizations (without superresolution) does improve the classification rates over the single-polarization case.
Various new improvements to the MINACE distortion-invariant filter are considered. Improvements in the probability of detection (PD) and probability of false alarm (PFA) are noted; PD improved by approximately 25%. Initial ROC data and algorithm fusion results indicate that these new filters can improve the performance obtained by other methods.
We present a new contrast box algorithm for detection of ships in SAR survey (50 m resolution) data. It uses new guard band concepts and conditional contrast box parameter computations (these allow the algorithm to be modified for special problematic ship cases). Initial results are very attractive.
The maximum average correlation height (MACH) filter and distance classifier correlation filter (DCCF) correlation algorithms are evaluated using the 10 class publicly released MSTAR database. The successful performance of these algorithms on a 3-class problem has been previously reported. The algorithms are optimized by design to be robust to variations (distortions) in the target's signature as well as discriminate between classes. Unlike Matched Filtering (or other template based methods), the proposed approach requires relatively few filters. The paper reviews the theory of the algorithm, key practical advantages and details of test results on the 10-class public MSTAR database.
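For orientation, a frequency-domain MACH filter of a commonly used form can be sketched as follows; the alpha, beta, gamma weights and the white-noise term are placeholders, and the distance classifier correlation filter (DCCF) stage is not shown.

```python
import numpy as np

def mach_filter(train_imgs, alpha=0.01, beta=1.0, gamma=1.0):
    """Build a MACH correlation filter in the frequency domain.

    train_imgs : (N, H, W) real training chips of one target class.
    The denominator mixes a white-noise term (alpha), the average power
    spectrum D (beta), and the spectral variance S (gamma).
    """
    X = np.fft.fft2(train_imgs, axes=(-2, -1))
    M = X.mean(axis=0)                                  # mean training spectrum
    D = np.mean(np.abs(X) ** 2, axis=0)                 # average power spectrum
    S = np.mean(np.abs(X - M) ** 2, axis=0)             # spectral variance
    return np.conj(M) / (alpha + beta * D + gamma * S)

def correlate(h, img):
    """Correlation surface of a test chip with the MACH filter h."""
    return np.real(np.fft.ifft2(np.fft.fft2(img) * h))
```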
In this paper, we introduce an efficient end-to-end system for SAR automatic target recognition, giving particular emphasis to the discrimination and classification stages. The target discrimination method presented here is based on features extracted from the Radon transform; it estimates the length and width of the detected object to discriminate it as target or clutter. Like the Army Research Laboratory (ARL) and MIT Lincoln Laboratory (MIT/LL) approaches, our classification stage performs gray-scale correlation on full-resolution sub-image chips. The pattern matching references are constructed by averaging five consecutive spotlight-mode images of targets collected at 1-degree azimuth increments. Morphology operations and feature clustering are used to produce accurate image segmentation. The target aspect is estimated to reduce the pose hypothesis search space. Our end-to-end system has been tested using the public MSTAR target database; it produces high discrimination and classification probabilities with a relatively low false alarm rate.
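The length/width discrimination step can be illustrated with the rotation-and-projection sketch below, which treats each rotation of the segmented target mask as a discrete Radon projection; the angular step and the support-based size measure are assumptions, not the paper's exact estimator.

```python
import numpy as np
from scipy.ndimage import rotate

def radon_length_width(mask, angles=np.arange(0, 180, 2)):
    """Estimate target length, width, and orientation from a binary target mask.

    For each angle the mask is rotated and projected onto the image axes
    (discrete Radon projections); the orientation is taken as the angle that
    minimises the projection support, and length/width are the two supports.
    """
    best = None
    for ang in angles:
        rot = rotate(mask.astype(float), ang, reshape=True, order=1) > 0.5
        wx = np.count_nonzero(rot.any(axis=0))   # support of the column projection
        wy = np.count_nonzero(rot.any(axis=1))   # support of the row projection
        if best is None or min(wx, wy) < best[0]:
            best = (min(wx, wy), max(wx, wy), ang)
    width, length, orientation = best
    return length, width, orientation
```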
The Lincoln Laboratory baseline ATR system for synthetic aperture radar (SAR) data applies a super-resolution technique known as high-definition vector imaging (HDVI) before the input image is passed through the final target classification subsystem. In previous studies, it has been demonstrated that HDVI improves target recognition performance significantly. Recently, however, several other viable SAR image enhancement techniques have been proposed and discussed in the literature which could be used in place of (or perhaps in conjunction with) the HDVI technique. This paper compares the performance achieved by the Lincoln Laboratory template-based classification subsystem when these alternative image enhancement techniques are used instead of the HDVI technique. In addition, empirical evidence is presented suggesting that target recognition performance could be further improved by fusing the classifier outputs generated by the best image enhancement techniques.
In this paper, we develop a novel approach to target classification in synthetic aperture radar (SAR) imagery. In contrast to the conventional approach, in which grayscale test images are compared to templates using a mean-square error (MSE) criterion, we coarsely quantize the grayscale pixel values and then conduct maximum-likelihood (ML) classification using simple, robust statistical models. The advantage of this approach is that coarse quantization can preserve a great deal of discriminating information while simultaneously reducing the complexity of the statistical variation of target SAR signatures to something that can be characterized accurately. We consider two distinct quantization schemes, each having its own merits. The first preserves the contrast among the target, shadow and background regions while sacrificing the target region's internal structural detail; the second preserves the target's shape and internal structural detail while sacrificing the contrast between the shadow and background regions. We postulate statistical models for the conditional likelihood of quantized imagery (one model per quantization scheme), identify model parameters from data, and then build and test ML target classifiers. For a number of challenging ATR problems examined in DARPA's Moving and Stationary Target Acquisition and Recognition (MSTAR) program, these ML classifiers are found to lead to significantly better classification performance than that obtained with the MSE metric, and as good or better than that obtained with virtually all competing MSTAR-developed approaches.
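A minimal sketch of the quantize-then-classify idea, assuming per-pixel categorical (multinomial) likelihoods with Laplace smoothing, is shown below; the paper's specific quantization thresholds and statistical models are not reproduced.

```python
import numpy as np

def quantize(chips, thresholds):
    """Coarse quantization of SAR chips into len(thresholds)+1 levels."""
    return np.digitize(chips, thresholds)

def fit_class_model(train_chips, thresholds, eps=1.0):
    """Per-pixel categorical likelihoods for one class (Laplace-smoothed)."""
    q = quantize(train_chips, thresholds)               # (N, H, W) integer levels
    L = len(thresholds) + 1
    counts = np.stack([(q == l).sum(axis=0) for l in range(L)], axis=0) + eps
    return counts / counts.sum(axis=0, keepdims=True)   # (L, H, W) probabilities

def log_likelihood(model, chip, thresholds):
    """Log-likelihood of a quantized test chip under one class model."""
    q = quantize(chip, thresholds)
    H, W = q.shape
    rows, cols = np.arange(H)[:, None], np.arange(W)[None, :]
    return np.sum(np.log(model[q, rows, cols]))
```

A test chip is assigned to whichever class (and pose model) yields the largest log-likelihood.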
In conventional SAR image formation, idealizations are made about the underlying scattering phenomena in the target field. In particular, the reflected signal is modeled as a pure delay and scaling of the transmitted signal where the delay is determined by the distance to the scatterer. Inherent in this assumption is that the scatterers are isotropic, i.e. their reflectivity appears the same from all orientations, and frequency independent, i.e. the magnitude and phase of the reflectivity are constant with respect to the frequency of the transmitted signal. Frequently, these assumptions are relatively poor resulting in an image which is highly variable with respect to imaging aspect. This variability often poses a difficulty for subsequent processing such as ATR. However, this need not be the case if the nonideal scattering is taken into account. In fact, we believe that if utilized properly, these nonideal characteristics may actually be used to aid in the processing as they convey distinguishing information about the content of the scene under investigation. In this paper, we describe a feature set which is specifically motivated by scattering aspect dependencies present in SAR. These dependencies are learned with a nonparametric density estimator allowing the full richness of the data to reveal itself. These densities are then used to determine the classification of the image content.
We addressed the problem of classifying 10 target types in imagery formed from synthetic aperture radar (SAR). By executing a group training process, we show how to increase the performance of 10 initial sets of target templates formed by simple averaging. This training process is a modified learning vector quantization (LVQ) algorithm that was previously shown effective with forward-looking infrared (FLIR) imagery. For comparison, we ran the LVQ experiments using coarse, medium, and fine template sets that captured the target pose signature variations over 60 degrees, 40 degrees, and 20 degrees, respectively. Using sequestered test imagery, we evaluated how well the original and post-LVQ template sets classify the 10 target types. We show that after the LVQ training process, the coarse template set outperforms the coarse and medium original sets. And, for a test set that included untrained version variants, we show that classification using coarse template sets nearly matches that of the fine template sets. In a related experiment, we stored 9 initial template sets to classify 9 of the target types and used a threshold to separate the 10th type, previously found to be a 'confusing' type. We used imagery of all 10 targets in the LVQ training process to modify the 9 template sets. Overall classification performance increased slightly and an equalization of the individual target classification rates occurred, as compared to the 10-template experiment. The SAR imagery that we used is publicly available from the Moving and Stationary Target Acquisition and Recognition (MSTAR) program, sponsored by the Defense Advanced Research Projects Agency (DARPA).
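An LVQ1-style refinement of averaged templates of the kind described above can be sketched as follows; the learning rate, epoch count, and squared-error distance are assumptions.

```python
import numpy as np

def lvq_train(templates, labels, chips, chip_labels, lr=0.05, epochs=10):
    """LVQ1-style refinement of averaged target templates.

    templates   : (M, H, W) initial templates (e.g. pose-sector averages).
    labels      : (M,) class index of each template.
    chips       : (N, H, W) training chips; chip_labels their class indices.
    The winning template is pulled toward correctly classified chips and
    pushed away from misclassified ones.
    """
    T = templates.astype(float)
    for _ in range(epochs):
        for x, y in zip(chips, chip_labels):
            d = np.array([np.sum((x - t) ** 2) for t in T])
            w = np.argmin(d)                       # winning template
            sign = 1.0 if labels[w] == y else -1.0
            T[w] += sign * lr * (x - T[w])
    return T
```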
Support vector machines (SVMs) are one of the most recent tools to be developed from research in statistical learning theory. The foundations of SVMs were developed by Vapnik, and they are gaining popularity within the learning theory community due to many attractive features and excellent demonstrated performance. However, SVMs have not yet gained popularity within the synthetic aperture radar (SAR) automatic target recognition (ATR) community. The purpose of this paper is to introduce the concepts of SVMs and to benchmark their performance on the Moving and Stationary Target Acquisition and Recognition (MSTAR) data set.
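A minimal benchmarking sketch using a generic RBF-kernel SVM on flattened magnitude chips is given below; the feature representation, preprocessing, and hyperparameters are placeholders rather than the configuration evaluated in the paper.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def benchmark_svm(train_chips, train_labels, test_chips, test_labels):
    """Baseline SVM benchmark on flattened SAR magnitude chips.

    Raw pixel magnitudes and the RBF hyperparameters below are assumptions,
    not the paper's feature set.
    """
    Xtr = np.abs(train_chips).reshape(len(train_chips), -1)
    Xte = np.abs(test_chips).reshape(len(test_chips), -1)
    clf = make_pipeline(StandardScaler(), SVC(kernel='rbf', C=10.0, gamma='scale'))
    clf.fit(Xtr, train_labels)
    return clf.score(Xte, test_labels)     # classification accuracy
```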
The goal of this research is to exploit couplings between tracking and ATR systems employing high range resolution radar (HRRR) and moving target indicator (MTI) measurements. As will be shown, these systems are coupled via pose, kinematic, and association constraints. Exploiting these couplings results in a tightly coupled system with significantly improved performance. The problem involves two different types of spaces, namely the continuous-space kinematics (e.g., position and velocity) and the discrete-space target type. A multiple model estimator (MME) was chosen for this problem. The MME consists of a bank of extended Kalman filters (one for each target type); the continuous-space kinematics are handled by these extended Kalman filters. Further, the probability of each Kalman filter is computed and used to determine the corresponding discrete-space target probability. Presented in this paper are empirical results that show improvement over conventional techniques.
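The discrete target-type probabilities in such a multiple model estimator are typically obtained by reweighting each filter's prior with its Gaussian innovation likelihood; a minimal sketch of that update, with hypothetical inputs drawn from each extended Kalman filter, is given below.

```python
import numpy as np

def update_model_probabilities(priors, innovations, innovation_covs):
    """Multiple-model estimator: update discrete target-type probabilities.

    priors          : (M,) prior model probabilities.
    innovations     : list of M measurement residuals (one per Kalman filter).
    innovation_covs : list of M innovation covariance matrices.
    Each filter's Gaussian measurement likelihood reweights its model.
    """
    likelihoods = np.empty(len(priors))
    for k, (nu, S) in enumerate(zip(innovations, innovation_covs)):
        d = len(nu)
        norm = np.sqrt((2 * np.pi) ** d * np.linalg.det(S))
        likelihoods[k] = np.exp(-0.5 * nu @ np.linalg.solve(S, nu)) / norm
    post = priors * likelihoods
    return post / post.sum()
```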
Target classification using an adjusted relative-phase MSE nearest neighbor classifier and multi-look RELAX measures is studied. This approach is a significant modification of the standard MSE nearest neighbor classifiers currently in use. Complex train and test signature similarities are amplified via a multi-look RELAX algorithm to obtain features consistent with point-scatterer parametric models. The parameters are then used to simulate phase histories of various lengths. Transformation to the range domain at the original and improved resolutions, followed by adjustment of the relative phase to a minimum, completes the final data preparation for input to a nearest neighbor MSE classifier. Significant classification performance gains over the baseline MSE classifier are observed and graphically illustrated.
The statistical feature-based (StaF) classifier is presented for robust high range resolution (HRR) radar moving ground target identification. The target features used for classification are the amplitude and location of HRR signature peaks. The peak features are not predetermined using the training data but are extracted on-the-fly from the observed HRR profile and are different for each target observation. A classifier decision is made after statistical evidence is accrued from each feature and across multiple looks. Decision uncertainty is estimated using a belief-based confidence measure. Classifier decisions are rejected if the decision uncertainty is too high since it is likely that the observed HRR profile is not in the classifier's target database. Robustness is achieved by using only peak features rather than the entire HRR profile (much of which is low-level scatterers buried in noise or simply noise) and by rejecting decisions with high uncertainty.
Target recognition with an HRR signature can be viewed as consisting of pre-discriminant, discriminant, and post-discriminant phases. The large variability and feature uncertainty of HRR signatures can, to a good extent, be handled by detailed modeling of the underlying physical and electromagnetic phenomena. However, some signature and feature variabilities pass through and continue to exist in the post-discriminant phase, and a decision about the class of a signature must account for this residual uncertainty. In this paper we demonstrate an evidence-theory-based method for the post-discriminant decision-making phase that minimizes the effects of the signature variability.
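As one concrete instance of an evidence-theory combination step, Dempster's rule for two basic probability assignments can be sketched as below; the paper's specific frame of discernment and mass construction are not reproduced.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination for two basic probability assignments.

    m1, m2 : dicts mapping frozenset focal elements to masses summing to 1.
    Conflicting mass is renormalised away (the two sources are assumed not
    to be totally conflicting).
    """
    combined = {}
    conflict = 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    k = 1.0 - conflict
    return {s: w / k for s, w in combined.items()}
```

For instance, a hypothetical single-look assignment such as m1 = {frozenset({'T72'}): 0.6, frozenset({'T72', 'BMP2'}): 0.4} could be combined with the next look's evidence to sharpen the class decision.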
In this paper we describe a classifier that updates its signature models as testing data arrive. This classification strategy has application to the train on synthetic data and test on measured data methodology prevalent in many ATR systems. Additionally, this type of classifier is applicable to situations where the fielded targets are variants of the targets on which the classifier was trained. The model adaptation is based on a robust estimator of the parameters in a linear subspace model. Like total least squares (TLS), this estimator allows for errors in both the data and in the subspace model. However, unlike total least squares, this estimator allows the perturbation of the model to be constrained. These constraints have simple geometric interpretations and allow for various levels of confidence in the a priori signal model. The estimators of this paper are also distinguished from TLS in that they are invariant to certain arbitrary scalings and rotations of the signal model. This property, which TLS does not possess, is shown to be essential for certain estimation and classification problems.
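For reference, the ordinary (unconstrained) total least squares solution that the constrained, invariant estimator generalizes can be sketched via the SVD of the augmented data matrix, as below.

```python
import numpy as np

def tls_solve(A, b):
    """Ordinary total least squares for A x ~ b (errors in both A and b).

    Solution via the SVD of the augmented matrix [A | b]; the paper's
    constrained, scale/rotation-invariant estimator is not reproduced here.
    """
    C = np.column_stack([A, b])
    _, _, Vt = np.linalg.svd(C)
    v = Vt[-1]                 # right singular vector of the smallest singular value
    return -v[:-1] / v[-1]     # assumes the last component is nonzero
```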
This paper presents the performance of a multi-class, template-based, system-oriented High Range Resolution (HRR) Automatic Target Recognition (ATR) algorithm for ground moving targets. The HRR classifier assumes a target aspect estimate derived from the exploitation of moving target indication (MTI) mode target tracking to reduce the template search space. The impact of the MTI tracker target aspect estimate accuracy on the performance and robustness of the HRR ATR is investigated. Next, both individual and hybrid MTI/HRR and Synthetic Aperture Radar (SAR) model-based ATR algorithm results are presented. The hybrid ATR under consideration assumes the coordination of a multimode sensor to provide classification or continuous tracking of targets in a move-stop-move scenario. That is, a high-value moving target is tracked using the GMTI mode and its heading estimated. As the indicated target stops, the last GMTI tracker update is used to aid the SAR mode ATR target acquisition and classification. As the target begins to move again, the MTI-assisted HRR ATR target identification estimates are fused with the previous SAR ATR classification. The hybrid MTI/SAR/HRR ATR decision-level fusion provides a method for robust classification and/or continuous tracking of targets in move-stop-move cycles. Lastly, the baseline HRR ATR performance is compared to a QuickSAR (short dwell or non-square pixel SAR) ATR algorithm for varying cross-range resolutions.
High Range Resolution (HRR) Moving Target Indicator (MTI) is becoming increasingly important for many military and civilian applications involving the detection and classification of moving targets against a clutter background. For ground-based HRR radar, when targets are moving slowly or near broadside and the coherent processing interval (CPI), or dwell time, is not too long, the effects of range migration and range feature distortion can be ignored. Under these assumptions, relaxation-based algorithms, which are robust and computationally simple, are proposed in this paper for HRR feature extraction of moving targets consisting of scatterers that are very closely spaced in range and share a common Doppler shift, in the presence of stationary clutter. Numerical examples show that the proposed algorithms exhibit super-resolution and excellent estimation performance.
This paper addresses the use of synthetic data for air-to-air High Range Resolution (HRR) radar. Target radar models are used to generate synthetic HRR signatures in order to classify targets when measured data are limited. The target models are made up of a finite set of reflective patches. Modeling these targets is often difficult and frequently produces synthetic signatures that are not sufficiently close to measured data. We describe two approaches to improve classification of targets given limited measured HRR profiles and lower-fidelity synthetic (model-based) profiles. The first method explores the possibility of improving model fidelity given measured HRR data; specifically, we search for material coating reflectance adjustments that consistently improve the synthetic predictions. The second approach attempts to predict missing measured data from the available measured data based on global properties of the synthetic data. This is accomplished by using splines to interpolate and extrapolate between measured data, guided by the global features of the synthetic data. The second method has the advantage that the model need only show global trends in the profiles over various viewing angles to aid in profile prediction. We also present algorithms for the alignment and normalization of measured data to support the above algorithms. Our results show that model corrections made by our algorithm demonstrate interpolative and extrapolative properties over regions where no measured data are available.
Model-based Moving and Stationary Target Recognition
The stability of bright peaks over the complex phase history of synthetic aperture radar (SAR) imagery is examined using a 'sub-aperture imaging' approach. The additional information obtained about the peaks can be applied to the following areas: improved peak matching in the automatic target recognition (ATR) problem; false-alarm determination; and identification of imaging artifacts, such as the sidelobes of bright peaks. To estimate peak-stability information, the complex phase history of the SAR image is broken up into a series of overlapping cross-range windows from which the peak information is extracted. The change in the peak characteristics from one window to the next is correlated to produce information related to the change in the amplitude of the peak over the entire aperture and also to estimate small motions of the peak over the aperture. These changes in peak characteristics are due to various phenomena as the synthetic aperture is swept out: scatterers interfere with each other, scatterer sources are created and destroyed, and aspect-dependent scatterers are interrogated differently with angle. This stability analysis has been incorporated into the online MSTAR system's Feature Extraction module. The products produced for the MSTAR system will be described. Applications of this method to the ATR problem will be discussed.
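The windowing idea can be illustrated with a small sketch: the cross-range phase history is divided into overlapping sub-apertures, each is imaged with a plain FFT, and the brightest peak is tracked from window to window. The window length, hop size, and simple FFT imaging step are assumptions made for brevity.

```python
# Illustrative sketch of sub-aperture peak tracking; not the MSTAR module.
import numpy as np

def subaperture_peak_track(phase_history, win=64, hop=16):
    """phase_history: complex array (range_bins x pulses). Returns per-window
    (amplitude, range index, cross-range index) of the brightest peak."""
    n_rng, n_pulses = phase_history.shape
    tracks = []
    for start in range(0, n_pulses - win + 1, hop):
        sub = phase_history[:, start:start + win]
        # Crude sub-aperture image: FFT over the windowed cross-range samples.
        img = np.abs(np.fft.fftshift(np.fft.fft(sub, axis=1), axes=1))
        r, c = np.unravel_index(np.argmax(img), img.shape)
        tracks.append((img[r, c], r, c))
    return tracks

ph = np.random.randn(128, 256) + 1j * np.random.randn(128, 256)  # toy data
for amp, r, c in subaperture_peak_track(ph)[:3]:
    print(f"peak amp={amp:.1f} at range bin {r}, cross-range bin {c}")
```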
DARPA's Moving and Stationary Target Acquisition and Recognition (MSTAR) program has shown that image segmentation of Synthetic Aperture Radar (SAR) imagery into target, shadow, and background clutter regions is a powerful tool in the process of recognizing targets in open terrain. Unfortunately, SAR imagery is extremely speckled. Impulsive noise can make traditional, purely intensity-based segmentation techniques fail. Introducing prior information about the segmentation image -- its expected 'smoothness' or anisotropy -- in a statistically rational way can improve segmentations dramatically. Moreover, maintaining statistical rigor throughout the recognition process can suggest rational sensor fusion methods. To this end, we introduce two Bayesian approaches to image segmentation of MSTAR target chips based on a statistical observation model and Markov Random Field (MRF) prior models. We compare the results of these segmentation methods to those from the MSTAR program. The technique we find by mapping the discrete Bayesian segmentation problem to a continuous optimization framework can compete easily with the MSTAR approach in speed, segmentation quality, and statistical optimality. We also find this approach provides more information than a simple discrete segmentation, supplying probability measures useful for error estimation.
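As a rough illustration of Bayesian segmentation with an MRF prior, the sketch below uses a Gaussian observation model on log intensity and a Potts prior, solved with simple ICM iterations rather than the continuous optimization described in the abstract; the class means and smoothness weight are placeholder assumptions.

```python
# Simplified sketch of MAP segmentation with a Potts MRF prior via ICM.
import numpy as np

def icm_segment(img, means, beta=1.5, iters=5):
    """img: 2-D log-intensity image; means: per-class mean levels."""
    means = np.asarray(means, dtype=float)
    labels = np.abs(img[..., None] - means).argmin(-1)      # ML initialisation
    H, W = img.shape
    for _ in range(iters):
        for y in range(H):
            for x in range(W):
                data = (img[y, x] - means) ** 2              # data term
                prior = np.zeros_like(means)                 # Potts prior term
                for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < H and 0 <= nx < W:
                        prior += beta * (np.arange(len(means)) != labels[ny, nx])
                labels[y, x] = np.argmin(data + prior)
    return labels

chip = np.random.randn(32, 32)            # stand-in for a log-magnitude chip
seg = icm_segment(chip, means=[-1.0, 0.0, 1.5])   # shadow / clutter / target
print(np.bincount(seg.ravel()))
```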
We present a mathematical framework for investigating the spatial covariance properties of wideband radar signatures from complex targets. The phrase 'complex target' is used to describe the target consisting of a large number of discrete scattering centers distributed over an electrically large volume (such as military air and ground targets). The spatial covariance properties of the scattered field are found to depend primarily on the volume density of the scattering centers and the scattering geometry. The results are presented within a general framework that includes bistatic scattering geometries. Characterizing signature covariance properties is important because they play a significant role in determining the performance of automatic target recognition systems as well as in the design of optimal detection and classification algorithms.
Like the hypothetical shadow watchers of Plato's cave, ATR researchers have spent years in the study of one- and two-dimensional signals collected from three-dimensional targets. Three-dimensional geometric invariance theory of radar returns from moving targets gives us a new opportunity to escape the study of two-dimensional information which is present, with probability one, in the signals from any randomly moving target. Target recognition for moving targets is fundamentally harder than for stationary targets, if one remains in a two-dimensional paradigm. Viewing geometry calculations based on sensor flight lines become false, due to uncontrolled target rotations. Three-dimensional analysis shows that even the most optimal purely two-dimensional approach will generically construct false target measurements and distorted target images. But the geometric facts also show that all types of three-dimensional Euclidean invariants, such as true (not projected) lengths, surface areas, angles, and volumes of target components can be extracted from moving target data. These facts have profound implications for target recognition, and for the dynamic tracking of target movements, allowing target signals to be correlated by comparing fundamental three-dimensional invariants, which are not confounded by changing illumination directions.
Many approaches to target recognition on SAR images employ model-based techniques. These systems incorporate computationally intensive operations such as large database probing or complex 3D renderings that are used to produce simulations that are compared against unknown targets. These operations would achieve a significant improvement in speed performance if the target poses were known in advance. A study that addresses the problem of estimating the poses of vehicles in SAR images is reported in this paper. A pose estimation algorithm suite is proposed that is based on a set of partially independent criteria. A statistical analysis of the performance obtained by employing the established criteria, both individually and in combination, is also conducted and the results are comparatively discussed.
The focus of this paper is recognizing articulated vehicles and actual vehicle configuration variants in real SAR images from the MSTAR public data. Using SAR scattering center locations and magnitudes as features, the invariance of these features is shown with articulation (i.e. turret rotation for the T72 tank and ZSU 23/4 gun), with configuration variants and with a small change in depression angle. This scatterer location and magnitude quasi-invariance (e.g. location within one pixel, magnitude within about ten percent in radar cross-section) is used as a basis for development of a SAR recognition engine that successfully identified real articulated and non-standard configuration vehicles based on non-articulated, standard recognition models. Identification performance results are presented as vote space scatter plots and ROC curves for configuration variants, for articulated objects and for a small change in depression angle with the MSTAR data.
Synthetic Aperture Radar (SAR) sensors have many advantages over electro-optical sensors (EO) for target recognition applications, such as range-independent resolution and superior poor weather performance. However, the relative unavailability of SAR data to the basic research community has retarded analysis of the fundamental invariant properties of SAR sensors relative to the extensive invariant literature for EO, and in particular photographic sensors. Prior work that was reported at this conference has developed the theory of SAR invariants based on the radar scattering center concept. This paper will give several examples of invariant configurations of SAR scatterers from measured SAR image data.
Recognition of targets in synthetic aperture radar (SAR) imagery is approached from the viewpoint of an optimization problem. Features are extracted from SAR target images and are treated as point sets. The matching problem is formulated as a non-linear objective function to maximize the number of matched features and minimize the distance between features. The minimum of this function is found using a deterministic annealing process. Registration is performed iteratively by using an analytically computed minimum at each temperature of the annealing. Thus, the images do not need to be initially registered as any translational error between them is solved for as part of the optimization. We have also extended the initial objective function to incorporate multiple feature classes. This matching method is robust to spurious, missing and migrating features. Matching results are presented for simulated XPATCH and real MSTAR SAR target imagery demonstrating the utility of this approach.
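A hedged sketch of the annealing idea: soft correspondences between model and data point sets are computed with a temperature-weighted Gibbs kernel, the translation is re-estimated in closed form at each temperature, and the temperature is lowered. This softassign-style loop follows the spirit of the approach, not the authors' exact objective function.

```python
# Softassign-style deterministic annealing point matching (illustrative).
import numpy as np

def anneal_match(model_pts, data_pts, T0=10.0, Tmin=0.1, rate=0.9):
    t = np.zeros(2)                                   # translation estimate
    T = T0
    while T > Tmin:
        shifted = model_pts + t
        d2 = ((shifted[:, None, :] - data_pts[None, :, :]) ** 2).sum(-1)
        M = np.exp(-d2 / T)                           # soft match matrix
        M /= M.sum(axis=1, keepdims=True) + 1e-12     # row-normalise
        # Closed-form (weighted least-squares) update of the translation.
        w = M.sum()
        t = (M[..., None] * (data_pts[None] - model_pts[:, None])).sum((0, 1)) / w
        T *= rate                                     # cool the temperature
    return t, M

model = np.random.rand(20, 2) * 50
data = model + np.array([3.0, -2.0]) + 0.2 * np.random.randn(20, 2)
t_hat, _ = anneal_match(model, data)
print("estimated translation:", np.round(t_hat, 2))   # should be near [3, -2]
```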
A model-based system employing a computationally intelligent search strategy has been developed for classifying military vehicles in SAR imagery. The system combines pose detection, Evolutionary Programming (EP) methods, and Geometric Hashing (GH). The design is based on an information filtering process that progressively narrows the scope of the problem space while maximizing the likelihood of success. While the current system has been trained to identify 12 military vehicles, the architecture is extensible to additional vehicle types.
The DARPA/AFRL 'Moving and Stationary Target Acquisition and Recognition' (MSTAR) program is developing a model-based vision approach to Synthetic Aperture Radar (SAR) Automatic Target Recognition (ATR). The motivation for this work is to develop a high performance ATR capability that can identify ground targets in highly unconstrained imaging scenarios that include variable image acquisition geometry, arbitrary target pose and configuration state, differences in target deployment situation, and strong intra-class variations. The MSTAR approach utilizes radar scattering models in an on-line hypothesize-and-test operation that compares predicted target signature statistics with features extracted from image data in an attempt to determine a 'best fit' explanation of the observed image. Central to this processing paradigm is the Search algorithm, which provides intelligent control in selecting features to measure and hypotheses to test, as well as in making the decision about when to stop processing and report a specific target type or clutter. Intelligent management of computation performed by the Search module is a key enabler to scaling the model-based approach to the large hypothesis spaces typical of realistic ATR problems. In this paper, we describe the present state of design and implementation of the MSTAR Search engine, as it has matured over the last three years of the MSTAR program. The evolution has been driven by a continually expanding problem domain that now includes 30 target types, viewed under arbitrary squint/depression, with articulations, reconfigurations, revetments, variable background, and up to 30% blocking occlusion. We believe that the research directions that have been inspired by MSTAR's challenging problem domain are leading to broadly applicable search methodologies that are relevant to computer vision systems in many areas.
Synthetic Aperture Radar (SAR) image modeling tools are of high interest to Automatic Target Recognition algorithm evaluation because they allow the testing of ATR's over a wide range of extended operating conditions (EOCs). Typically, extended operating conditions include target aspect, target configuration, target obscuration, and background terrain variations. This paper discusses enhancements to the legacy XpatchES model and the development of a new integrated SAR prediction toolset for targets in realistic 3-D terrain settings. The prediction toolset includes 2-D and 3-D visual target configuration and scene layout capabilities, a terrain elevation profile interface, and scattering-center-based on-line target prediction. The toolset allows the insertion of synthetic target chips into both measured and synthetic background models. For the synthetic background case a 3-D terrain model is used to accurately compute local slope, shadowing and layover effects. Additionally, natural clutter returns are modeled from a statistical database derived from measured data. Model enhancements and sample imagery are presented.
Synthetic Aperture Radar (SAR) image modeling tools are of high interest to Automatic Target Recognition (ATR) algorithm evaluation because they allow the testing of ATRs over a wider range of extended operating conditions (EOCs). Typical EOCs include target aspect, target configuration, target obscuration, and background terrain variations. Since the phenomenology fidelity of the synthetic prediction techniques is critical for ATR evaluation, metric development for complex scene prediction is needed for accurate ATR performance estimation. An image domain hybrid prediction technique involves the insertion of a synthetic target chip into a measured image background. Targets in terrain scenes will be predicted and compared with similar measured data scenarios. Shadow region histograms and terrain region histograms will be used to develop some first generation metrics for phenomenology validation of hybrid SAR prediction techniques.
Model-based target recognition is a crucial element of any environment in which an extensive set of measured data is not available. With recent advancements in computational electromagnetics, model-based synthetic data has become the cornerstone of many radar based ATR systems. Due to time considerations, synthetic signatures are often generated using a time domain shooting and bouncing ray (SBR) technique. Model-based synthetic signatures closely match most features found in measured data and enable the evaluation of multiple target aspects, target configurations, and articulation in a variety of potential settings. However, there do exist discrepancies between synthetic signatures and measured signatures caused by various phenomena. These phenomena include nondeterministic reflectance characteristics, dynamic components, articulating control surfaces, CAD modeling error, and resonant components. This paper presents methods that decompose the SBR signature into a set of specific SBR histories in the form of a ray path tree (RPT). This decomposition enables observed signal errors to be mapped directly onto the target model components. In addition, the RPT defines an enhanced visual traceback capability and real-time signature generation. This paper will describe the algorithms used to form the RPT. Analysis of several CAD models using this tool will be presented. Conclusions on the applicability and limitations of the RPT will be drawn.
An important problem driving much research in the SAR and model-based ATR communities is the generation and modification of target models for ATR system databases. We propose a method for generating or updating 3-D reflector primitive target models. We utilize an existing 2-D extraction algorithm to extract feature locations and classifications (such as scattering primitive type) from each image in a set of SAR data. We formulate the 3-D model generation in terms of a data association problem. We present an iterative algorithm, based on the expectation-maximization (EM) method, to solve the data association problem and yield a maximum likelihood estimate of target feature locations and types from the set of 2-D extracted features. Finally, we present examples and results for sets of simulated SAR imagery.
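The data-association step can be caricatured with a much-reduced EM loop: extracted 2-D feature locations are treated as noisy observations of a few latent scatterer positions, and responsibilities and positions are updated alternately. The 2-D reduction, isotropic Gaussian noise model, and fixed scatterer count are simplifying assumptions, not the paper's formulation.

```python
# Greatly simplified EM-style data association over extracted features.
import numpy as np

def em_associate(features, n_scatterers, sigma=1.0, iters=30, seed=0):
    rng = np.random.default_rng(seed)
    mu = features[rng.choice(len(features), n_scatterers, replace=False)]
    for _ in range(iters):
        # E-step: responsibility of each latent scatterer for each feature.
        d2 = ((features[:, None, :] - mu[None, :, :]) ** 2).sum(-1)
        resp = np.exp(-0.5 * d2 / sigma ** 2)
        resp /= resp.sum(axis=1, keepdims=True) + 1e-12
        # M-step: re-estimate scatterer locations as responsibility-weighted means.
        mu = (resp[..., None] * features[:, None, :]).sum(0) / \
             (resp.sum(0)[:, None] + 1e-12)
    return mu, resp

rng = np.random.default_rng(1)
truth = np.array([[0.0, 0.0], [10.0, 5.0], [4.0, 12.0]])
obs = np.vstack([t + 0.5 * rng.standard_normal((25, 2)) for t in truth])
est, _ = em_associate(obs, n_scatterers=3)
print(np.round(est, 1))
```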
We present a technique to extract the three-dimensional (3-D) bistatic scattering center model of a target at microwave frequencies from its CAD model. The method is based on the shooting and bouncing ray (SBR) technique and is an extension of our previous work on extracting the monostatic 3-D scattering center model of complex targets. Using SBR, we first generate the bistatic 3-D radar image of the target based on a one-look inverse synthetic aperture radar (ISAR) algorithm. Next, we use the image processing algorithm CLEAN to extract the 3-D position and strength of the scattering centers from the bistatic radar image. We test the algorithm by extracting bistatic 3-D scattering centers from several test targets and reconstructing bistatic signatures (RCS, range profile, ISAR imagery) using the bistatic scattering centers.
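A minimal CLEAN-style extraction sketch follows: the brightest pixel is recorded as a scattering center and an assumed point-spread response is subtracted, repeatedly. The Gaussian PSF, the loop gain, and the 2-D (rather than 3-D bistatic) image are assumptions made only for illustration.

```python
# Toy CLEAN-style scattering-center extraction on a 2-D magnitude image.
import numpy as np

def clean_extract(image, n_centers=5, psf_sigma=1.5, gain=1.0):
    img = image.astype(float).copy()
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    centers = []
    for _ in range(n_centers):
        r, c = np.unravel_index(np.argmax(img), img.shape)
        amp = img[r, c]
        centers.append((r, c, amp))
        # Subtract a scaled (assumed Gaussian) PSF centred on the detected peak.
        psf = np.exp(-((ys - r) ** 2 + (xs - c) ** 2) / (2 * psf_sigma ** 2))
        img -= gain * amp * psf
        np.clip(img, 0.0, None, out=img)
    return centers

toy = np.zeros((64, 64))
for (r, c, a) in [(10, 20, 5.0), (40, 45, 3.0), (30, 10, 2.0)]:
    toy[r, c] = a
toy += 0.05 * np.random.rand(64, 64)
print(clean_extract(toy, n_centers=3))
```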
The MSTAR automatic target recognition (ATR) system recognizes targets by matching features predicted from a CAD model against features extracted from the unknown signature. In addition to generating signature features with high fidelity, the online Predictor in the MSTAR system must provide information that assists in efficient search of the hypothesis space as well as accounting for uncertainties in the prediction process. In this paper, we describe two capabilities implemented in the MSTAR Predictor to support this process. The first exploits the inherent traceback between predicted features and the CAD model that is integral to the predictor to enable component-wise scoring of candidate hypotheses. The second is the generation of probability density functions that characterize the fluctuation of amplitudes in the predicted signatures. The general approach for both of these is described, and example results are presented.
We analyze the use of the beta distribution for the statistical characterization of the radar cross-section (RCS) of a complex target. Analysis consists of first generalizing a complex target as a set of component scatterers, each with a constant component RCS and a phase characterized by a uniform random variable. From this set of target-based component scatterers, estimates of the moments of the implied probability distribution function (pdf) on the RCS response of the full target are gathered, and used to fit a beta distribution. Two distinct methods of fitting the beta distribution are compared against the results of Monte-Carlo analysis over a variety of component scatterer sets. This comparison leads to estimates of the accuracy of each method of generating moments for the fitting of the beta distribution, and further, leads to the characterization of pathological cases for the use of the beta distribution in modeling complex target RCS. Resulting methods for the modeling of the RCS of a complex target are discussed in the context of model-based SAR ATR applications.
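The moment-fitting step can be sketched directly: Monte-Carlo the RCS of fixed-amplitude scatterers with uniform random phases, normalize by the coherent maximum so the support is [0, 1], and solve the beta parameters from the first two moments. The normalization convention and component amplitudes are assumptions for illustration.

```python
# Method-of-moments beta fit to simulated complex-target RCS (illustrative).
import numpy as np

def fit_beta_from_rcs(amplitudes, n_trials=20000, seed=1):
    rng = np.random.default_rng(seed)
    a = np.asarray(amplitudes, dtype=float)
    phases = rng.uniform(0.0, 2.0 * np.pi, size=(n_trials, len(a)))
    field = (a * np.exp(1j * phases)).sum(axis=1)
    rcs = np.abs(field) ** 2 / a.sum() ** 2          # normalise to [0, 1]
    m, v = rcs.mean(), rcs.var()
    common = m * (1.0 - m) / v - 1.0                 # method of moments
    return m, v, common * m, common * (1.0 - m)      # mean, var, alpha, beta

mean, var, alpha, beta = fit_beta_from_rcs([1.0, 0.8, 0.5, 0.3])
print(f"mean={mean:.3f} var={var:.4f} alpha={alpha:.2f} beta={beta:.2f}")
```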
A probabilistic backscatter coefficient generating function (CGF) is introduced which produces realistic backscatter coefficient values for various terrain types over all incidence angles. The CGF was developed in direct support of a multi-layer 3-D clutter modeling effort which successfully incorporated probabilistic clutter reflectivity characteristics and measured terrain elevation data to enhance clutter suppression and improve Signal-to-Clutter Ratio performance in radar applications. This probabilistic clutter modeling approach is in sharp contrast to traditional 2-D modeling techniques which typically include deterministic backscatter characteristics and assume constant terrain features within regions of interest. The functional form and parametric representation of the CGF were empirically determined by comparison with published backscatter data for nine different terrain 'types,' including soil and rock, shrubs, trees, short vegetation, grasses, dry snow, wet snow, road surfaces, and urban areas. The statistical properties of the output, i.e., the mean and standard deviation, match published measured values to the number of significant figures reported. Likewise, the CGF output frequency of occurrence closely matches measured terrain data frequency of occurrence; a Chi-Square test fails to reject the method at a 0.05 level of significance, indicating a high level of confidence in the results. As developed, the CGF provides a computationally efficient means for incorporating probabilistic clutter characteristics into both simple and complex radar models by accurately reflecting the probabilistic scattering behavior associated with real terrain.
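A purely illustrative stand-in for such a generating function is sketched below; the functional form and every numerical value are placeholders, not the empirically fitted parameters reported in the paper.

```python
# Placeholder backscatter-coefficient generator: draw sigma-0 (dB) from a
# normal whose mean falls off with incidence angle. All numbers are invented.
import numpy as np

# Hypothetical (mean_at_nadir_dB, slope_dB_per_deg, std_dB) per terrain type.
TERRAIN_PARAMS = {
    "grass": (-8.0, -0.15, 2.5),
    "trees": (-6.0, -0.10, 3.0),
    "road":  (-14.0, -0.25, 2.0),
}

def generate_sigma0(terrain, incidence_deg, rng=None):
    rng = rng or np.random.default_rng()
    mean0, slope, std = TERRAIN_PARAMS[terrain]
    return rng.normal(mean0 + slope * incidence_deg, std)

rng = np.random.default_rng(0)
samples = [generate_sigma0("grass", 40.0, rng) for _ in range(5)]
print(np.round(samples, 1))   # sigma-0 draws in dB for grass at 40 degrees
```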
Fundamental to the model-based paradigm of an Automatic Target Recognition (ATR) system is an accurate representation (a model) of the physical objects to be recognized. Detailed CAD models of targets of interest can be created using photographs, blueprints, and other intelligence sources. When created this way, the target CAD models are necessarily specific to a particular realization of the vehicle (namely, the serial number of the vehicle from which the CAD model was validated). Under realistic battlefield conditions, variations across targets of the same type (e.g., T72) may be quite drastic and may manifest themselves as significant differences in the sensor signatures. Given this variability between targets of the same type, the example CAD model, or 'exemplar' model, may not provide an adequate representation of the vehicle across the entire class. This paper discusses the development of class models for use in a model-based ATR for synthetic aperture radar (SAR). It documents the propagation of variability information into feature uncertainty, and comments on the performance of class models in the Moving and Stationary Target Acquisition and Recognition (MSTAR) model-based ATR system.
What makes synthetic aperture radar (SAR) automatic target recognition (ATR) hard? This question is explored by reviewing target, environment, and sensor variability and how they affect SAR images and target recognition in SAR images. Each of these categories of operating conditions (OCs) is reviewed first with a wide open 'real world' scope and then comparing that to the extensive MSTAR SAR data collections. The target OC review considers increasingly fine target categories, from class to type to versions, and then configuration, articulation, damage, and moving-part variants. The environment OC review considers topological properties that might affect 6-DOF pose and terrain obscuration; volumetric (vegetation, snow, ...) and surface scattering properties; and occlusion, layover, and adjacency issues. The sensor OC review is limited to outlining the important properties of SAR imaging systems, such as their variation in frequency, PRF, BW, polarization, depression, squint, SNR, etc. SAR and optical images are used to illustrate OC dimensions. This review is limited to open literature sources and is from an ATR rather than a domain expert perspective. However, this space of SAR ATR OCs will eventually need to be understood before we will truly know the SAR and ATR problem.
An objective of the ATR community is to automate steps of the image exploitation process by using computers and machine-based reasoning systems to interpret, classify and characterize images; the primary performance goals being increased accuracy, reduced interpretation timelines, and vastly increased imagery throughput volume. The central theme of this paper is to explore the relationship between complexity, structure, homogeneity and ATR difficulty in order to develop an image characterization system that is machine-based (ATR difficulty/complexity), in contrast to existing human-based (scene-content) systems.
Target recognition research for Synthetic Aperture Radar (SAR) has been made easier with the introduction of target chip sets. The target chips typically are of good quality and consist of three regions: target, shadow and background clutter. Target chip sets allow recognition researchers to bypass the quality filtering and detection phases of the automatic recognition process. Thus, the researcher can focus on segmentation and matching techniques. A manual segmentation process using supervised quality control is introduced in this paper. Using 'goodness of fit' measures, the quality of manual segmentation on SAR target chips is presented. Using the expected metrics associated with the manual segmentation process, the performance of automated segmentation techniques can be evaluated. The approach of using manual segmentation to evaluate the performance of automated segmentation techniques is presented by demonstrating the results on a simple automated segmentation technique that incorporates speckle removal and segmentation.
New advanced imaging systems will soon be capable of collecting enormous volumes of imagery, placing a significant burden on the imagery analysts (IAs) that exploit these data. ATRs and other image understanding tools offer a way to assist IAs in exploiting large volumes of imagery more effectively and efficiently. The Defense Advanced Research Project Agency (DARPA) Semi-Automated IMINT Processing (SAIP) Program focuses on these technologies to assist IAs in the timely exploitation of SAR imagery. The SAIP system is an integrated set of imagery exploitation tools designed to improve the capability of the IA to support military missions in a tactical environment. To assess the utility of the SAIP technology, a mix of live and playback exercises was conducted. IAs exploited the imagery with the assistance of the SAIP technology. As a benchmark for comparison, the same imagery was exploited in an operational exploitation system without the benefit of SAIP assistance. This paper presents the methodology for assessing exploitation performance and discusses issues related to scoring exploitation performance. The results of a recent assessment event illustrate the issues and provide guidance for future work in this area.
MSTAR is a SAR ATR exploratory development effort and has devoted significant resources to regular independent evaluations. This paper will review the current state of the MSTAR evaluation methodology. The MSTAR evaluations have helped bring into focus a number of issues related to SAR ATR evaluation (and often ATR evaluation in general). The principles from MSTAR's three years of evaluations are explained and evaluation specifics, from the selection of test conditions and figures-of-merit to the development of evaluation tools, are reported. MSTAR now has a more mature understanding of the critical aspects of independence in evaluation and of the general relationship between evaluation and the program's goals and the systems engineering necessary to meet those goals. MSTAR has helped to develop general concepts, such as assessing ATR extensibility and scalability. Other specific contributions to evaluation methods, such as nuances in figure-of-merit definitions, are also detailed. In summary, this paper describes the MSTAR framework for the design, execution, and interpretation of SAR ATR evaluations.
Similarity between model targets plays a fundamental role in determining the performance of target recognition. We analyze the effect of model similarity on the performance of a vote-based approach for target recognition from SAR images. In such an approach, each model target is represented by a set of SAR views sampled at a variety of azimuth angles and a specific depression angle. Both model and data views are represented by locations of scattering centers, which are peak features. The model hypothesis (view of a specific target and associated location) corresponding to a given data view is chosen to be the one with the highest number of data-supported model features (votes). We address three issues in this paper. Firstly, we present a quantitative measure of the similarity between a pair of model views. Such a measure depends on the degree of structural overlap between the two views, and the amount of uncertainty. Secondly, we describe a similarity-based framework for predicting an upper bound on recognition performance in the presence of uncertainty, occlusion and clutter. Thirdly, we validate the proposed framework using MSTAR public data, which are obtained under different depression angles, configurations and articulations.
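The voting step can be sketched as follows, counting model scattering centers that find a data feature within a one-pixel location tolerance and a loose magnitude tolerance; the feature layout and library contents below are illustrative assumptions.

```python
# Vote-based hypothesis scoring over scattering-center features (illustrative).
import numpy as np

def votes(model_feats, data_feats, loc_tol=1.0, mag_tol=0.10):
    """Each feature row is (row, col, magnitude)."""
    count = 0
    for r, c, m in model_feats:
        d_loc = np.hypot(data_feats[:, 0] - r, data_feats[:, 1] - c)
        d_mag = np.abs(data_feats[:, 2] - m) / (m + 1e-9)
        # A model feature is data-supported if some data feature is close
        # in both location and relative magnitude.
        if np.any((d_loc <= loc_tol) & (d_mag <= mag_tol)):
            count += 1
    return count

def recognize(model_library, data_feats):
    scores = {name: votes(m, data_feats) for name, m in model_library.items()}
    return max(scores, key=scores.get), scores

library = {
    "T72":    np.array([[10, 12, 1.0], [14, 20, 0.8], [22, 18, 0.6]]),
    "ZSU234": np.array([[11, 30, 0.9], [18, 25, 0.7], [25, 33, 0.5]]),
}
data = np.array([[10.4, 12.3, 0.97], [14.2, 19.8, 0.82], [22.5, 17.6, 0.63]])
print(recognize(library, data))
```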
Automatic target recognition systems often have parameters that are estimated using training data. These parameters are then used in an implementation of the system as if they are the true parameters. We analyze the degradation in performance of such systems as a function of the size of the training sets. The training sets consist of independent and identically distributed copies of the data given the target type. The ideal performance is determined by the true parameters and is characterized in terms of a receiver operating characteristic (ROC) for a two-target problem. For a finite-sized training set the ROC curves fall below the ideal and converge to the ideal as the size of the training sets grows. Since in practical systems we have only a very limited amount of training data, it is desirable to quantify the degradation based on the size of the training sets. This will allow a prediction of the difference between performance obtained empirically and the optimal performance. Laplace approximations for the performance are explored. We study a Gaussian model in detail.
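The effect can be reproduced in miniature with a scalar two-class Gaussian problem: the plug-in detector thresholds at the midpoint of the estimated class means, and its performance against the true distributions is averaged over many training draws. The unit-variance scalar model and midpoint rule are simplifying assumptions, not the paper's Laplace-approximation analysis.

```python
# Monte-Carlo look at plug-in detector degradation vs. training-set size.
import numpy as np
from scipy.stats import norm

MU0, MU1, SIGMA = 0.0, 2.0, 1.0     # true class means and common std

def plugin_performance(n_train, n_runs=2000, rng=None):
    rng = rng or np.random.default_rng(0)
    pd, pfa = [], []
    for _ in range(n_runs):
        m0 = rng.normal(MU0, SIGMA, n_train).mean()   # estimated class-0 mean
        m1 = rng.normal(MU1, SIGMA, n_train).mean()   # estimated class-1 mean
        thresh = 0.5 * (m0 + m1)                      # plug-in midpoint threshold
        pd.append(1.0 - norm.cdf(thresh, MU1, SIGMA)) # true Pd at that threshold
        pfa.append(1.0 - norm.cdf(thresh, MU0, SIGMA))
    return np.mean(pd), np.mean(pfa)

ideal_pd = 1.0 - norm.cdf(0.5 * (MU0 + MU1), MU1, SIGMA)
print(f"ideal Pd at midpoint threshold: {ideal_pd:.3f}")
for n in (2, 5, 20, 100):
    pd, pfa = plugin_performance(n)
    print(f"n={n:3d}  mean Pd={pd:.3f}  mean Pfa={pfa:.3f}")
```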
This paper compares bootstrap techniques with prior probability synthetic data balancing to determine which method is more effective for SAR target recognition. A bootstrap method resamples from the original target data to present more target examples to the ATR for training. Prior probability synthetic data balancing prevents the double counting of information by just resampling the smaller set. However, prior probability synthetic data balancing necessitates equivalent distributions from data sets which reduces the data set to the size of the smaller set. A new type of receiver operating characteristic (ROC) curve, based on varying the proportion of target data in the data set is presented to compare the two methods. The paper demonstrates the implementation of the data balancing of two targets from the Moving and Stationary Target Acquisition and Recognition (MSTAR) data set using an entropy metric for target classification.
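The two balancing strategies can be contrasted in a few lines: bootstrap the scarce class up with replacement, or subsample the plentiful class down. The class sizes and feature dimensions below are arbitrary placeholders.

```python
# Bootstrap up-sampling vs. subsampling for class balance (toy illustration).
import numpy as np

rng = np.random.default_rng(0)
class_a = rng.normal(0.0, 1.0, size=(500, 8))    # plentiful target chips
class_b = rng.normal(1.0, 1.0, size=(120, 8))    # scarce target chips

# Bootstrap: resample class_b with replacement up to the size of class_a.
boot_idx = rng.integers(0, len(class_b), size=len(class_a))
class_b_boot = class_b[boot_idx]

# Prior-probability balancing: subsample class_a down to the size of class_b.
sub_idx = rng.choice(len(class_a), size=len(class_b), replace=False)
class_a_sub = class_a[sub_idx]

print("bootstrap-balanced sizes:", len(class_a), len(class_b_boot))
print("subsample-balanced sizes:", len(class_a_sub), len(class_b))
```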
A physics-based target detection theory is developed for spotlight-mode operation of a synthetic aperture radar (SAR). The target return and clutter return models are constructed from electromagnetic scattering theory. A polar-format Fourier-transform processor for spotlight-mode SAR images with adjustable processing durations, and a Neyman-Pearson optimum whitening-filter processor for single-component targets are considered. Detection performance for these spotlight-mode SAR processors is compared with that for corresponding stripmap-mode SAR processors. Target detection theory for multi-component targets is also developed. For two conditions -- unknown component phases, and unknown component phases with uncertain component positions -- we develop likelihood-based detectors and numerically evaluate the associated receiver operating characteristics.
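A whitening-filter detector for a known signature in colored Gaussian clutter can be sketched as below; the AR(1) clutter covariance and the toy signature are assumptions used only to make the example concrete.

```python
# Whitened matched filter: whiten data and signature with the clutter
# covariance, then correlate (illustrative single-component detector).
import numpy as np

def whitened_matched_filter(x, s, clutter_cov):
    L = np.linalg.cholesky(clutter_cov)
    s_w = np.linalg.solve(L, s)              # whitened signature
    x_w = np.linalg.solve(L, x)              # whitened data
    return s_w @ x_w / np.linalg.norm(s_w)   # normalised test statistic

n, rho = 64, 0.9
cov = rho ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))  # AR(1)
sig = np.zeros(n)
sig[20:24] = 3.0                              # toy target signature
rng = np.random.default_rng(0)
clutter = np.linalg.cholesky(cov) @ rng.standard_normal(n)
print("H0 statistic:", round(whitened_matched_filter(clutter, sig, cov), 2))
print("H1 statistic:", round(whitened_matched_filter(clutter + sig, sig, cov), 2))
```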
This paper continues the development of a fundamental, algorithm-independent view of the ATR performance that can be achieved using SAR data. Such ATR performance predictions are intended to enable evaluation of performance tradeoffs for SAR designs, including both parameter selections (e.g., bandwidth and transmit power) and added domains of SAR observation, such as 3-D, full polarimetry, aspect diversity, and/or frequency diversity. Using a Bayesian framework, we show target classification performance predictions for two tactical targets (either stationary with radar netting assumed deployed, or moving) using three different domains of observation: 1-D HRR (high-range-resolution radar), 2-D SAR, and 3-D SAR. Comparisons of the three domains are made at 3m, 1m, 0.5m and 0.3m range and cross-range resolutions. The discussion of 3-D SAR includes parameter tradeoffs of various height resolutions at the target, and various numbers of sensors. For each measurement modality, we list some of the unique sensitivities which could cause performance degradations.
We present a method for estimating classification performance of a model-based synthetic aperture radar (SAR) automatic target recognition (ATR) system. Target classification is performed by comparing a feature vector extracted from a measured SAR image chip with a feature vector predicted from a hypothesized target class and pose. The feature vectors are matched using a Bayes likelihood metric that incorporates uncertainty in both the predicted and extracted feature vectors. We adopt an attributed scattering center model for the SAR features. The scattering attributes characterize frequency and angle dependence of each scattering center in correspondence with the geometry of its physical scattering mechanism. We develop two Bayes matchers that incorporate two different solutions to the problem of correspondence between predicted and extracted scattering centers. We quantify classification performance with respect to the number of scattering center features. We also present classification results when the matchers assume incorrect feature uncertainty statistics.
This paper investigates the use of Boundary Methods (BMs), a collection of tools used for distribution analysis, as a method for estimating the number of modes associated with a given data set. Model order information of this type is required by several pattern recognition applications. The BM technique provides a novel approach to this parameter estimation problem and is comparable in terms of both accuracy and computations to other popular mode estimation techniques currently found in the literature and automatic target recognition applications. This paper explains the methodology used in the BM approach to mode estimation. It also briefly reviews other common mode estimation techniques and describes the empirical investigation used to explore the relationship of the BM technique to other mode estimation techniques. Specifically, the accuracy and computational efficiency of the BM technique are compared quantitatively to a mixture-of-Gaussians (MOG) approach and a k-means approach to model order estimation. The stopping criterion for the MOG and k-means techniques is the Akaike Information Criterion (AIC).
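The MOG-with-AIC baseline mentioned here can be sketched with scikit-learn: fit mixtures of increasing order and keep the order with the smallest AIC. The two-mode toy data set is an assumption.

```python
# Mixture-of-Gaussians model-order estimation by minimum AIC (illustrative).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
data = np.vstack([rng.normal(0.0, 1.0, (200, 2)),
                  rng.normal(5.0, 1.0, (200, 2))])   # two well-separated modes

aics = []
for k in range(1, 6):
    gmm = GaussianMixture(n_components=k, random_state=0).fit(data)
    aics.append(gmm.aic(data))
    print(f"k={k}  AIC={aics[-1]:.1f}")
print("estimated number of modes:", int(np.argmin(aics)) + 1)
```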
In this paper we present a method of analysis of model based automatic target recognition (ATR) algorithms, as a function of a number of important parameters of the system, including the number and size of the models, the correlations between models, the expected probability of detection of features, the rates of occurrence of unpredicted features, and the spatial resolution of the predicted features, as defined by a local spatial feature density. Analytical results for a two class problem are presented as a function of between-class correlation and feature localization accuracy.
We have been studying information theoretic measures, entropy and mutual information, as performance metrics for object recognition given a standard suite of sensors. Our work has focused on performance analysis for the pose estimation of ground-based objects viewed remotely via a standard sensor suite. Target pose is described by a single angle of rotation using a Lie group parameterization: O ∈ SO(2), the group of 2 × 2 rotation matrices. Variability in the data due to the sensor by which the scene is observed is statistically characterized via the data likelihood function. Taking a Bayesian approach, the inference is based on the posterior density, constructed as the product of the data likelihood and the prior density for object pose. Given multiple observations of the scene, sensor fusion is automatic in the joint likelihood component of the posterior density. The Bayesian approach is consistent with the source-channel formulation of the object recognition problem, in which parameters describing the sources (objects) in the scene must be inferred from the output (observation) of the remote sensing channel. In this formulation, mutual information is a natural performance measure. In this paper we consider the asymptotic behavior of these information measures as the signal to noise ratio (SNR) tends to infinity. We focus on the posterior entropy of the object rotation angle conditioned on image data. We consider single and multiple sensor scenarios and present quadratic approximations to the posterior entropy. Our results indicate that for broad ranges of SNR, low dimensional posterior densities in object recognition estimation scenarios are accurately modeled asymptotically.
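As a toy illustration of posterior entropy versus SNR, the sketch below reduces the object to a unit vector u(theta) observed in Gaussian noise, forms the posterior over a grid of angles under a uniform prior, and reports its discrete entropy for several signal amplitudes; the scalar observation model is an assumption, not the paper's sensor model.

```python
# Posterior entropy of a rotation angle vs. an SNR proxy (toy illustration).
import numpy as np

def posterior_entropy(snr_amplitude, theta_true=0.7, sigma=1.0, n_grid=720,
                      rng=None):
    rng = rng or np.random.default_rng(0)
    grid = np.linspace(-np.pi, np.pi, n_grid, endpoint=False)
    u = lambda th: snr_amplitude * np.array([np.cos(th), np.sin(th)])
    y = u(theta_true) + rng.normal(0.0, sigma, 2)          # noisy observation
    log_post = np.array([-np.sum((y - u(th)) ** 2) / (2 * sigma ** 2)
                         for th in grid])
    log_post -= log_post.max()
    post = np.exp(log_post)
    post /= post.sum()                                     # uniform-prior posterior
    return -(post * np.log(post + 1e-300)).sum()           # entropy in nats

for amp in (0.5, 2.0, 8.0, 32.0):
    print(f"amplitude (SNR proxy) {amp:5.1f}  posterior entropy "
          f"{posterior_entropy(amp):.2f} nats")
```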