A new concept, the generalized Keystone transform, is defined; it can remove range migration of any order. Two new imaging algorithms are derived from the second-order generalized Keystone transform: one is suited to broadside SAR, and the other is suited to side-looking SAR. Processing of real data shows that both are effective SAR imaging algorithms.
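The core of a (first-order) Keystone transform is a per-range-frequency rescaling of slow time, which removes the frequency dependence of linear range walk. The sketch below illustrates this with numpy; all radar parameters are hypothetical, and linear interpolation stands in for a proper band-limited interpolator:

```python
import numpy as np

c0 = 3e8
fc = 400e6                              # carrier frequency (hypothetical)
fr = np.linspace(-100e6, 100e6, 64)    # range-frequency bins
tm = np.linspace(-0.5, 0.5, 257)       # slow time, s
v = 3.0                                # radial velocity, m/s (linear RCM)

# linear range walk: the slow-time phase rate depends on fc + fr
s = np.exp(-1j * 4*np.pi * (fc + fr[:, None]) * v * tm[None, :] / c0)

def keystone(sig, fr, fc, tm, order=1):
    """Order-k Keystone transform: resample each range-frequency row at
    t = (fc/(fc+f))**(1/order) * tau, so the order-k slow-time phase term
    becomes independent of the range frequency f."""
    out = np.empty_like(sig)
    for i, f in enumerate(fr):
        t_src = (fc / (fc + f)) ** (1.0 / order) * tm
        out[i] = np.interp(t_src, tm, sig[i].real) \
                 + 1j * np.interp(t_src, tm, sig[i].imag)
    return out

sk = keystone(s, fr, fc, tm)

# away from the aperture edges, every row now carries the same phase history
mid = np.abs(tm) <= 0.35
spread_before = np.abs(s[:, mid] - s[len(fr)//2, mid]).max()
spread_after = np.abs(sk[:, mid] - sk[len(fr)//2, mid]).max()
```

Before the transform the rows of the range-frequency/slow-time matrix disagree strongly (range migration); after it they agree up to interpolation error.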
The wavenumber-domain imaging algorithm can handle foliage-penetrating ultra-wideband synthetic aperture radar (FOPEN UWB SAR) imaging. Stolt interpolation plays a key role in the algorithm; it is a non-uniform interpolation problem for which no fast computation algorithm exists. In this paper, a novel 4-4-tap integer wavelet filter is used as the Stolt interpolation basis function, and a fast interpolation algorithm is put forward. The wavelet interpolation involves only addition and shift operations, which are easy to realize in hardware. Real data are processed to show that the wavelet interpolation is valid for FOPEN UWB SAR imaging.
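The Stolt step is a non-uniform resampling from the radial wavenumber grid onto a uniform grid in the output wavenumber. The following sketch shows the structure of that remap with hypothetical parameters; plain linear interpolation stands in for the paper's 4-4-tap integer wavelet filter:

```python
import numpy as np

# wavenumber support (assumed values, for illustration only)
k = np.linspace(200, 400, 512)        # radial wavenumber samples, rad/m
kx = 150.0                            # one cross-range wavenumber bin

# data along k for this kx: a smooth test spectrum
data = np.cos(0.05 * k)

# samples actually live on a nonuniform grid in ky = sqrt(k^2 - kx^2)
ky_nonuniform = np.sqrt(k**2 - kx**2)
ky_uniform = np.linspace(ky_nonuniform[0], ky_nonuniform[-1], 512)

# Stolt remap: for each uniform ky point, fetch the value at the
# corresponding k = sqrt(ky^2 + kx^2)
k_of_ky = np.sqrt(ky_uniform**2 + kx**2)
stolt = np.interp(k_of_ky, k, data)   # linear interp as a stand-in

err = np.abs(stolt - np.cos(0.05 * k_of_ky)).max()
```

In a full imaging chain this resampling is done for every cross-range wavenumber bin; the fast wavelet filter of the paper replaces the interpolation kernel, not the mapping itself.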
In this paper, we propose to unify the Bayesian estimation strategy with statistical regularization-based techniques for image reconstruction by developing the fused Bayesian-regularization (FBR) method for high-resolution estimation of the spatial spectrum pattern (SSP) of the wave field scattered from the probed surface. The problem is posed as required for enhanced radar imaging of remotely sensed scenes via processing of one sampled realization of the SAR trajectory signal. The derived optimal FBR estimator is a nonlinear solution-dependent (thus referred to as adaptive) algorithm that also permits a concise robust simplification to non-adaptive, easy-to-implement imaging techniques. The optimal and robustified suboptimal SSP estimation algorithms imply the formation of second-order sufficient statistics from the SAR trajectory data signals and their smoothing by window operators. A new formalism of such sufficient statistics and windows is developed, explaining their adjustment to the metrics of the solution space, the a priori nonparametric model of the desired SSP, its correlation properties, and the imposed regularization constraints. The advantage of the proposed method is demonstrated through simulations of enhanced SAR images using a family of robustified FBR-based imaging algorithms.
We consider migration based synthetic aperture radar (SAR) imaging
of surfaced or shallowly buried objects using both down-looking and
forward-looking ground penetrating radar (GPR). The well-known
migration approaches devised to image the interior of the earth are
based on wave equations and have been widely and successfully used
in seismic signal processing for oil exploration for decades. They
have exhibited great potential and convenience for imaging
underground objects buried in complicated propagation media.
Compared to the ray-tracing based SAR imaging methods, the migration
based SAR imaging approaches are more suited for the imaging of the
underground objects due to their simple and direct treatment of the
oblique incidence at the air-ground interface and the propagation
velocity variation in the soil. In this paper, we apply the
phase-shift migration approach to both the constant-offset and the
common-shot experimental data collected by the PSI (Planning Systems
Inc.) GPR systems. We also address the spatial aliasing problems
that arise when applying migration to GPR data, and show that spatial zero-padding successfully circumvents them.
Many applications which process radar data, including automatic target recognition and synthetic aperture radar image formation, are based on probabilistic models for the raw or processed data. Often, data collected from distinct directions are assumed to represent independent observations. This assumption is not valid for all data collection scenarios. A range of models can be developed that allow for successively more complex dependencies between measured data, up to deterministic computational electromagnetic models, in which observations from different orientations have a known relationship. We consider models for the autocovariance functions of nonstationary processes defined on a circular domain that fall between these two extremes. We adopt a model of covariance as a linear combination of periodic basis functions and address maximum-likelihood estimation of the coefficients by the method of expectation-maximization (EM). Finally, we apply these estimation methods to SAR image data and demonstrate the results as they apply to target recognition.
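The covariance model described above, a linear combination of periodic basis functions on a circular domain, can be illustrated as follows; for brevity this sketch fits the coefficients by ordinary least squares rather than EM, and all dimensions and coefficient values are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 90                                   # angular samples on the circle
theta = np.linspace(0, 2*np.pi, n, endpoint=False)
d = theta[:, None] - theta[None, :]

# true covariance: linear combination of periodic basis functions + nugget
a_true = np.array([1.0, 0.6, 0.3])       # harmonic coefficients (hypothetical)
C = sum(a * np.cos(k*d) for k, a in enumerate(a_true)) + 0.05*np.eye(n)

# draw realizations of the angular process, form the sample covariance
x = np.linalg.cholesky(C) @ rng.standard_normal((n, 4000))
Cs = x @ x.T / 4000

# recover the basis coefficients from the sample covariance
# (the paper uses maximum likelihood via EM; least squares here as a sketch)
B = np.stack([np.cos(k*d).ravel() for k in range(3)]
             + [np.eye(n).ravel()], axis=1)
coef, *_ = np.linalg.lstsq(B, Cs.ravel(), rcond=None)
```

With enough realizations the fitted coefficients approach the generating ones; the EM machinery of the paper is what makes the estimate principled when only incomplete or indirect observations are available.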
Synthetic Aperture Radar systems are being driven to provide images with ever-finer resolutions. This, of course, requires ever-wider bandwidths to support these resolutions in a number of frequency bands across the microwave (and lower) spectrum. The problem is that the spectrum is already quite crowded with a multitude of users, and a multitude of uses. For a radar system, this manifests itself as a number of 'stay-out' zones in the spectrum mandated by regulatory agencies; frequencies where the radar is not allowed to transmit. Even frequencies where the radar is allowed to transmit might be corrupted by interference from other legitimate (and/or illegitimate) users, rendering these frequencies useless to the radar system. In a SAR image, these spectral holes (by whatever source) degrade images, most notably by increasing objectionable sidelobe levels, most evident in the neighborhood of bright point-like objects. For contiguous spectrums, sidelobes in SAR images are controlled by employing window functions. However, those windows that work well for contiguous spectrums don't seem to work well for spectrums with significant gaps or holes. In this paper we address the question "Can some sorts of window functions be developed and employed to advantage when the spectrum is not contiguous, but contains significant holes or gaps?" A window function that minimizes sidelobe energy can be constructed based on prolate spheroidal wave functions. This approach is extended to accommodate spectral notches or holes, although the guaranteed minimum sidelobe energy can be quite high in this case.
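A window maximizing mainlobe energy concentration is the top eigenvector of the classical concentration kernel, and the same eigenproblem can be restricted to the available (notched) spectral support. A minimal numpy sketch, with assumed length and mainlobe width:

```python
import numpy as np

N, W = 64, 0.04              # window length; mainlobe half-width (cycles/sample)
m = np.arange(N)
d = m[:, None] - m[None, :]
# concentration kernel: w.T A w = fraction of window energy inside |f| <= W
A = np.where(d == 0, 2.0*W, np.sin(2*np.pi*W*d) / (np.pi * np.where(d == 0, 1, d)))

def best_window(support):
    """Top eigenvector of the kernel restricted to the usable spectral
    support: the window (a prolate spheroidal sequence when the support
    is contiguous) maximizing mainlobe energy concentration."""
    As = A[np.ix_(support, support)]
    vals, vecs = np.linalg.eigh(As)
    w = np.zeros(N)
    w[support] = vecs[:, -1]
    return w, vals[-1]

full = np.arange(N)
holed = np.r_[0:24, 40:64]       # a regulatory 'stay-out' notch at bins 24..39

w_full, c_full = best_window(full)
w_holed, c_holed = best_window(holed)
```

The top eigenvalue is the achievable mainlobe energy concentration; by eigenvalue interlacing, restricting the support (punching a hole in the spectrum) can only lower it, which matches the abstract's caveat that the guaranteed minimum sidelobe energy can be quite high in the notched case.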
Using the high-frequency terahertz compact range developed recently for measurement of the polarimetric return of scale models of tactical targets, we have developed several techniques to produce 3D data sets. Fully polarimetric 3D ISAR data have been collected on several 1/16th-scale model tactical targets in free space at individual look angles. The 3D scattering coordinates are calculated by viewing the target through a 2D angular aperture in both azimuth and elevation while simultaneously performing a linear frequency chirp to measure the down-range coordinate. Due to the high frequency of W-band radar, this technique produces high-resolution cross-range images from relatively small (approximately 1 degree) angular integrations. Several techniques for calculation of the 3D coordinates have been developed. In addition to the technique described above, a new method utilizing the phase change of the scattering centers due to differentially small changes in angle will be described. Data collected using this technique can be processed to produce 3D scattering information similar to that obtained by monopulse systems. Results from this analysis will be shown.
This paper discusses reconstruction of three-dimensional surfaces from multiple bistatic SAR images. Techniques for surface reconstruction from multiple monostatic SAR images already exist, including stereo SAR and interferometric processing. We generalize these methods to obtain algorithms for bistatic stereo SAR and bistatic interferometric SAR. We also propose a framework for predicting the performance of our multistatic stereo SAR algorithm, and from this framework, we suggest a metric for use in planning strategic deployment of multistatic assets.
VV- and HH-polarized radar signatures of several ground targets were acquired in the VHF/UHF band (171-342 MHz) by using 1/35th-scale models and an indoor radar range operating from 6 to 12 GHz. Data were processed into medianized radar cross sections as well as focused ISAR imagery. Measurement validation was confirmed by comparing the radar cross section of a test object with a method-of-moments radar cross section prediction code. The signatures of several vehicles from three vehicle classes (tanks, trucks, and TELs) were measured, and a signature cross-correlation study was performed. The VHF/UHF band is currently being exploited for its foliage penetration ability; however, the coarse image resolution that results from the relatively long radar wavelengths suggests a more challenging target recognition problem. One of the study's goals was to determine the amount of unique signature content in VHF/UHF ISAR imagery of military ground vehicles. Open-field signatures are compared with each other as well as with simplified shapes of similar size. Signatures were also acquired on one vehicle in a variety of configurations to determine the impact of minor target variations on the signature content at these frequencies.
Extensive knowledge of scenes is necessary for change detection and mission planning. Unfortunately, InSAR images have zones with full or partial loss of information, e.g. in shadow areas. We have developed an algorithm for registration and fusion of multi-aspect InSAR images to replace shadow zones in the image of the first aspect with corresponding data taken from another aspect. The investigations were carried out using X-band images obtained for two aspects. The two aspect images were recorded from two opposite flight courses. The images were delivered as intensity and unwrapped phase of forest, rural, and urban terrain. The registration is based on the consecutive application of three separate matching algorithms: matching straight lines, matching contour sequences, and matching based on the Fourier-Mellin transform. The different matching algorithms are optimized for working with SAR images of different content. The main peculiarity of the registration algorithm is its method of structurally matching brightness contours calculated by standard edge filters. The shadow borders are excluded from the gradient field by a heuristic algorithm. The contours are extracted by a watershed or maximum-gradient-tracking algorithm. The matching is very robust but rather computationally expensive; thus a hierarchical structural matching is used to decrease the computational complexity. The multi-stage contour matching provides fast and reliable registration and fusion.
Recently, curvelets, finite ridgelets, bandlets, and beamlets have been suggested as transforms that capture more information than traditional wavelet transforms for two or higher dimensional images. In this work, we explore several of these transforms with some new variants. In particular, we study the effectiveness of these transforms in reducing a particular type of noise known as speckle that is present in synthetic aperture radar (SAR) imagery.
We used a higher-order correlation-based method for signal denoising of images corrupted by multiplicative noise. Using the logarithm of an image, we applied a third-order correlation technique for identification of wavelet coefficients that contained mostly signal. In our approach, we examined wavelet coefficients in an environment where the contribution from the second-order moment of the noise had been reduced. Our results compared favorably and were less sensitive to threshold selection when compared to a second-order wavelet denoising method.
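The approach above can be caricatured in one dimension: take the logarithm to make the multiplicative noise additive, transform with a Haar wavelet, and keep detail coefficients whose local third-order moment is significantly nonzero (Gaussian noise has vanishing third moments). Everything below, including the threshold, is an illustrative assumption, not the authors' exact procedure:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 256
s = np.where(np.arange(n) < 129, 1.0, 3.0)       # piecewise-constant scene
y = s * np.exp(0.2 * rng.standard_normal(n))     # multiplicative (speckle-like) noise

x = np.log(y)                                    # log makes the noise additive

# one-level Haar transform
a = (x[0::2] + x[1::2]) / np.sqrt(2)
d = (x[0::2] - x[1::2]) / np.sqrt(2)

# local third-order statistic: windows with a large third moment are
# flagged as signal-bearing, since Gaussian noise has zero third moment
win = 8
m3 = np.convolve(d**3, np.ones(win)/win, mode='same')
thr = 3 * np.sqrt(15/win) * np.std(d)**3         # crude 3-sigma rule (assumed)
d_hat = np.where(np.abs(m3) > thr, d, 0.0)

# inverse Haar and back from the log domain
x_hat = np.empty(n)
x_hat[0::2] = (a + d_hat) / np.sqrt(2)
x_hat[1::2] = (a - d_hat) / np.sqrt(2)
y_hat = np.exp(x_hat)

mse_noisy = np.mean((y - s)**2)
mse_den = np.mean((y_hat - s)**2)
```

Zeroing the noise-dominated detail coefficients while retaining those flagged by the third-order test reduces the reconstruction error relative to the noisy input.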
In this paper, we present an unsupervised texture segmentation algorithm for synthetic aperture radar (SAR) images based on multiscale modeling of images in a wavelet pyramidal structure. An image consisting of different textures can be considered a realization of a collection of two interacting random processes: the hidden region label process and the observation process. A novel Gaussian Markov random field (GMRF) model is proposed to describe the fill-in of regions at each scale, and a multi-level logistic (MLL) MRF model with particular cliques is used to characterize the intrascale and interscale context dependencies. Following the sequential maximum a posteriori (SMAP) estimate, an expectation-maximization (EM) algorithm is adopted to estimate the parameters of the GMRF and to label each pixel iteratively from the coarse to the fine level. The proposed segmentation approach is applied to synthetic and SAR images, and the results demonstrate its performance.
With the development of SAR processing techniques, high image precision and a high real-time rate have become important metrics, especially in the military field. This paper presents a medium-grained parallel processing algorithm in which every processing stage is done in parallel, with task-level parallelism. It is well suited to parallel computers with good communication capacity. Experiments on the DAWNING3000 show that this parallel processing algorithm achieves good results in real-time rate and processing efficiency.
Based on the differences between the forward-looking and side-looking modes of a spaceborne SAR system, this paper carefully analyzes the Doppler properties of the forward-looking SAR (FSAR) mode, including the Doppler frequency and the Doppler bandwidth; points out how the azimuth resolution of FSAR is related to the carrier frequency, the platform altitude, and the azimuth position on the Earth's surface, and that it is not related to the antenna aperture length; discusses in detail how these factors affect the azimuth resolution and the constraints FSAR places on the pulse repetition frequency and the elevation angle; and describes the region of high-resolution imaging and the requirements FSAR imposes on the radar platform altitude.
We consider the problem of identification of airborne objects from high-range-resolution radar data. We use high-frequency asymptotics to show that certain features of the object correspond to identifiable features of the radar data. We study the cases of single scattering and scattering from re-entrant structures such as ducts. This work suggests a method for target identification that circumvents the need to create an intermediate radar image from which the object's characteristics are to be extracted. As such, this scheme may be applicable to efficient machine-based radar identification programs.
In this paper we raise some questions about the nature and consequences of the signal model underlying SAR/ISAR imaging. One describes a target here with an object function defined over space. The measurements from a sensor are described by some other function, related to the object function by an operator that describes the sensor. Imaging is then the inverse problem of finding an approximation to the object function, i.e. the image, given the incomplete measurements. The usual SAR/ISAR object function is a continuous distribution of isotropic point scatterers. In some cases this distribution needs to be a generalized function in order to describe the observed scattering, not only for hypothetical point scatterers but also for a simple object such as a plate. A generalized function is of course not a true function, and there is a conceptual difficulty in viewing an image as an approximation of such an object function. A common practice is to produce calibrated images, in the sense that the radar cross section of an isotropic point scatterer can be read directly from the level in a magnitude-squared image. We compute such calibrated images for some simple objects such as spheres, plates, and dihedrals, and show that they produce levels that cannot easily be interpreted. Instead, a nontrivial mix of object characteristics and imaging-system characteristics, such as bandwidth and aperture length, influences the level at a given image point. Even the overall appearance of the image can change. More sophisticated, "super-resolving" signal processing methods postulate statistical models for the targets, and we briefly review the assumptions behind some such methods. All such methods rely on modeling prior information about the observed objects. This is not easy to achieve using images, which vary considerably in form even for simple objects.
As an alternative, if we are prepared to leave the imaging paradigm, scattering center models give a possibility to accurately model a number of scattering phenomena. A very brief review of this promising approach is given.
Joint time-frequency analysis is applied to radar imaging
problems. Special attention is given to imaging applications, for
which the resolution is severely limited due to the available
bandwidth of the radar signal in both range and cross-range. This
includes the detection of landmines as well as foliage penetration
radar imaging. Motivated by this type of imaging problem, a new
joint time-frequency method, the STPDFT algorithm, is introduced
and compared with existing methods. The performance of all methods
is illustrated with synthetic test signals. In addition,
preliminary results are presented which demonstrate the
performance of joint time-frequency transforms when applied to low
resolution imaging problems.
We present an evaluation of the impact of a recently developed point-enhanced high range-resolution (HRR) radar profile reconstruction method on automatic target recognition (ATR) performance. We use several pattern recognition techniques to compare the performance of point-enhanced HRR profiles with conventional Fourier transform-based profiles. We use measured radar data of civilian ships and produce range profiles from such data. We use two types of classifiers to quantify recognition performance. The first type of classifier is based on the nearest neighbor technique. We demonstrate the performance of this classifier using a variety of extracted features, and a number of different distance metrics. The second classifier we use for target recognition involves position specific matrices, which have previously been used in gene sequencing. We compare the classification performance of point-enhanced HRR profiles with conventional profiles, and observe that point enhancement results in higher recognition rates in general.
The problem of predicting HRR radar and SAR signal magnitudes based on a limited number of observations is a challenging component of feature aided tracking. In this paper we describe the application of a scattering-based tomographic technique that builds persistent scatterer models of ground vehicles from a collection of HRR and/or SAR observations from varying look angles. Results are obtained using MSTAR data. Target detection results are shown using ROC curves and compared with nearest observation matching. Application of these techniques to the move-stop-move problem of vehicle tracking is also described.
The paper describes design principles and presents first results for the airborne LORA (low-frequency radar) system. It covers operating frequencies in the VHF and UHF bands and has both synthetic-aperture radar and ground moving target indication modes. The main motivation for the system is to facilitate detection of man-made targets in a wide range of conditions, i.e. stationary or moving targets as well as targets in open terrain or in concealment under foliage or camouflage. The LORA system will operate in several configurations extending from 20 MHz to 800 MHz. Initial flight trials in 2002 were successfully conducted using the 200-400 MHz band. SAR images have been formed from the acquired data and are presented. A second band, 400-800 MHz, has also been completed but has not yet been tested in flight. A third band, 20-90 MHz, is presently being added and will be completed during 2003. The paper also includes results from a recent experiment in northern Sweden which included an extensive target deployment to cover a broad range of operating conditions. VHF-band SAR (20-90 MHz) is compared with high-resolution Ku-band SAR. Results show the superior area-coverage rate of VHF compared to Ku-band for robust detection of stationary targets. The high-resolution images provided by the Ku-band SAR are, however, superior for classification and recognition purposes.
We focus on the specific problem of inversion of a collection of one-dimensional projections to reconstruct a three-dimensional image of a rigid body and estimate its orientation. We assume correspondence of the points in the projections is given and derive several useful results leading to an iterative solution algorithm and a fundamental understanding of its possible scope. Finally, the algorithm was simulated on synthetic HRR data which verified its function and convergence.
We describe an algorithm for class-independent automated target recognition (ATR) and association using range-Doppler images of moving targets and SAR images of stationary targets. This algorithm can be used both for target identification (by comparison against a pre-existing database of measurements of all potential targets) and target association (not requiring a pre-existing database). The algorithm computes a one-dimensional signature for each received range-Doppler image; these signatures are stored in a database for comparison against other detections. The signatures used in our algorithm are range profiles, generated from the clutter-suppressed, filtered image by incoherently integrating the image energy across a number of Doppler bins centered on the target. The result is then normalized, to remove information about the overall cross-section from the profile, and range-aligned with other collected profiles by matching the profile centroids. Statistical models of the profiles are created as the targets are tracked, and newly-created profiles are compared against the existing models by computing the likelihood of the new profile given a particular model.
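The signature construction described above (incoherent Doppler integration, normalization, centroid alignment) can be sketched as follows; the image, target placement, and Doppler band are all hypothetical:

```python
import numpy as np

def signature(image, doppler_band):
    """1-D range-profile signature from a range-Doppler image:
    incoherently integrate energy over Doppler bins around the target,
    normalize out the overall cross-section, and circularly align the
    profile on its centroid."""
    lo, hi = doppler_band
    prof = (np.abs(image[:, lo:hi])**2).sum(axis=1)  # incoherent integration
    prof /= prof.sum()                               # remove overall RCS scale
    centroid = int(np.round(np.sum(np.arange(len(prof)) * prof)))
    return np.roll(prof, len(prof)//2 - centroid)    # centroid to center

rng = np.random.default_rng(2)
img = 0.1 * (rng.standard_normal((64, 32)) + 1j*rng.standard_normal((64, 32)))
img[20:24, 10:20] += 3.0          # a hypothetical target straddling a few bins

p1 = signature(img, (8, 24))
p2 = signature(np.roll(img, 7, axis=0), (8, 24))   # same target, shifted in range
```

Because of the centroid alignment, the same target observed at a different range offset produces (nearly) the same signature, which is what makes profile-to-model comparison across detections meaningful.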
Moving-target high-resolution imaging with foliage-penetrating ultra-wideband synthetic aperture radar (FOPEN UWB SAR) is of great significance for battlefield awareness of concealed targets. Severe range migration and strong clutter make moving-target detection and imaging difficult; in particular, the signal-to-clutter ratio (SCR) is sometimes so low that moving targets are invisible in FOPEN UWB SAR imagery. To improve the SCR, the CLEAN technique is applied in the range-compressed data domain. The CLEAN technique and data reconstruction allow a single-channel FOPEN UWB SAR to suppress strong tree clutter and stationary-target signals in the region of interest. A new concept, the generalized Keystone transform, is defined, which can correct range migration of any order. FOPEN UWB SAR has a long integration time, and platform and target motion over this time lead to complex range migration. To obtain high-resolution imagery of moving targets, the generalized Keystone transform is applied to remove the range migration and to segment the data of multiple moving targets. Both the generalized Keystone transform and the CLEAN technique are applied to real FOPEN UWB SAR data. The results show that multiple moving targets in the trees are clearly detected and high-resolution imagery is formed.
We have studied the robustness of features against aspect variability for the purpose of target discrimination using polarimetric 35 GHz ISAR data. Images at resolutions of 10 cm and 30 cm have been used for a complete aspect range of 360 degrees. The data covered four military targets: T72, ZSU23/4, T62, and BMP2. For the study we composed several feature vectors out of individual features extracted from the images. The features are divided into three categories: radiometric, geometric, and polarimetric. We found that individual features show a strong variability as a function of aspect angle and cannot be used to discriminate between the targets irrespective of the aspect angle. Using feature vectors and a maximum likelihood classifier, reasonable discrimination (about 80%) between the four targets irrespective of the aspect angle was obtained at 10 cm resolution. At 30 cm resolution less significant discrimination (less than 70%) was found, irrespective of the kind of feature vector used. In addition, we investigated target discrimination per 30-degree aspect interval. To determine the aspect angle of targets we used a technique based on the Radon transform, which gave an accuracy of about 5 degrees in aspect angle. We found that in this case good discrimination (more than 90%) was obtained at 10 cm resolution and reasonable discrimination (about 80%) at 30 cm resolution. The results are compared with analogous results from MSTAR data (30 cm resolution) of comparable targets.
An ultra-wideband (UWB) synthetic aperture radar (SAR) simulation technique that employs physical and statistical models is developed and presented. This joint physics/statistics based technique generates images that have many of the "blob-like" and "spiky" clutter characteristics of UWB radar data in forested regions while avoiding the intensive computations required for the implementation of low-frequency numerical electromagnetic simulation techniques.
Approaches towards developing "self-training" algorithms for UWB radar target detection are investigated using the results of this simulation process. These adaptive approaches employ some form of modified singular value decomposition (SVD) algorithm in which small blocks of data in the neighborhood of a sliding test window are processed in real time in an effort to estimate localized clutter characteristics. These real-time local clutter models are then used to cancel clutter in the sliding test window. Comparative results from three SVD-based approaches to adaptive and "self-trained" target detection algorithms are reported. These approaches are denoted "Energy-Normalized SVD", "Condition-Statistic SVD", and "Terrain-Filtered SVD". The results indicate that the "Terrain-Filtered SVD" approach, in which a pre-filter is applied in an effort to eliminate severe clutter discretes that adversely affect performance, appears promising for developing "self-training" algorithms for applications that may require localized "on-the-fly" training due to a lack of accurate off-line training data.
Classification of targets in high-resolution synthetic aperture radar imagery is a challenging problem in practice, due to extended operating conditions such as obscuration, articulation, varied configurations and a host of camouflage, concealment and deception tactics. Due to radar cross-section variability, the ability to discriminate between targets also varies greatly with target aspect. Potential space-borne and air-borne sensor systems may eventually be exploited to provide products to the warfighter at tactically relevant timelines. With such potential systems in place, multiple views of a given target area may be available to support targeting. In this paper, we examine the aspect dependence of SAR target classification and develop a Bayesian classification approach that exploits multiple incoherent views of a target. We further examine several practical issues in the design of such a classifier and consider sensitivities and their implications for sensor planning. Experimental results indicating the benefits of aspect diversity for improving performance under extended operating conditions are shown using publicly released 1-foot SAR data from DARPA's MSTAR program.
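Under a conditional-independence assumption across views, multi-view Bayesian classification reduces to summing per-view log-likelihoods before normalizing. A minimal sketch with made-up numbers (a uniform class prior is assumed):

```python
import numpy as np

def fuse_views(loglik):
    """Bayesian fusion of incoherent views: loglik[v, c] is the
    log-likelihood of view v under class c; summing over views and
    normalizing gives the multi-view posterior (uniform prior)."""
    joint = loglik.sum(axis=0)
    joint -= joint.max()                 # numerical stability
    post = np.exp(joint)
    return post / post.sum()

# hypothetical two-class problem: each single view is only mildly
# informative, but three aspects together favor class 0
loglik = np.array([[-1.0, -1.2],
                   [-0.9, -1.1],
                   [-1.3, -1.4]])
post = fuse_views(loglik)
```

Each view alone gives a posterior barely above 0.5 for class 0; fusing three views sharpens the decision, which is the mechanism behind the aspect-diversity gains reported in the abstract.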
This paper describes a novel evolutionary method for automatic induction of target recognition procedures from examples. The learning process starts with training data containing SAR images with labeled targets and consists of coevolving a population of feature extraction agents that cooperate to build an appropriate representation of the input image. Features extracted by a team of cooperating agents are used to induce a machine learning classifier that is responsible for making the final decision of recognizing a target in a SAR image. Each agent (individual) contains a feature extraction procedure encoded according to the principles of linear genetic programming (LGP). As in 'plain' genetic programming, in LGP an agent's genome encodes a program that is executed and tested on the set of training images during the fitness calculation. The program is a sequence of calls to a library of parameterized operations, including, but not limited to, global and local image processing operations, elementary feature extraction, and logic and arithmetic operations. Particular calls operate on working variables that enable the program to store intermediate results and therefore design complex features. This paper contains a detailed description of the learning and recognition methodology outlined here. In the experimental part, we report and analyze the results obtained when testing the proposed approach to SAR target recognition on the MSTAR database.
This paper focuses on a genetic algorithm-based method that automates the construction of local-feature-based composite class models to capture the salient characteristics of configuration variants of vehicle targets in SAR imagery and increase the performance of SAR recognition systems. The recognition models are based on quasi-invariant local features: SAR scattering center locations and magnitudes. The approach uses an efficient SAR recognition system as an evaluation function to determine the fitness of class models. Experimental results are given on the fitness of the composite models and the similarity of both the original training model configurations and the synthesized composite models to the test configurations. In addition, results are presented to show SAR recognition performance on configuration variants of MSTAR vehicle targets.
The focus of this paper is a genetic algorithms based method
to automate the construction of local feature based composite class
models that capture the salient characteristics of configuration
variants of vehicle targets in SAR imagery and increase the
performance of SAR recognition systems. The recognition models are
based on quasi-invariant local features, SAR scattering center
locations and magnitudes. The approach uses an efficient SAR
recognition system as an evaluation function to determine the fitness
of candidate members of a genetic population of new models and
synthetically generates composite class models that are more similar
to existing configurations than those configurations are to each
other. Intuitively, specific features of models of versions A and B
of an object may not match, because they are outside of some
tolerance, while they may both match some synthetic version C that is
somewhere in the middle. Experimental recognition results are
presented in terms of receiver operating characteristic (ROC) curves
to show the improvements in SAR recognition performance utilizing
composite class models of configuration variants of MSTAR vehicle targets.
This paper demonstrates an image matching methodology for application
in automatic target recognition systems. This method is based on chunking of an image and can be applied to any image matching system that uses templates to match against a given input image. Using information-theoretic measures, templates are divided into sub-parts, called chunks. These chunks are scored individually against corresponding parts of an input image. Sub-part scoring adds the ability to distinguish poorly matching areas of the target from those that match well. If a very small set of chunks scores significantly worse than the other chunks, then the poor-scoring chunks may be discarded. This increases the score of an input image that is of the same class, while there is little or no effect on the score of an input image that is of another class.
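The chunk-and-discard scoring idea can be sketched as follows; normalized correlation and a fixed discard count stand in for the paper's information-theoretic chunk selection, and all sizes are arbitrary:

```python
import numpy as np

def chunk_score(template, image, n_chunks=4, n_discard=1):
    """Chunk-based matching: split the template into horizontal strips,
    score each strip against the corresponding image strip with normalized
    correlation, then drop the worst-scoring chunks before averaging."""
    t_chunks = np.array_split(template, n_chunks, axis=0)
    i_chunks = np.array_split(image, n_chunks, axis=0)
    scores = []
    for t, i in zip(t_chunks, i_chunks):
        t0, i0 = t - t.mean(), i - i.mean()
        denom = np.linalg.norm(t0) * np.linalg.norm(i0)
        scores.append((t0 * i0).sum() / denom if denom > 0 else 0.0)
    scores = np.sort(np.asarray(scores))
    return scores[n_discard:].mean()     # discard the poorest chunks

rng = np.random.default_rng(3)
template = rng.standard_normal((16, 16))
same = template + 0.1 * rng.standard_normal((16, 16))
same[0:4] = rng.standard_normal((4, 16))    # one occluded/corrupted chunk
other = rng.standard_normal((16, 16))

s_same = chunk_score(template, same)
s_other = chunk_score(template, other)
```

Discarding the occluded chunk restores a high score for the same-class input, while an other-class input gains little from the discard, which is the separation the abstract describes.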
Synthetic aperture radar (SAR) automatic target recognition (ATR) systems will not be effective and efficient without incorporating the user in acquiring and identifying a target. Typically, a SAR-ATR goal is to automatically identify a target for a user; however, in most cases, the data resolution and data availability are not sufficient to identify the target over all operating conditions. Furthermore, when the target acquisition and recognition cycle is time-constrained, it is important for the SAR-ATR system to quickly present the target list, which the user can edit to reduce the target analysis time. In this paper, we explore user capabilities to assist in a time-sensitive target (TST) recognition task by understanding: (1) user needs, (2) SAR-ATR models, and (3) simulation metrics for the SAR-ATR analysis. We utilize the User-Fusion Model, introduced by Blasch and Plano, to analyze the interaction between an image-based SAR-ATR analysis and user actions to facilitate a TST targeting task. Three metrics (timeliness, confidence, and accuracy) are plotted in a novel 3D ROC curve for a given level of throughput to characterize a user-SAR-ATR (USA) model evaluation.
Ground surveillance and target recognition by radar have become increasingly important over the years. Modern digitally controlled radar systems have the ability to operate quasi-simultaneously in two or more different modes; e.g., after detection of moving targets by MTI, these target hypotheses are recorded by a high-resolution spotlight SAR. To classify the SAR signatures, different techniques have been investigated. The objective of our work was to support the decision process in choosing the best combination of methods for the problem of ground target classification in high-resolution SAR images. The criteria for optimizing the classification are correctness (low false alarm rate (FAR)), robustness, and computational effort. The investigations have been carried out using the MSTAR public target dataset. In the paper we describe the examination of new classifier approaches such as the support vector machine (SVM) and relevance vector machine (RVM) in combination with superresolution methods such as the CLEAN algorithm. For this purpose we have developed an experimental software system. Its processing chain consists of the following modules: preprocessing, feature extraction, and classification. The tests with the SVM have shown that without preprocessing too many support vectors (up to 50%) are used. Therefore the RVM has been chosen to overcome this disadvantage. The preprocessing methods have been used to reduce the noise and to restore/extract the significant SAR signature. The result of our investigations is an assessment of the different methods and several method combinations. Based on these results, the investigation will be extended by more realistic new datasets with a resolution as high as or higher than the MSTAR data.
Automatic or assisted target recognition (ATR) is an important application of synthetic aperture radar (SAR). Most ATR researchers have focused on the core problem of declaration, that is, detection and identification of targets of interest within a SAR image. For ATR declarations to be of maximum value to an image analyst, however, it is essential that each declaration be accompanied by a reliability estimate or confidence metric. Unfortunately, the need for a clear and informative confidence metric for ATR has generally been overlooked or ignored. We propose a framework and methodology for evaluating the confidence in an ATR system's declarations and competing target hypotheses. Our proposed confidence metric is intuitive, informative, and applicable to a broad class of ATRs. We demonstrate that seemingly similar ATRs may differ fundamentally in their ability, or inability, to identify targets with high confidence.
The ATR Workbench is an evaluation platform implemented to assist in
the development of automation techniques for target recognition within SAR imagery. This will allow researchers and Image Analysts (IAs) to investigate the capabilities of various commercial and experimental applications, singly or in combination, as applied to the target recognition process. The platform will enable studies to determine which aspects of the target recognition process improve IA performance when automated, which methods best improve classifier performance, as well as which methods work better for particular environments and target class definitions. Based largely on open-source tools, the Workbench was developed so as to provide a platform independent bridge between automatic target detection (ATD) applications and target classifiers. It is capable of importing several kinds of ATD reports, of applying different feature extraction and preprocessing algorithms and of implementing various aspects of automatic target recognition (ATR) applications while importing, displaying and reporting their results. Each step may be automated or operated interactively, as required. Initially, this capability is demonstrated on imagery based upon the public MSTAR data set.
Performance of automatic target recognition (ATR) systems depends on numerous factors including the mission description, operating conditions, sensor modality, and ATR algorithm itself. Performance prediction models sensitive to these factors could be applied to ATR algorithm design, mission planning, sensor resource management, and data collection design for algorithm verification. Ideally, such a model would return measures of performance (MOPs) such as probability of detection (Pd), correct classification (Pc), and false alarm (Pfa), all as a function of the relevant predictor variables. Here we discuss the challenges of model-based and data-based approaches to performance prediction, concentrating especially on the synthetic aperture radar (SAR) modality. Our principal conclusion for model-based performance models (predictive models derived from fundamental physics- and statistics-based considerations) is that analytical progress can be made for performance of ATR system components, but that performance prediction for an entire ATR system under realistic conditions will likely require the combined use of Monte Carlo
simulations, analytical development, and careful comparison to MOPs from real experiments. The latter are valuable for their high fidelity but have a limited range of applicability. Our principal conclusion for data-based performance models (models fit to empirically derived MOPs) is that they offer a potentially important means for extending the utility of empirical results. However, great care must be taken in their construction due to the necessarily sparse sampling of operating conditions, the high dimensionality of the input space, and the diverse character of the predictor variables. The applicability of such models for extrapolation also remains an open question.
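As a toy illustration of the Monte Carlo side of such a performance model, the sketch below estimates Pd and Pfa for a simple Gaussian threshold detector. The signal model here is an assumption chosen for illustration, not one of the physics-based models discussed above.

```python
import random

def monte_carlo_pd_pfa(snr, threshold, trials=20000, seed=1):
    """Estimate probability of detection (Pd) and false alarm (Pfa) by
    simulation: target-present samples are Gaussian with mean `snr`,
    clutter-only samples have mean 0 (toy stand-in for a physics model)."""
    rng = random.Random(seed)
    detections = sum(rng.gauss(snr, 1.0) > threshold for _ in range(trials))
    false_alarms = sum(rng.gauss(0.0, 1.0) > threshold for _ in range(trials))
    return detections / trials, false_alarms / trials
```

Sweeping `threshold` over a range of values traces out an ROC curve, and sweeping `snr` shows how the MOPs vary with one predictor variable.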
Synthetic aperture radar (SAR) imagery is one of the most valuable sensor data sources for today's military battlefield surveillance and analysis. The collection of SAR images by various platforms (e.g. Global Hawk, NASA/JPL AIRSAR, etc.) and on various missions for multiple purposes (e.g. reconnaissance, terrain mapping, etc.) has resulted in vast amounts of data over wide surveillance areas. The pixel-to-eye ratio is simply too high for human analysts to rapidly sift through massive volumes of sensor data and yield engagement decisions quickly and precisely. Effective automatic target recognition (ATR) algorithms to process this growing mountain of information are clearly needed. However, even after many years of research, SAR ATR still remains a highly challenging research problem. What makes SAR ATR problems difficult is the amount of variability exhibited in the SAR image signatures of targets and clutter. There are many different factors that can cause the variability in SAR image signatures. It is conventional to categorize these factors into three major groups, known as the extended operating conditions (OC's) of target, environment, and sensor. The group of sensor OC's includes SAR sensor parametric variations in depression angle, polarization, squint angle, frequencies (UHF, VHF, X band) and bandwidth, pulse repetition frequency (PRF), multi-look, antenna geometry and type, image formation algorithms, platform variations and geometric errors, noise level, etc. Many existing studies of SAR ATR have traditionally focused on the variability of SAR signatures caused by a sub-space of target OC's and environment OC's. Similar studies of the SAR parametric variations in sensor OC's have been very limited due to the lack of data across sensor OC's and the inherent difficulty, as well as the high cost, of supplying various sensor OC's during data collection.
This paper presents the results of a comprehensive survey of SAR ATR research involving various sensor OC's. The survey found that, to date, very little research has been devoted to the problems of sensor OC's and their effects on the performance of SAR-image-based ATR algorithms. Given the importance of sensor OC's in ATR applications, we have developed a research platform as well as important focus areas for future research in SAR parametric variations. A number of baseline ATR algorithms in the research platform have been implemented and verified. We have also planned and started a SAR data simulation process across the spectrum of sensor OC's. A road-map for future research on SAR parametric variations (sensor OC's) and their impact on ATR algorithms is laid out in this paper.
Performance assessment of automatic target detection and recognition algorithms for SAR systems (or indeed any other sensors) is essential if the military utility of the system/algorithm mix is to be quantified. This is a relatively straightforward task if extensive trials data from an existing system are used. However, a crucial requirement is to assess the potential performance of novel systems as a guide to procurement decisions. This task is no longer straightforward, since a hypothetical system cannot provide experimental trials data. QinetiQ has previously developed a theoretical technique for classification algorithm performance assessment based on information theory. The purpose of the study presented here has been to validate this approach. To this end, experimental SAR imagery of targets has been collected using the QinetiQ Enhanced Surveillance Radar to allow algorithm performance assessments as a number of parameters are varied. In particular, performance comparisons can be made for (i) resolutions up to 0.1 m, (ii) single-channel versus polarimetric data, (iii) targets in the open versus targets in scrubland, and (iv) use versus non-use of camouflage. The change in performance as these parameters are varied has been quantified from the experimental imagery, whilst the information-theoretic approach has been used to predict the expected variation of performance with parameter value. A comparison of these measured and predicted assessments has revealed the strengths and weaknesses of the theoretical technique, as will be discussed in the paper.
We present the theoretical basis and a top-level system design for estimating and predicting the uncertainty from single- and multiple-look model-based automatic target recognition (ATR). Uncertainty estimation is used in decision making based on the probability of correct identification and the probability of a false alarm for a given ATR result. Uncertainty prediction provides a basis for asset management by establishing the value of additional looks at a target. A number of first-principles theoretical models have been developed based on information theory and physics. These generally bound performance under idealized conditions. Our hypothesis-test approach is designed to support operational uncertainty estimation and prediction based on statistics from parameterized models, simulations, and measurements. A significant challenge that we investigate is generating the probability density of the test statistic under the null hypothesis, which contains un-modeled target types and natural clutter. Another challenge we address is establishing uncertainty under multiple-look fusion.
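The value of additional looks can be illustrated with a minimal Bayes-fusion sketch. The independence assumption and the likelihood-ratio inputs below are illustrative simplifications, not the paper's test statistics.

```python
def fuse_looks(prior, likelihood_ratios):
    """Bayes-fuse independent looks at a target: each look contributes a
    likelihood ratio P(obs | target) / P(obs | clutter); returns the
    posterior probability that the target is present."""
    odds = prior / (1.0 - prior)
    for lr in likelihood_ratios:
        odds *= lr  # independent looks multiply in the odds domain
    return odds / (1.0 + odds)
```

Each additional look with a likelihood ratio above 1 raises the posterior, which quantifies when another look is worth its sensor cost.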
Research conducted on complex Synthetic Aperture Radar (SAR) data over the last two years has culminated in the development of a compression algorithm compatible with current imagery standards. This new algorithm also includes adaptive attributes which identify the radar data type and data characteristics, and then select optimal quantization parameters, generated based on the statistics of the data, from a knowledge base. This algorithm has achieved near-lossless compression ratios in excess of 20 to 1, with reduced Root Mean Square Error (RMSE) and increased Peak Signal to Noise Ratio (PSNR). This algorithm also produces minimal degradation when producing phase-derived radar products. This paper describes the algorithm development, operation, and test results obtained using this compression algorithm. The algorithm's component elements are described, including the use of an adaptive preprocessor, modified quantizer, and knowledge base. This paper details the improved results observed for compressed data, magnitude imagery, and phase-derived products generated during the study.
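The statistics-driven quantization step can be sketched as follows. The sigma-based step-size rule is a hypothetical stand-in for the paper's knowledge base; the RMSE and PSNR helpers correspond to the metrics reported.

```python
import math
import statistics

def adaptive_step(samples, bits=8):
    """Pick the quantizer step from the data statistics: spread 2**bits
    levels over roughly +/-4 sigma (a hypothetical knowledge-base rule)."""
    return 8.0 * statistics.stdev(samples) / (2 ** bits)

def quantize(samples, step):
    """Uniform quantizer: snap each sample to the nearest multiple of step."""
    return [step * round(x / step) for x in samples]

def rmse(a, b):
    """Root Mean Square Error between original and reconstructed samples."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

def psnr(a, b, peak):
    """Peak Signal to Noise Ratio in dB, relative to the given peak value."""
    e = rmse(a, b)
    return float("inf") if e == 0.0 else 20.0 * math.log10(peak / e)
```

Because uniform quantization bounds the per-sample error by half the step size, tying the step to the data's spread is what keeps the compression near-lossless.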