This study applies a non-parametric estimate of the Bayes Error Rate (BER) to current Air Force problems related to target classification. Whether they are neural networks, autoencoders, or other architectures, classifiers are commonly assessed through confusion matrices and associated statistics, or through visualizations of feature spaces such as t-SNE plots. However, these methods depend on the test data used to assess the network's performance, not on the robustness of the classifier itself. This research incorporates a different statistic that estimates the BER, the probability of misclassification given some data, to serve as an upper bound on potential classifier performance. The estimate leverages a Friedman-Rafsky test statistic: the number of cross-label edges in a minimum spanning tree (MST) over points in the feature space. The first part of this study examines the behavior of the BER estimate over a general learning process, such as successive epochs in the training of a neural network. The second part examines whether certain factors affect the separability of synthetic aperture radar (SAR) images of targets of interest. Because it is often difficult and expensive to survey real targets and generate SAR images, 3-D CAD models are frequently used to generate synthetic SAR images. Since substantial resources are devoted to perfecting these models, this study applies the BER estimate to examine whether minute changes to the CAD models affect separability in the image domain. The results suggest that, if the topology of the target is maintained in the CAD domain, low-fidelity versions of targets (with 25% of the faces of highly accurate models) exhibit separability and classification accuracy essentially identical to those of their high-fidelity counterparts. The BER estimate also shows promising applications in other domains, offering an intuitive yet effective way to describe the underlying structure of feature spaces.
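A minimal sketch of the Friedman-Rafsky construction described above, assuming two classes, Euclidean feature distances, and the Henze-Penrose divergence bounds commonly paired with this statistic; the study's exact formulation may differ:

```python
# Sketch: BER bounds from the Friedman-Rafsky statistic (cross-label MST edges).
# Assumes exactly two classes are present; bound form is the Henze-Penrose
# divergence construction, which may not match the paper's exact variant.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.sparse.csgraph import minimum_spanning_tree

def ber_bounds(features, labels):
    labels = np.asarray(labels)
    n = len(labels)
    dists = squareform(pdist(features))                 # pairwise Euclidean distances
    mst = minimum_spanning_tree(dists).tocoo()          # MST over all feature points
    cross = np.sum(labels[mst.row] != labels[mst.col])  # Friedman-Rafsky statistic
    m = int(np.sum(labels == labels[0]))
    k = n - m                                           # two-class sample sizes
    p, q = m / n, k / n                                 # empirical class priors
    d_hp = max(1.0 - cross * n / (2.0 * m * k), 0.0)    # Henze-Penrose divergence
    u = 4.0 * p * q * d_hp + (p - q) ** 2
    return 0.5 * (1.0 - np.sqrt(u)), 0.5 * (1.0 - u)    # (lower, upper) BER bounds
```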
In this paper we present a methodology for validating a 3D Computer-Aided Design (CAD) model's accuracy for radar data synthesis. CAD models have been used to generate computer-simulated radio frequency (RF) data. One problem with existing CAD-based simulations is that there is no metric or tool to verify whether data produced from a CAD model can be classified correctly before and after modifications have been made. This paper presents a methodology to quantify the similarities and differences in data generated from CAD models before and after modification, and presents this information through confusion matrices and a visualization technique. Results for three experiments involving CAD model modifications are presented.
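One way to realize the before/after comparison this methodology describes is to diff confusion matrices from a fixed classifier, sketched below; the classifier choice, the function name, and the assumption that the before/after imagery shares ground-truth labels are illustrative, not from the paper:

```python
# Illustrative sketch: compare how a fixed classifier confuses targets in
# imagery synthesized before vs. after a CAD edit. Features are assumed
# flattened per image; any sklearn-style classifier would do here.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix

def compare_confusions(X_train, y_train, X_before, X_after, y_targets):
    clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
    cm_before = confusion_matrix(y_targets, clf.predict(X_before))
    cm_after = confusion_matrix(y_targets, clf.predict(X_after))
    return cm_after - cm_before   # off-diagonal growth flags new confusions
```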
The publicly available Moving and Stationary Target Acquisition and Recognition (MSTAR) synthetic aperture radar (SAR) dataset has been a valuable tool in the development of SAR automatic target recognition (ATR) algorithms over the past two decades, leading to excellent target classification results. However, because of the large number of possible sensor parameters, target configurations, and environmental conditions, the SAR operating condition (OC) space is vast. This makes it impossible to collect sufficient measured data to cover the entire OC space, so synthetic data must be generated to augment measured datasets. Studying the fidelity of synthetic data with respect to classification tasks is itself non-trivial. To that end, we introduce the Synthetic and Measured Paired and Labeled Experiment (SAMPLE) dataset, which consists of SAR imagery from the MSTAR dataset and well-matched synthetic data. By matching target configurations and sensor parameters between the measured and synthetic data, the SAMPLE dataset is ideal for investigating the differences between measured and synthetic SAR imagery. In addition to the dataset, we propose four experimental designs challenging researchers to investigate the best ways to classify targets in measured SAR imagery given synthetic SAR training imagery.
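The core protocol behind such experimental designs, train on synthetic imagery and score on measured imagery while optionally mixing in a measured fraction, might be sketched as follows; the mixing scheme and all names are assumptions for illustration, not the paper's exact four designs:

```python
# Sketch of a synthetic-to-measured evaluation: train on synthetic chips plus
# a fraction k of measured chips, score on the measured remainder. `model` is
# any object with sklearn-style fit/score; data layout is assumed.
import numpy as np

def mixed_training_accuracy(model, X_syn, y_syn, X_meas, y_meas, k=0.0, seed=0):
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y_meas))
    n_mix = int(k * len(y_meas))                 # measured samples added to training
    train_idx, test_idx = idx[:n_mix], idx[n_mix:]
    X_train = np.concatenate([X_syn, X_meas[train_idx]])
    y_train = np.concatenate([y_syn, y_meas[train_idx]])
    model.fit(X_train, y_train)
    return model.score(X_meas[test_idx], y_meas[test_idx])
```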
Deep learning has been applied successfully to many computer vision problems, improving image classification error rates. Rather than optical images, we apply deep learning via a Siamese Neural Network (SNN) to classify synthetic aperture radar (SAR) images. This application of Automatic Target Recognition (ATR) utilizes an SNN composed of twin AlexNet-based Convolutional Neural Networks (CNNs). Using the processing power of GPUs, we trained the SNN with combinations of synthetic images on one twin and measured Moving and Stationary Target Acquisition and Recognition (MSTAR) images on the second twin. We trained the SNN on three target types (T-72, BMP2, and BTR-70) and used a representative synthetic model of each target to classify new SAR images. Even with a relatively small quantity of data (by machine learning standards), we found that the SNN performed comparably to a CNN and converged faster. The results showed the T-72 to be the easiest to identify, whereas the network sometimes confused the BMP2 and the BTR-70. We also incorporated two additional targets (M1 and M35) into the validation set. With less training (for example, a single additional epoch), the SNN did not produce the same results as if all five targets had been trained over all the epochs. Nevertheless, an SNN represents a novel and beneficial approach to SAR ATR.
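A minimal PyTorch sketch of the twin arrangement described above: a shared AlexNet backbone embeds a synthetic chip on one input and a measured chip on the other, trained with a standard contrastive loss. Weight sharing, the embedding size, the loss, and the three-channel input are illustrative assumptions, not the paper's verified configuration:

```python
import torch
import torch.nn as nn
from torchvision.models import alexnet

class SiameseSAR(nn.Module):
    """Twin AlexNet embedding network; one forward pass per image pair."""
    def __init__(self, embed_dim=128):
        super().__init__()
        backbone = alexnet(weights=None)                      # no pretrained weights
        backbone.classifier[-1] = nn.Linear(4096, embed_dim)  # embedding head
        self.twin = backbone                                  # shared between inputs

    def forward(self, x_synthetic, x_measured):
        # single-channel SAR chips assumed replicated to 3 channels for AlexNet
        return self.twin(x_synthetic), self.twin(x_measured)

def contrastive_loss(z1, z2, same_target, margin=1.0):
    """same_target: float tensor of 1s (same class) and 0s (different class)."""
    d = nn.functional.pairwise_distance(z1, z2)
    return torch.mean(same_target * d.pow(2) +
                      (1.0 - same_target) * torch.clamp(margin - d, min=0.0).pow(2))
```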
When simulating multisensor signature data (including SAR, LIDAR, EO, IR, etc.), geometry data are required that accurately represent the target. Most vehicular targets can, in real life, exist in many possible configurations: a rotated turret, an open door, a missing roof rack, or a seat made of metal or wood. Previously we used the Modelman (.mmp) format and tool to represent and manipulate our articulable models. Unfortunately, Modelman is now an unsupported tool with an undocumented binary format. Some work has been done to reverse-engineer a reader in MATLAB so that the format could continue to be useful, but this work was tedious and resulted in an incomplete conversion. In addition, the resulting articulable models could not be altered and re-saved in the Modelman format. The AFacet (.afacet) articulable facet file format is a replacement for the binary Modelman (.mmp) file format, with a one-time, straightforward conversion path from Modelman to AFacet. It is a simple ASCII, comma-separated, self-documenting format that is easily readable (and in many cases usefully editable) by a human with any text editor, preventing future obsolescence. Because the format is simple, it is relatively easy for even a novice programmer to write a program that reads and writes AFacet files in any language, without any special libraries. This paper presents the AFacet format, as well as a suite of tools for creating, articulating, manipulating, viewing, and converting the 370+ models (at the time of writing) that have been converted to the AFacet format.
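In that spirit, a hypothetical reader takes only a few lines; the record layout assumed below (comment lines beginning with '#', one comma-separated, keyword-tagged record per line) is an illustration of the format's stated goals, not the published AFacet schema:

```python
# Hypothetical AFacet-style reader: plain ASCII, comma-separated, no special
# libraries. The '#'-comment and keyword-first record layout are assumptions.
def read_records(path):
    records = []
    with open(path, "r") as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#"):   # skip blanks and comments
                continue
            fields = [f.strip() for f in line.split(",")]
            records.append((fields[0], fields[1:]))  # (keyword, values)
    return records
```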