For electro-optical object recognition, convolutional neural networks (CNNs) are the state of the art. Given large datasets, CNNs are able to learn meaningful features for classification. However, their application to synthetic aperture radar (SAR) has been limited. In this work, we experimented with various CNN architectures on the MSTAR SAR dataset. As input to the CNN, we used the magnitude and phase (2 channels) of the SAR imagery. We used the deep learning toolboxes Caffe and Torch7. Our results show that we can achieve 93% accuracy on the MSTAR dataset using CNNs.
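The 2-channel magnitude/phase representation mentioned above can be sketched as follows. This is an illustrative example of ours, not code from the paper; the function name `sar_to_channels` and the channels-first layout are assumptions.

```python
import numpy as np

def sar_to_channels(complex_chip):
    """Stack the magnitude and phase of a complex-valued SAR chip into a
    2-channel real array (channels-first), one plausible CNN input layout."""
    mag = np.abs(complex_chip)            # channel 0: magnitude
    phase = np.angle(complex_chip)        # channel 1: phase in radians, [-pi, pi]
    return np.stack([mag, phase], axis=0)
```

In practice the magnitude channel is often log-scaled or normalized before training, but the stacking step itself is this simple.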
Advanced correlation filters (CFs) were introduced over three decades ago to offer distortion-tolerant object recognition and are used in applications such as automatic target recognition (ATR) and biometric recognition. Advances in CF design include minimum average correlation energy (MACE) filters, which produce sharp correlation peaks and offer excellent discrimination; optimal tradeoff synthetic discriminant function (OTSDF) filters, which allow the filter designer to control the tradeoff between peak sharpness and noise tolerance; maximum average correlation height (MACH) filters, which remove correlation peak constraints to reduce filter design complexity; and quadratic correlation filters (QCFs), which extend linear CFs to include second-order nonlinearities. In this paper, we summarize two recent major advances in CF design. The first is the introduction of maximum margin correlation filters (MMCFs), which combine the excellent localization properties of CFs with the very good generalization abilities of support vector machines (SVMs). The second is the introduction of zero-aliasing correlation filters (ZACFs), which eliminate the aliasing in CF design caused by the circular correlation implied by the use of discrete Fourier transforms (DFTs).
Discrete Fourier transforms (DFTs) are typically used to compute correlations when implementing correlation filters (CFs). Because of the properties of DFTs, the resulting correlations are actually circular (also known as periodic) correlations. Using current CF design techniques, it is not possible to design a CF that produces exactly the desired linear correlation output. There are several techniques that may be used to reduce the effects of circular correlation. In this paper, we describe these techniques and provide experimental results that compare them.

This work is sponsored by the Air Force Research Laboratory (AFRL). The views and conclusions contained in this document are those of the authors and should not be interpreted as representing official policies, either expressed or implied, of AFRL or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein. This document is approved for public release via PA#: 88ABW-2013-1359.
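The circular-versus-linear distinction that motivates this work can be seen in a toy example. The sketch below is ours, not from the paper; it shows the wrap-around that DFT-based correlation introduces and the standard zero-padding step that recovers the linear correlation.

```python
import numpy as np

# Toy 1-D "image" and "filter".
x = np.array([1.0, 2.0, 3.0, 4.0])
h = np.array([1.0, 1.0, 0.0, 0.0])

# Multiplying DFTs (with a conjugate) and inverting yields a *circular*
# correlation: values near the end wrap around to the beginning.
circ = np.real(np.fft.ifft(np.fft.fft(x) * np.conj(np.fft.fft(h))))

# Zero-padding both signals to length >= len(x) + len(h) - 1 before the
# DFT removes the wrap-around and recovers the linear correlation.
n = len(x) + len(h) - 1
lin = np.real(np.fft.ifft(np.fft.fft(x, n) * np.conj(np.fft.fft(h, n))))
```

Here `circ[3]` (which equals 5) is the sum of the linear value at lag 3 (which is 4) and the wrapped-around value at lag -1 (which is 1), which is exactly the aliasing that zero-padding eliminates.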
We present two generalized linear correlation filters (CFs) that encompass most of the state-of-the-art linear CFs. The common criteria used in linear CF design are the mean squared error (MSE), output noise variance (ONV), and average similarity measure (ASM). We present a simple formulation that uses an optimal tradeoff among these criteria, both with and without constraining the correlation peak value, and refer to the resulting filters as the generalized Constrained Correlation Filter (CCF) and Unconstrained Correlation Filter (UCF). We show that most state-of-the-art linear CFs are subsets of these filters. We present a technique for efficient UCF computation. We also introduce the modified CCF (mCCF), which chooses a unique correlation peak value for each training image, and show that mCCF usually outperforms both UCF and CCF.
Correlation filters have been shown to perform well for localization and classification tasks. In this paper, we investigate different techniques used to pre- and post-process the data to improve the correlation filter performance. In addition, we present an efficient method to use zero-mean, unit-norm test chips in a large test image. We compare the localization, classification, and recognition performance when we apply one or more of these methods.
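One common way to make per-window normalization efficient, and a plausible flavor of the method referred to above, is to compute each window's mean and norm from integral images (box sums) rather than explicit loops. This sketch is our illustration under that assumption; the function name `local_mean_and_norm` is ours.

```python
import numpy as np

def local_mean_and_norm(img, kh, kw):
    """Per-window mean and Euclidean norm of the zero-mean window for every
    kh-by-kw window in img, computed with integral images (box sums)."""
    def box_sum(a):
        # Integral image with a zero row/column prepended, then the standard
        # four-corner difference gives every window sum at once.
        s = np.cumsum(np.cumsum(a, axis=0), axis=1)
        s = np.pad(s, ((1, 0), (1, 0)))
        return s[kh:, kw:] - s[:-kh, kw:] - s[kh:, :-kw] + s[:-kh, :-kw]

    n = kh * kw
    s1 = box_sum(img)        # sum of pixels in each window
    s2 = box_sum(img ** 2)   # sum of squared pixels in each window
    mean = s1 / n
    norm = np.sqrt(np.maximum(s2 - s1 ** 2 / n, 0.0))
    return mean, norm
```

Dividing the correlation output of a zero-mean filter by `norm` then evaluates the filter against a zero-mean, unit-norm version of every test chip without extracting the chips explicitly.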
Iris recognition is a well-known technique for identifying persons. However, this technique requires high-resolution
images in order to automatically segment the iris. In some scenarios, obtaining the required resolution may
be difficult. In this paper, we investigate the recognition of ocular regions using correlation filters without
segmenting the iris region. This method uses the whole eye region and surrounding areas, i.e., the ocular region,
for identification. In our experiments we use the recently developed Quadratic Correlation Filter and show that
at low resolutions segmentation-free ocular recognition can succeed while iris segmentation fails.
Correlation filters (CFs) can detect multiple targets in one scene, making them well-suited for automatic target
recognition (ATR) applications. Quadratic CFs (QCFs) can improve performance over linear CFs and are able
to detect one class of targets while rejecting clutter. We present a method to extend QCF capabilities to detect
two classes of targets and reject clutter. We integrate the ATR tasks of detection, recognition, and tracking
using the Multi-Frame Correlation Filter (MFCF) framework. Our simulation results demonstrate
the algorithm's ability to detect multiple targets from two classes while rejecting clutter.
Correlation filters (CFs) have been widely used for detecting and recognizing patterns in 2-D images. These
filters are designed to yield sharp correlation peaks for desired objects while exhibiting a low response to clutter
and background. CFs are designed using training images that resemble the object of interest. However, it is not
clear what the background of these training images should be. Some methods use a white background, while
others use the mean value of the target region. Choosing an appropriate background is important because
a mismatched background may cause the filter to discriminate based on the background rather than the target
pattern. In this paper we discuss a method to choose training images, and we compare the effects of different
backgrounds on the filter performance in different scenarios using both synthetic (pixels in the background
chosen from a Gaussian distribution) and real backgrounds (photographs of different sceneries) for testing. In
our comparisons we do not restrict ourselves to training backgrounds with constant pixel intensity, but
also include training backgrounds with varying pixel intensity whose mean and standard deviation
equal those of the target region. Experiments show that, without prior knowledge
of the background in the test images, training the filters using a background matched to the mean and variance of
all the desired objects tends to give better results.
Data acquisition with multiple sensors requires accurate registration in both time and space for effective data
fusion. This paper presents a system that permits synchronization for GPS and video, but it can be expanded
to include other sensors (e.g., infrared, SAR). We begin with a discussion of using a pulse-per-second
signal for synchronization. We then describe the workings of the Global Positioning System. We compare
different autopilots, cameras, frame capture devices, processors, operating systems, and data storage options that can be
used for the system, and provide our rationale for device selection. We introduce the overall idea of video
compression, briefly summarize the different methods, explain our choice of MPEG-2, describe the metadata
format, compare the choices of encoders, and explain the MPEG transport stream. We present the benefits of
using certain transmission frequencies and the legal restrictions, and give our frequency choices based on available
transmitters and receivers. We finish with a summary of the entire system.
Advancements in portability and performance are described for a fiber optic sensor readout system capable of
monitoring wavelength-multiplexed sensors. The handheld sensor interrogator was designed to readily interface with
conglomerate sensor systems as a smart sensor node and to process all spectral data onboard in real time at 20
Hz in its ±13 picometer resolution mode. Portability was demonstrated by flying the system on a miniature aerial
vehicle (MAV) which collected strain and temperature flight data for broadcast to a ground station. Additional
improvements upgraded the sensor measurement speed by two orders of magnitude.