The global war on terror has plunged US and coalition forces into a battle space requiring the continuous adaptation of
tactics and technologies to cope with an elusive enemy. As a result, technologies that enhance the intelligence,
surveillance, and reconnaissance (ISR) mission and make the warfighter more effective are attracting increased interest.
In this paper we show how a new generation of smart cameras built around foveated sensing makes possible a powerful
ISR technique termed Cascaded ATR. Foveated sensing is an innovative optical concept in which a single aperture
captures two distinct fields of view. In Cascaded ATR, foveated sensing provides a coarse-resolution, persistent-surveillance,
wide field of view (WFOV) detector that accomplishes detection-level perception. At the same time, within the foveated
sensor, these detection locations are passed as cues to a steerable, high-fidelity, narrow field of view (NFOV) detector
that performs recognition-level perception. Two new ISR mission scenarios utilizing Cascaded ATR are presented.
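As a sketch of the cueing handoff described above (not the actual sensor pipeline), the following fragment shows a WFOV detection stage producing pixel cues that are converted into steering angles for a hypothetical NFOV channel; the threshold, field-of-view values, and function names are illustrative assumptions.

```python
import numpy as np

def wfov_detect(frame, threshold=0.8):
    """Detection-level perception on the coarse WFOV image:
    return pixel locations whose intensity exceeds a threshold."""
    ys, xs = np.where(frame > threshold)
    return list(zip(ys.tolist(), xs.tolist()))

def cue_nfov(wfov_loc, wfov_shape, fov_deg=(40.0, 30.0)):
    """Convert a WFOV pixel location into (azimuth, elevation)
    steering angles for the NFOV channel sharing the same aperture.
    The 40x30 degree WFOV is an assumed, illustrative value."""
    y, x = wfov_loc
    h, w = wfov_shape
    az = (x / (w - 1) - 0.5) * fov_deg[0]
    el = (y / (h - 1) - 0.5) * fov_deg[1]
    return az, el

# Toy WFOV frame with one bright target
frame = np.zeros((30, 40))
frame[10, 25] = 1.0

cues = wfov_detect(frame)
angles = [cue_nfov(c, frame.shape) for c in cues]
# each cue would now steer the NFOV detector for recognition-level ATR
```

In a real foveated sensor the cue-to-steering mapping would come from the optical calibration; here it is a simple linear pixel-to-angle conversion.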

Critical to a large portion of mission scenarios within the intelligence, surveillance, and reconnaissance (ISR) sensor
community is the challenge of ensuring that designated targets of interest are reliably tracked in dynamic environments.
Current generation trackers frequently lose track when targets become temporarily obscured, shadowed, or are in close
proximity to other objects. In this paper we propose and demonstrate a generic confirmation of identity module that is
based on the Distance Classifier Correlation Filter (DCCF) and is applicable to a variety of tracking technologies. The
prevailing idea of this technique is that during a tracker's valid-track phase, learning exemplars are provided to a filter
building process, and templates of the tracked targets are created online in real time. Differences in orientation are handled
through the creation of synthetic views using real target views and image warping techniques. After obscuration and/or
during periods of track ambiguity, each new candidate track is matched against the prior valid track(s) using DCCF
matching to resolve uncertainty.
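The learn-then-confirm flow above can be sketched as follows. For brevity this illustration substitutes a plain normalized cross-correlation score for the actual DCCF metric, omits the synthetic-view warping, and uses hypothetical class and function names.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation score between two equal-size chips
    (a simplified stand-in for the DCCF distance metric)."""
    a = (a - a.mean()) / (a.std() + 1e-9)
    b = (b - b.mean()) / (b.std() + 1e-9)
    return float((a * b).mean())

class IdentityConfirmer:
    """Collect target exemplars during the valid-track phase, then
    score new candidate chips against the stored templates to
    resolve track ambiguity after an obscuration."""
    def __init__(self):
        self.templates = []

    def learn(self, chip):
        # called while the tracker reports a valid track
        self.templates.append(chip)

    def confirm(self, candidate):
        # best match over all stored templates
        return max(ncc(candidate, t) for t in self.templates)

rng = np.random.default_rng(0)
target = rng.normal(size=(16, 16))
confirmer = IdentityConfirmer()
confirmer.learn(target)

# after obscuration: score a noisy re-detection of the target vs. a confuser
same = confirmer.confirm(target + 0.1 * rng.normal(size=(16, 16)))
other = confirmer.confirm(rng.normal(size=(16, 16)))
```

A thresholded comparison of such scores is one simple way a candidate track could be accepted or rejected against the prior valid track.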

Underwater mine identification remains a critical technology pursued aggressively by the Navy for fleet protection.
As such, new and improved techniques must continue to be developed to provide measurable increases in mine
identification performance and noticeable reductions in false alarm rates. In this paper we show how recent advances in
the Volume Correlation Filter (VCF) developed for ground-based LIDAR systems can be adapted to identify targets in
underwater LIDAR imagery. Current automated target recognition (ATR) algorithms for underwater mine identification
employ spatial-domain three-dimensional (3D) fitting of shape models to LIDAR data to identify common mine shapes
consisting of the box, cylinder, hemisphere, truncated cone, wedge, and annulus. VCFs provide a promising alternative
to these spatial techniques by correlating 3D models against the 3D rendered LIDAR data.
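A minimal sketch of the volume-correlation idea, assuming the scene and model have already been voxelized onto a common grid; this uses a plain FFT-based circular correlation rather than the full VCF formulation, and the sizes and values are toy assumptions.

```python
import numpy as np

def volume_correlate(scene, model):
    """Correlate a voxelized 3D model against a voxelized LIDAR scene
    via the FFT (circular cross-correlation); returns the full
    correlation volume over all 3D shifts."""
    F = np.fft.fftn(scene)
    H = np.fft.fftn(model, s=scene.shape)  # zero-pad model to scene size
    return np.real(np.fft.ifftn(F * np.conj(H)))

# Toy scene: a 3x3x3 "box" target embedded in a 16^3 volume
scene = np.zeros((16, 16, 16))
scene[5:8, 6:9, 7:10] = 1.0
model = np.ones((3, 3, 3))

corr = volume_correlate(scene, model)
peak = np.unravel_index(np.argmax(corr), corr.shape)
# peak falls at the target's corner voxel (5, 6, 7)
```

The correlation peak localizes the model within the scene; in practice one such correlation would be run per candidate mine shape (box, cylinder, hemisphere, etc.) and per sampled orientation.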

The impact of wavelet-based compression on automatic target recognition (ATR) is investigated by applying wavelet
compression to test scenes. The correlation algorithms known as the maximum average correlation height (MACH) filter
and the distance classifier correlation filter (DCCF) are used for ATR. The impact of compressing the correlation filters
is also studied. The wavelet compression algorithm makes use of a progressive technique of embedded zerotree wavelet
coding followed by adaptive arithmetic coding. Two target data sets are used for testing and training in this study. The
first is composed of infrared (IR) images of a T72 tank and a BMP armored personnel carrier. The second is a set of
synthetic aperture radar (SAR) targets from the publicly released Moving and Stationary Target Acquisition and
Recognition (MSTAR) database.
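To illustrate the MACH side of such a study, the sketch below builds a simplified frequency-domain MACH-style filter (mean training spectrum divided by the average power spectrum, omitting the noise and similarity terms of the full MACH formulation) and compares peak responses; the training data are synthetic stand-ins, not the IR or MSTAR imagery, and all names are illustrative.

```python
import numpy as np

def mach_filter(train_images):
    """Simplified MACH-style filter in the frequency domain:
    mean training spectrum divided by the average power spectrum."""
    X = np.stack([np.fft.fft2(im) for im in train_images])
    m = X.mean(axis=0)                 # mean training spectrum
    D = (np.abs(X) ** 2).mean(axis=0)  # average power spectrum
    return m / (D + 1e-9)

def peak_response(image, H):
    """Apply the filter and return the peak correlation-plane value."""
    out = np.real(np.fft.ifft2(np.fft.fft2(image) * np.conj(H)))
    return float(out.max())

rng = np.random.default_rng(1)
target = rng.normal(size=(32, 32))
train = [target + 0.05 * rng.normal(size=(32, 32)) for _ in range(5)]

H = mach_filter(train)
# compare peak responses for an in-class view vs. clutter
score_target = peak_response(target, H)
score_clutter = peak_response(rng.normal(size=(32, 32)), H)
```

A compression study would re-run this comparison after wavelet-compressing the test scenes (and, separately, the filter itself) at several bit rates and track the degradation of the peak-response separation.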