Existing techniques for hyperspectral image (HSI) anomaly detection are computationally intensive, precluding real-time implementation. The high dimensionality of the spatial/spectral hypercube, together with the correlations between spectral bands, presents a significant impediment to real-time full-hypercube processing that accurately captures the underlying statistics. Traditional techniques have imposed Gaussian models, but these suffer both from modeling inaccuracies and from the significant computational cost of inverting large covariance matrices. We have developed a novel data-driven, non-parametric HSI anomaly detector with significantly reduced computational complexity and enhanced HSI modeling, providing real-time performance with detection rates that match or surpass those of existing approaches. The detector, based on the Support Vector Data Description (SVDD), provides accurate, automated modeling of multi-modal data, enabling a global background estimation technique that supports real-time operation on a standard PC platform. We have demonstrated one-second processing time on hypercubes of dimension 256×256×145, along with detection performance superior to alternate detectors. Computational performance is quantified via processing runtimes on a PC platform, and detection/false-alarm performance is described via Receiver Operating Characteristic (ROC) curve analysis of the SVDD anomaly detector versus alternate anomaly detectors.
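The SVDD decision rule sketched in this abstract can be illustrated as follows. This is a minimal, hedged sketch and not the authors' implementation: it uses a Gaussian kernel and replaces the support-vector weights (normally obtained by solving a quadratic program) with uniform weights, so the hypersphere "center" is simply the kernel-space mean of the background samples; the detection threshold is set from a quantile of the background's own scores. All dimensions and parameter values are illustrative.

```python
import numpy as np

def svdd_scores(background, pixels, sigma=1.0):
    """Distance of each pixel spectrum from the kernel-space centroid of the
    background samples. Simplified SVDD sketch: uniform weights (1/n) stand in
    for the support-vector weights a QP solver would produce."""
    def rbf(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * sigma**2))

    n = len(background)
    K_bb = rbf(background, background)
    K_pb = rbf(pixels, background)
    # ||phi(z) - c||^2 with c = mean of the mapped background samples
    return 1.0 - 2.0 * K_pb.mean(axis=1) + K_bb.sum() / n**2

rng = np.random.default_rng(0)
bg = rng.normal(0.0, 0.1, size=(200, 5))          # background spectra (toy)
test = np.vstack([rng.normal(0.0, 0.1, (5, 5)),   # background-like pixels
                  rng.normal(2.0, 0.1, (5, 5))])  # spectrally anomalous pixels
scores = svdd_scores(bg, test)
thresh = np.quantile(svdd_scores(bg, bg), 0.99)   # threshold from background
print(scores > thresh)                            # flags the anomalous pixels
```

A full SVDD would sparsify the weights so only boundary samples (support vectors) contribute, which is what makes the fast global-background variant described above feasible.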
The goal of the DARPA Video Verification of Identity (VIVID) program is to develop an automated video-based ground targeting system for unmanned aerial vehicles. The system comprises several modules that interact with each other to support tracking of multiple targets, confirmatory identification, and collateral damage avoidance. The Multiple Target Tracking (MTT) module automatically adjusts the camera pan, tilt, and zoom to support kinematic tracking, multi-target track association, and confirmatory identification. The MTT system comprises: (i) a video processor that performs moving object detection and feature extraction, including object position and velocity; (ii) a multiple hypothesis tracker that processes video processor reports to generate and maintain tracks; and (iii) a sensor resource manager that aims the sensor to improve tracking of multiple targets. This paper presents a performance assessment of the current implementation of the MTT under several operating conditions. The evaluation uses pre-recorded airborne video to assess the video tracker's ability to detect and track ground moving objects over extended periods of time. The tests cover a number of operational conditions, such as multiple targets and confusers under various levels of occlusion and target maneuverability, as well as different background conditions. The paper also describes the challenges that remain to be overcome to extend track life over long periods of time.
The goal of the DARPA Video Verification of Identity (VIVID) program is to develop an automated video-based ground targeting system for unmanned aerial vehicles that significantly improves operator combat efficiency and effectiveness while minimizing collateral damage. One of the key components of VIVID is the Multiple Target Tracker (MTT), whose main function is to track many ground targets simultaneously by slewing the video sensor from target to target and zooming in and out as necessary. The MTT comprises three modules: (i) a video processor that performs moving object detection, feature extraction, and site modeling; (ii) a multiple hypothesis tracker that processes extracted video reports (e.g. positions, velocities, features) to generate tracks of currently and previously moving targets and confusers; and (iii) a sensor resource manager that schedules camera pan, tilt, and zoom to support kinematic tracking, multiple target track association, scene context modeling, confirmatory identification, and collateral damage avoidance. When complete, VIVID MTT will enable precision tracking of the maximum number of targets permitted by sensor capabilities and by target behavior. This paper describes many of the challenges faced by the developers of the VIVID MTT component, and the solutions that are currently being implemented.
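The kinematic-tracking function of the MTT can be made concrete with a standard constant-velocity Kalman filter, the usual core of such trackers. This is a generic sketch under assumed noise parameters (`dt`, `Q`, `R` are illustrative, not VIVID values), and the multiple-hypothesis data-association layer is omitted.

```python
import numpy as np

dt = 1.0
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], float)   # constant-velocity motion model
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], float)    # video processor reports position only
Q = 0.01 * np.eye(4)                   # assumed process noise
R = 0.25 * np.eye(2)                   # assumed measurement noise

def kf_step(x, P, z):
    """One predict/update cycle: propagate the track, then correct it
    with the latest position report z."""
    x, P = F @ x, F @ P @ F.T + Q
    S = H @ P @ H.T + R                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

x, P = np.zeros(4), np.eye(4)
for t in range(10):                    # target moving at (1.0, 0.5) per frame
    z = np.array([1.0 * t, 0.5 * t])
    x, P = kf_step(x, P, z)
print(x[:2], x[2:])                    # estimated position and velocity
```

The velocity estimate, which the video processor cannot observe directly, emerges from the filter's cross-covariance between position and velocity; in the full MTT, these state estimates feed both the track-association hypotheses and the sensor resource manager's pointing decisions.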
The ability to strike moving targets with precision without putting friendly forces at risk remains an elusive goal. Unlike fixed targets, the engagement of moving vehicles requires target recognition in real time. While automatic target recognition (ATR) techniques have been pursued with vigor for more than 30 years, ATR is neither necessary nor sufficient to address this need. By taking a broader view of the problem of precision identification, we identify some promising research themes.
Today's radar exploitation systems utilize information from both Ground Moving Target Indication (GMTI) and Synthetic Aperture Radar (SAR) obtained from various airborne platforms. GMTI detects and supports the classification of moving targets, whereas SAR detects and supports the classification of stationary targets. However, there is currently no ability to integrate the information from these two classes of radar to track targets that execute sequences of move-stop-move maneuvers. The solution to this dilemma is the development of a Continuous Tracking (CT) architecture that uses distinctive GMTI and SAR features to associate stationary and moving target detections through move-stop-move maneuvers. This paper develops a theoretical model and presents corresponding numerical computations of the performance of the CT system. The theory utilizes a two-state Markov process to model successive SAR and GMTI detections, with parameters derived from typical traffic and sensor behaviors. This analysis of the sensor characteristics and the underlying traffic model provides a foundation for designing a CT system with the maximum possible performance.
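The two-state Markov model of move-stop-move behavior can be sketched as follows. The transition and detection probabilities here are illustrative placeholders, not values from the paper: state 0 is "moving" (GMTI-detectable) and state 1 is "stopped" (SAR-detectable), and the long-run per-scan detection rate follows from the chain's stationary distribution.

```python
import numpy as np

# Assumed per-scan transition probabilities of the target's motion state.
p_stop = 0.1   # P(moving -> stopped)
p_go   = 0.3   # P(stopped -> moving)
P = np.array([[1 - p_stop, p_stop],
              [p_go,       1 - p_go]])

# For a two-state chain the stationary distribution has the closed form
# pi = (p_go, p_stop) / (p_go + p_stop); it satisfies pi = pi @ P.
pi = np.array([p_go, p_stop]) / (p_go + p_stop)
assert np.allclose(pi @ P, pi)

pd_gmti, pd_sar = 0.9, 0.8   # assumed per-scan detection probabilities
p_detect = pi[0] * pd_gmti + pi[1] * pd_sar   # long-run detection rate
print(pi, p_detect)
```

In a CT performance analysis, this long-run rate (and the dwell-time statistics of each state) bounds how often the tracker receives an observation of either kind through a move-stop-move maneuver.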
ARPA is currently sponsoring five institutions to perform research related to RADIUS. The efforts are primarily addressing the problems of semi- and fully automatic site model construction and change detection. Brief descriptions of the work at each institution are presented.
Photogrammetry and computer vision are closely related disciplines with differing goals. This paper explores the ramifications of those differences with an eye toward identifying opportunities for leveraging the contributions of both fields. We identify some of the thorny issues that computer vision researchers are wrestling with and point out some of the areas where advances in photogrammetry can help. In most cases, further progress will require a shift from the conventional techniques of photogrammetry to ones that are more compatible with the ground-level, real-time, and fully automated constraints that are emphasized in computer vision.
RCDE is a software environment for the development of image understanding algorithms. The application focus of RCDE is on image exploitation where the exploitation tasks are supported by 2D and 3D models of the geographic site being analyzed. An initial prototype for RCDE is SRI's Cartographic Modeling Environment (CME). This paper reviews the CME design and illustrates the application of CME to site modeling scenarios.