Hyperspectral exploitation systems ingest a set of spectral imagery, possibly together with a priori information
such as a supplied library of target spectral signatures, and output a series of responses.
These responses must be scored for their accuracy against known target locations in the image set, from which algorithm
performance is then determined. We propose, implement, and demonstrate a new environment for visualizing this
process, which will aid not only the evaluator but also the algorithm developer in better understanding, characterizing,
and improving system performance, be it that of an anomaly detection, change detection, or material identification system.
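The scoring step described above, matching algorithm responses against known target locations, can be sketched as a simple spatial matching procedure. This is an illustrative assumption, not the evaluation environment's actual implementation; the function name, the greedy nearest-match rule, and the `match_radius` parameter are all hypothetical.

```python
import math

def score_detections(detections, ground_truth, match_radius=3.0):
    """Greedily match declared detection locations to known target
    locations within match_radius pixels, then tally a probability of
    detection (Pd) and a false-alarm count."""
    unmatched_truth = list(ground_truth)
    true_positives = 0
    for det in detections:
        # Find the nearest still-unmatched ground-truth location.
        best = None
        best_dist = match_radius
        for gt in unmatched_truth:
            d = math.dist(det, gt)
            if d <= best_dist:
                best, best_dist = gt, d
        if best is not None:
            unmatched_truth.remove(best)
            true_positives += 1
    false_alarms = len(detections) - true_positives
    pd = true_positives / len(ground_truth) if ground_truth else 0.0
    return pd, false_alarms

# Example: three declared detections scored against two known targets.
pd, fa = score_detections([(10, 10), (50, 52), (90, 90)],
                          [(11, 10), (50, 50)])
```

Both known targets are matched within the radius and the third response counts as a false alarm, giving `pd == 1.0` and `fa == 1`. A real evaluation would sweep a response-confidence threshold to trace out an ROC curve from such counts.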
The Defense Advanced Research Projects Agency (DARPA) Video Verification of Identity (VIVID) program
aims to develop the best video tracker ever. This goal is pursued through a philosophy of on-the-fly
target modeling and the use of three distinct modules: a multiple-target tracker, a confirmatory identification
module, and a collateral damage avoidance/moving target detection module. Over the two years of VIVID
Phase I, progress appraisal of the ATR-like confirmatory identification module was provided to DARPA by the
Air Force Research Laboratory Comprehensive Performance Assessment of Sensor Exploitation (COMPASE)
Center through regular evaluations. This document begins with an overview of the VIVID system and its
approach to solving the multiple-target tracking problem. The data collected under VIVID auspices
and their use in the evaluation are then surveyed, along with the operating conditions relevant to confirmatory
identification. Finally, the evaluation structure is presented in detail, including metrics, experiment design,
experiment construction techniques, and support tools.
There is a need for persistent-surveillance assets to capture high-resolution, three-dimensional data for use in assisted target recognition systems. Passive electro-optic imaging systems are presently limited in that they provide only 2-D measurements. We describe a methodology and system that use existing technology to obtain 3-D information from disparate 2-D observations; these data can then be used to locate and classify objects under obscuration and noise. We propose a novel methodology for 3-D object reconstruction based on established confocal microscopy techniques. A moving airborne sensing platform captures a sequence of geo-referenced, electro-optic images. Confocal processing of these data can synthesize a large virtual lens with an extremely sharp (small) depth of focus, yielding a highly discriminating 3-D data collection capability based on 2-D imagery. This allows existing assets to be used to obtain high-quality 3-D data, owing to the fine z-resolution.

This paper presents a stochastic algorithm for reconstruction of a 3-D target from a sequence of affine projections. We iteratively gather 2-D images over a known path, detect target edges, and aggregate the edges in 3-D space. In the final step, an expectation is computed, yielding an estimate of the target structure.
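The edge-aggregation and expectation steps above can be illustrated with a voting sketch: each candidate voxel is projected into every view through a known affine projection, and the expectation is the fraction of views in which that projection lands on a detected edge. The 2x4 projection matrices, voxel grid, and `vote_threshold` are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

def reconstruct_voxels(edge_maps, projections, grid, vote_threshold=0.8):
    """Keep a candidate voxel if a sufficient fraction of views project
    it onto a detected edge.
    edge_maps: list of 2-D boolean edge-detection arrays, one per view.
    projections: list of 2x4 affine matrices mapping homogeneous 3-D
    points to 2-D pixel coordinates (u, v).
    grid: (N, 3) array of candidate voxel centers."""
    votes = np.zeros(len(grid))
    homog = np.hstack([grid, np.ones((len(grid), 1))])  # (N, 4)
    for edges, P in zip(edge_maps, projections):
        uv = homog @ P.T                        # (N, 2) pixel coords
        rows = np.round(uv[:, 1]).astype(int)
        cols = np.round(uv[:, 0]).astype(int)
        inside = ((rows >= 0) & (rows < edges.shape[0]) &
                  (cols >= 0) & (cols < edges.shape[1]))
        hits = np.zeros(len(grid), dtype=bool)
        hits[inside] = edges[rows[inside], cols[inside]]
        votes += hits
    # Expectation over views: fraction of projections landing on an edge.
    expectation = votes / len(edge_maps)
    return grid[expectation >= vote_threshold]

# Toy example: two identical views of an edge at pixel (1, 1), using
# orthographic-style affine projections that drop the z coordinate.
edges = np.zeros((5, 5), dtype=bool)
edges[1, 1] = True
P = np.array([[1.0, 0, 0, 0], [0, 1, 0, 0]])
candidates = np.array([[1.0, 1, 0], [3.0, 3, 0]])
kept = reconstruct_voxels([edges, edges], [P, P], candidates)
```

In the toy example only the candidate at (1, 1, 0) projects onto the detected edge in both views, so it is the only voxel retained. A stochastic variant would perturb the projections or edge locations and average the resulting expectations.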