A ratio-based change detection method known as multiratio fusion (MRF) is proposed and tested. The MRF framework builds on other change detection components proposed in this work: dual ratio (DR) and multiratio (MR). The DR method involves two ratios coupled with adaptive thresholds to maximize detected changes and minimize false alarms. The use of two ratios is shown to outperform the single ratio case when the means of the image pairs are not equal. MR change detection builds on the DR method by including negative imagery to produce four total ratios with adaptive thresholds. Inclusion of negative imagery is shown to improve detection sensitivity and to boost detection performance in certain target and background cases. MRF further expands this concept by fusing together the ratio outputs using a routine in which detections must be verified by two or more ratios to be classified as a true changed pixel. The proposed method is tested with synthetically generated test imagery and real datasets with results compared to other methods found in the literature. DR is shown to significantly outperform the standard single ratio method. MRF produces excellent change detection results that exhibit up to a 22% performance improvement over other methods from the literature at low false-alarm rates.
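The two-ratio test at the core of the DR method can be sketched as follows. The fixed threshold, epsilon guard, and toy image chips are illustrative stand-ins; the actual method uses adaptive thresholds.

```python
# Hedged sketch of the dual-ratio (DR) test: a pixel is declared changed
# when either the brightening or the darkening ratio exceeds a threshold.
# The fixed threshold and epsilon guard are illustrative assumptions.

def dual_ratio_change(before, after, threshold=1.5, eps=1e-6):
    """Symmetric two-ratio change test over two co-registered chips."""
    changed = []
    for b_row, a_row in zip(before, after):
        row = []
        for b, a in zip(b_row, a_row):
            r1 = (a + eps) / (b + eps)  # brightening ratio
            r2 = (b + eps) / (a + eps)  # darkening ratio
            row.append(max(r1, r2) > threshold)
        changed.append(row)
    return changed

before = [[100, 100], [100, 100]]
after  = [[100, 210], [40, 100]]
print(dual_ratio_change(before, after))
# [[False, True], [True, False]] -> brightened and darkened pixels both fire
```

Because both ratio directions are tested, the detector fires symmetrically whether a pixel brightened or darkened, which is the advantage the dual-ratio formulation claims over the single-ratio case when the image-pair means differ.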
Image change detection has long been used to detect significant events in aerial imagery, such as the arrival or departure
of vehicles. Usually only the underlying structural changes are of interest, particularly for movable objects, and the
challenge is to differentiate the changes of intelligence value (change detections) from incidental appearance changes (false
detections). However, existing methods for automated change detection continue to be challenged by nuisance variations in
operating conditions such as sensor (camera exposure, camera viewpoints), targets (occlusions, type), and the environment
(illumination, shadows, weather, seasons). To overcome these problems, we propose a novel vehicle change detection
method based on the detection response maps (DRM). The detector serves as an advanced filter that normalizes the images
being compared specifically for object-level change detection (OLCD). In contrast to current methods that compare pixel
intensities, the proposed DRM-OLCD method is more robust to nuisance changes and variations in image appearance. We
demonstrate object-level change detection for vehicles appearing and disappearing in electro-optical (EO) visual imagery.
We present a 3D change detection framework designed to support various applications in changing environmental conditions. Previous efforts have focused on image filtering techniques that manipulate the intensity values of the image to create a more controlled, albeit unnatural, illumination. Since most applications require detecting changes in a scene irrespective of the time of day and prevailing lighting conditions, image filtering algorithms fail to suppress the illumination differences enough for Background Model (BM) subtraction to be effective. Our approach eliminates the illumination challenges from the change detection problem entirely. The algorithm is based on our previous work, in which we demonstrated a capability to reconstruct a surrounding environment at near real-time processing speeds. The algorithm, termed Dense Point-Cloud Representation (DPR), allows for a 3D reconstruction of a scene using only a single moving camera. To eliminate any effects of illumination change, we convert each point-cloud model into a 3D binary voxel grid: a `1' is assigned to voxels containing points from the model, while a `0' is assigned to voxels with no points. We detect the changes between the two environments by volumetrically subtracting the registered 3D binary voxel models. This process is extremely computationally efficient owing to the logic-based operations available when handling binary models. We evaluate the 3D change detection framework by experimenting on the same scene with aerial imagery captured at various times.
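The volumetric subtraction step lends itself to a compact sketch. The voxel size, toy point sets, and set-based grid representation below are illustrative assumptions; the symmetric difference (XOR) of the two binary grids is the logic-based operation the abstract describes.

```python
# Minimal sketch of the binary-voxel change step, assuming the two point
# clouds are already registered. Occupied ('1') voxels are stored as a set
# of integer (i, j, k) indices; volumetric subtraction of two binary grids
# then reduces to a logical XOR (set symmetric difference).

def voxelize(points, voxel_size=1.0):
    """Quantize 3D points into the set of occupied-voxel indices."""
    return {tuple(int(c // voxel_size) for c in p) for p in points}

def voxel_changes(model_a, model_b):
    """Voxels occupied in exactly one model, i.e. A XOR B."""
    return model_a ^ model_b

scene_before = voxelize([(0.2, 0.1, 0.0), (1.4, 0.3, 0.0)])
scene_after  = voxelize([(0.2, 0.1, 0.0), (2.7, 0.3, 0.0)])
print(sorted(voxel_changes(scene_before, scene_after)))
# [(1, 0, 0), (2, 0, 0)] -> voxel (1,0,0) vanished, (2,0,0) appeared
```

Because only voxel occupancy is compared, the result is insensitive to the intensity and illumination differences that defeat image-space subtraction.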
Automated target tracking with wide area motion imagery (WAMI) presents significant challenges due to the low-resolution, low-framerate data provided by the sensing platform. This paper discusses many of these challenges with a focus on the use of features to aid the tracking process. Results illustrate the potential benefits obtained when combining target kinematic and feature data, but also demonstrate the difficulties encountered when tracking low-contrast targets, targets whose appearance models are similar to their background, and targets in conditions where traffic density is relatively high. Other difficulties include target occlusion and move-stop-move events, which are mitigated with a new composite detection method that seamlessly integrates feature and kinematic data. A real WAMI dataset was used in this study, and specific vignettes will be discussed. A single-target tracker is implemented to demonstrate the concepts and provide results.
Target ambiguity is a major problem in dense urban tracking environments with closely spaced targets. Target
classification, action recognition, and 3D feature-aiding can be used to resolve this ambiguity in situations where
traditional 2D feature-aiding techniques alone are ineffective. Knowledge of target location, track state, and
sensor orientation can be coupled with these techniques to improve accuracy and tracking performance even
further. A combination of synthetic and real data is used to demonstrate these concepts.
An architecture and implementation are presented for persistent, hyperspectral, adaptive, multi-modal,
feature-aided tracking within the urban context. A novel remote-sensing imager has been designed which employs
a micro-mirror array at the focal plane for per-pixel adaptation. A suite of end-to-end synthetic experiments have
been conducted, which include high-fidelity moving-target urban vignettes, DIRSIG hyperspectral rendering, and
full image-chain treatment of the prototype adaptive sensor. Corresponding algorithm development has focused
on: motion segmentation, spectral feature modeling, classification, fused kinematic/spectral association, and
adaptive sensor feedback/control.
A novel multi-object spectrometer (MOS) is being explored for use as an adaptive performance-driven sensor that tracks
moving targets. Developed originally for astronomical applications, the instrument utilizes an array of micromirrors to
reflect light to a panchromatic imaging array. When an object of interest is detected, the individual micromirrors imaging
the object are tilted to reflect the light to a spectrometer to collect a full spectrum. This paper will present example
sensor performance from empirical data collected in laboratory experiments, as well as our approach in designing optical
and radiometric models of the MOS channels and the micromirror array. Simulation of moving vehicles in a high-fidelity,
hyperspectral scene is used to generate a dynamic video input for the adaptive sensor. Performance-driven
algorithms for feature-aided target tracking and modality selection exploit multiple electromagnetic observables to track
moving vehicle targets.
Hyperspectral imagery (HSI) data has proven useful for discriminating targets; however, the relatively slow speed at
which HSI data is gathered for an entire frame reduces the usefulness of fusing this information with grayscale video. A
new sensor under development has the ability to provide HSI data for a limited number of pixels while providing
grayscale video for the remainder of the pixels. The HSI data is co-registered with the grayscale video and is available
for each frame. This paper explores the exploitation of this new sensor for target tracking. The primary challenge of
exploiting this new sensor is to determine where the gathering of HSI data will be the most useful. We wish to optimize
the selection of pixels for which we will gather HSI data. We refer to this as spatial sampling. It is proposed that
spatial sampling be solved using a utility function where pixels receive a value based on their nearness to a target of
interest (TOI). The TOIs are determined from the tracking algorithm providing a close coupling of the tracking and the
sensor control. The relative importance or weighting of the different types of TOI will be determined by a genetic
algorithm. Tracking performance of the spatially sampled tracker is compared to both tracking with no HSI data and
although physically unrealizable, tracking with complete HSI data to demonstrate its effectiveness within the upper and lower performance bounds.
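A minimal sketch of the utility-function idea follows, assuming an exponential decay with distance to the nearest TOI and a fixed pixel budget. Both are invented for illustration; in the paper the relative TOI weights come from the genetic algorithm.

```python
import math

# Hypothetical sketch of utility-based spatial sampling: each pixel's
# utility decays with distance to the nearest target of interest (TOI),
# and the HSI budget is spent on the highest-utility pixels. The decay
# constant and budget are illustrative, not taken from the paper.

def utility_map(width, height, tois, decay=0.5):
    """Per-pixel utility: exp(-decay * distance to nearest TOI)."""
    grid = {}
    for x in range(width):
        for y in range(height):
            d = min(math.hypot(x - tx, y - ty) for tx, ty in tois)
            grid[(x, y)] = math.exp(-decay * d)
    return grid

def select_hsi_pixels(width, height, tois, budget):
    """Choose the `budget` pixels with the greatest utility."""
    grid = utility_map(width, height, tois)
    return sorted(grid, key=grid.get, reverse=True)[:budget]

pixels = select_hsi_pixels(8, 8, tois=[(2, 2)], budget=5)
print(pixels)  # the TOI pixel is selected first, then its nearest neighbors
```

Because the TOIs come from the tracker itself, re-evaluating this map each frame gives the close coupling of tracking and sensor control the abstract describes.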
Hyperspectral images provide scientists and engineers with the capability of precise material identification in
remote sensing applications. One can leverage this data for precise track identification (ID) and incorporate the
high-confidence ID in the tracking process. Our previous work demonstrates that hyperspectral-aided tracking
outperforms kinematic-only tracking where multiple ambiguous situations exist. We develop a novel gating concept
for hyperspectral measurements, analogous to gating on the Mahalanobis distance computed
from the Kalman residuals. Our spectral gating definition is based on the distance between the spectral distribution
of the class ID of a track and the spectral distribution of the class ID resulting from the classification
of a measurement. We further incorporate the distance between each class distribution (in spectral space) in
the track association portion of our hyperspectral-aided tracker. Since functional forms of the joint probability
distribution function do not exist, similarity measures such as the Kullback-Leibler divergence or Bhattacharyya
distance cannot be used. Instead, we compute all pair-wise distances between all samples of the two classes and
then summarize these distances in a meaningful way. This article presents our novel spectral gating approach
and its use in track association. It further explores different similarity measures and their effect on spectral
gating and track association.
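The pairwise-distance summary can be sketched as follows, assuming Euclidean spectral distance and a mean summary. Both are stand-ins, since the article explores several similarity measures and summaries.

```python
import math

# Illustrative sketch of the pairwise-distance idea: with no closed-form
# joint distribution available, the separation between two spectral
# classes is summarized from all sample-to-sample distances. Euclidean
# distance and a mean summary are assumptions for this sketch.

def spectral_distance(s1, s2):
    """Euclidean distance between two spectra with equal band counts."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(s1, s2)))

def class_separation(class_a, class_b):
    """Mean of all pairwise distances between samples of two classes."""
    dists = [spectral_distance(a, b) for a in class_a for b in class_b]
    return sum(dists) / len(dists)

def spectral_gate(track_class, meas_class, gate):
    """Admit the measurement only if the class separation is small."""
    return class_separation(track_class, meas_class) <= gate

track_samples = [[0.2, 0.5, 0.9], [0.3, 0.4, 0.8]]
meas_samples  = [[0.25, 0.45, 0.85]]
print(spectral_gate(track_samples, meas_samples, gate=0.2))  # True
```

The same pairwise summary can then be reused as a cost term in the track-association stage, mirroring how the Mahalanobis gate feeds kinematic association.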
The various asymmetrical threats in the urban environment have driven the need for persistent surveillance
and methods to exploit the data provided by passive sensing platforms. The primary goal is to track vehicles
as they move through the urban environment. The rather large number of ambiguous tracking events requires
incorporation of target features to maintain track purity. This paper discusses a feature extraction technique,
used in what will be referred to as "feature-aided" tracking, to mitigate some of the tracking issues in this
environment (e.g., rotation and illumination invariance, partial occlusion, and
move-stop-move transitions). The feature extraction
method applied is loosely based on the SPIN histogram method of applying a two-dimensional histogram relative
to the center of an object. This paper focuses on applying a simplified version of the intensity-based two-dimensional
histogram and gradient-based two-dimensional histogram introduced by the works of Mikolajczyk
and Schmid, and Lazebnik, Schmid, and Ponce. Instead of applying the matching technique on a still frame
subjected to various image transformations, we will apply this technique to sequential frames of imagery in an
urban environment. This approach is intended to be the first of several steps towards eventually integrating a
feature-aided tracking option as one of multiple sources of measurement association. The preliminary results
show potential signs of success especially with rotation-invariance and move-stop-move transitions; however,
additional effort is required for illumination invariance, partial occlusion, and disambiguation of
close proximity objects.
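The simplified intensity-based two-dimensional histogram can be sketched roughly as below. The bin counts, the hard binning, and the toy image chip are assumptions; the cited descriptors use soft-weighted bins.

```python
import math

# A simplified sketch of a SPIN-style descriptor: a 2D histogram over
# (distance from the object center, pixel intensity). Because only the
# distance from the center matters, the descriptor is rotation-invariant
# by construction. Bin counts and the toy chip are assumptions.

def spin_histogram(chip, center, dist_bins=3, int_bins=4,
                   max_dist=None, max_int=255):
    """Accumulate pixels into (distance-ring, intensity) bins."""
    cy, cx = center
    if max_dist is None:
        max_dist = math.hypot(len(chip), len(chip[0]))
    hist = [[0] * int_bins for _ in range(dist_bins)]
    for y, row in enumerate(chip):
        for x, val in enumerate(row):
            d = math.hypot(y - cy, x - cx)
            di = min(int(d / max_dist * dist_bins), dist_bins - 1)
            ii = min(int(val / (max_int + 1) * int_bins), int_bins - 1)
            hist[di][ii] += 1
    return hist

chip = [[10, 10, 10],
        [10, 200, 10],
        [10, 10, 10]]
print(spin_histogram(chip, center=(1, 1), max_dist=2.0))
# [[0, 0, 0, 1], [4, 0, 0, 0], [4, 0, 0, 0]]
```

Comparing such histograms between sequential frames (rather than between transformed copies of a still frame) is the adaptation this abstract proposes.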
This research investigates the impact of scene context knowledge on tracking vehicles in an urban environment
based on video image change detection. The scene context consists of knowledge of the road network and
3D building properties. Airborne sensor position information relative to a 3D model of the context enables
calculation of building occlusions of ground locations. From this context, probability-of-detection maps that
incorporate regions of interest and smoothed lines of sight are developed to assist the change detection algorithm
in reducing false alarms.
A variety of unmanned air vehicles (UAVs) have been developed for both military and civilian use. The typical large
UAV is state owned, whereas small UAVs (SUAVs) may be in the form of remote-controlled aircraft that are
widely available. The potential threat of these SUAVs to both the military and civilian populace has led to research
efforts to counter these assets via track, ID, and attack. Difficulties arise from the small size and low radar cross section
when attempting to detect and track these targets with a single sensor such as radar or video cameras. In addition, clutter
objects make accurate ID difficult without very high resolution data, leading to the use of an acoustic array to support
this function. This paper presents a multi-sensor architecture that exploits sensor modes including EO/IR cameras, an
acoustic array, and future inclusion of a radar. A sensor resource management concept is presented along with
preliminary results from three of the sensors.
Target tracking in an urban environment presents a wealth of ambiguous tracking scenarios that cause a kinematic-only tracker to fail. Partial or full occlusions in areas of tall buildings are particularly problematic, as there is often no way to correctly identify the target with only kinematic information. Feature-aided tracking attempts to resolve problems with a kinematic-only tracker by extracting features from the data. In the case of panchromatic video, the features are often histograms; the same is true for color video data. Where targets are uniquely different colors, more typical feature-aided trackers may perform well. However, a typical urban setting has targets of similar size, shape, and color, and
more typical feature-aided trackers have no hope of resolving many of the ambiguities we face. We present a novel feature-aided tracking algorithm combining two sensor modes: panchromatic video data and hyperspectral imagery. The hyperspectral data is used to provide a unique fingerprint for each target of interest, where that fingerprint is the set of features used in our feature-aided tracker. Results indicate an impressive 19% gain in correct track ID with our
hyperspectral feature-aided tracker compared to the baseline performance with a kinematic-only tracker.
Video tracking is used in military operations and homeland defense. Multiple cameras are mounted on an airplane that flies in a circle and points to a central location. The images are pre-registered, and a single large image is sent to a ground station at a rate of one frame per second. The first step needed for tracking is the generation of measurements: the video undergoes additional registration and processing to produce multi-frame motion detections, which are passed to the tracking algorithm. Tracking through an urban environment has its own unique challenges. Targets frequently cross paths, go behind one another, and go behind buildings or into shadowed areas. Additional challenges include move-stop-move events, parallax, and track association with highly similar targets. These challenges need to be overcome with up to a thousand vehicles, so processing speed is crucial. The project is open-source to aid in overcoming these technical challenges. Alternative trackers (IMM, MHT), features, association methods, track initiation and deletion (M/N or LU), state variables, and other specialized routines (for move-stop-move, parallax, etc.) will be tried and analyzed with representative data. By keeping the project open-source, any ideas to improve the system can be easily implemented and analyzed. This paper presents current findings and the state of the project.
The ability of many insects, especially moths, to locate either food or a member of the opposite sex is an
amazing achievement. There are numerous scenarios where having this ability embedded into ground-based
or aerial vehicles would be invaluable. This paper presents results from a 3-D computer simulation of an
Unmanned Aerial Vehicle (UAV) autonomously tracking a chemical plume to its source. The simulation study
includes a simulated dynamic chemical plume, a 6-degree-of-freedom nonlinear aircraft model, and a bio-inspired
navigation algorithm. The emphasis of this paper is the development and analysis of the navigation algorithm.
The foundation of this algorithm is a fuzzy controller designed to categorize where in the plume the aircraft is
located: coming into the plume, in the plume, exiting the plume, or out of the plume.
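The four-way categorization might be sketched with triangular membership functions as below. The breakpoints and the concentration/trend inputs are invented for illustration and are not taken from the paper's controller.

```python
# Hedged sketch of the plume-state idea behind such a fuzzy controller:
# grade the aircraft's situation from the current concentration reading
# and its recent trend, then take the strongest category. All breakpoints
# are illustrative assumptions; a real controller would also aggregate
# rules to produce steering commands, not just a label.

def tri(x, a, b, c):
    """Triangular fuzzy membership peaking at b, zero outside (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def plume_memberships(concentration, trend):
    """Degree of membership in each of the four plume states."""
    low = max(0.0, 1.0 - concentration / 0.1)  # weak-signal degree
    return {
        "out of plume":      low,
        "coming into plume": (1 - low) * tri(trend, 0.0, 0.5, 1.0),
        "in plume":          (1 - low) * tri(trend, -0.5, 0.0, 0.5),
        "exiting plume":     (1 - low) * tri(trend, -1.0, -0.5, 0.0),
    }

def plume_state(concentration, trend):
    """Defuzzify crudely by picking the strongest category."""
    m = plume_memberships(concentration, trend)
    return max(m, key=m.get)

print(plume_state(0.4, 0.3))   # coming into plume
print(plume_state(0.01, 0.0))  # out of plume
```

Graded memberships rather than crisp thresholds let the navigation logic blend behaviors smoothly as the aircraft crosses the plume boundary.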
An advance in the development of smart munitions entails autonomously modifying target selection during flight in order to maximize the value of the target being destroyed. A unique guidance law can be constructed that exploits both attribute and kinematic data obtained from an onboard video sensor. An optimal path planning algorithm has been developed with the goals of avoiding obstacles and maximizing the value of the target impacted by the munition. Target identification and classification provide a basis for target value, which is used in conjunction with multi-target tracks to determine an optimal waypoint for the munition. A dynamically feasible trajectory is computed to provide constraints on the waypoint selection. Results demonstrate the ability of the autonomous system to avoid moving obstacles and revise target selection in flight.
The ability to track closely spaced targets in clutter is essential in support of military operations. This paper presents a Multiple Hypothesis Tracking (MHT) algorithm that uses an efficient structure to represent the dependency that naturally arises between targets due to the joint observation process, and an Integral Square Error (ISE) mixture reduction algorithm for hypothesis control. The resulting algorithm, denoted MHT with ISE Reduction (MISER), is tested against performance metrics including track life, coalescence, and track swap. The results demonstrate track-life performance similar to that of ISE-based methods in the single-target case, and a significant improvement in the track swap metric due to the preservation of correlation between targets. The finding that correlation reduces track-life performance for formation targets requires further investigation, although it appears to demonstrate that the inherent coupling of dynamics noise for such problems eliminates much of the benefit of representing correlation due only to the joint observation process.