We present a novel method to quickly detect and track objects of low resolution within an image frame by comparing
dense, oriented gradient features at multiple scales within an object chip. The proposed method uses vector correlation
between sets of oriented Haar filter responses from within a local window and an object library to create similarity
measures, where peaks indicate high object probability. Interest points are chosen based on object shape and size so that
each point represents both a distinct spatial location and the shape segment of the object. Each interest point is then
independently searched in subsequent frames, where multiple similarity maps are fused to create a single object
probability map. This method executes in real time by reducing feature calculations and approximations using box
filters and integral images. We achieve invariance to rotation and illumination because we calculate interest point orientation and normalize the feature vector scale. The method creates a feature set from a small, localized area,
allowing for accurate detections in low resolution scenarios. This approach can also be extended to include the detection
of partially occluded objects through calculating individual interest point feature vector correlations and clustering points
together. We have tested the method on a subset of the Columbus Large Image Format (CLIF) 2007 dataset, which
provides various low-pixel-on-object moving and stationary vehicles under varying operating conditions. This method provides accurate results with minimal parameter tuning, enabling robust implementation on aerial, low-pixel-on-object data sets in automated classification applications.
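The real-time performance claimed above rests on the summed-area-table trick: once an integral image is built, any box-filter sum, and hence a two-rectangle Haar response, costs a constant four array lookups regardless of window size. The sketch below illustrates that mechanism only; the function names and the specific two-rectangle feature are illustrative, not the paper's actual filter bank.

```python
def integral_image(img):
    """Cumulative 2-D sum: ii[y][x] = sum of img[0..y][0..x]."""
    h, w = len(img), len(img[0])
    ii = [[0] * w for _ in range(h)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            ii[y][x] = row_sum + (ii[y - 1][x] if y > 0 else 0)
    return ii

def box_sum(ii, top, left, bottom, right):
    """Sum of the image over an inclusive rectangle: four lookups."""
    total = ii[bottom][right]
    if top > 0:
        total -= ii[top - 1][right]
    if left > 0:
        total -= ii[bottom][left - 1]
    if top > 0 and left > 0:
        total += ii[top - 1][left - 1]
    return total

def haar_vertical_response(ii, top, left, bottom, right):
    """Two-rectangle Haar feature: left half minus right half,
    a crude horizontal-gradient detector built from two box sums."""
    mid = (left + right) // 2
    return (box_sum(ii, top, left, bottom, mid)
            - box_sum(ii, top, mid + 1, bottom, right))
```

Because every response is a fixed number of lookups, evaluating the feature densely at multiple scales stays cheap, which is what makes the per-frame interest-point search tractable.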
Optical flow-based tracking methods offer the promise of precise, accurate, and reliable analysis of motion, but they
suffer from several challenges such as elimination of background movement, estimation of flow velocity, and optimal
feature selection. Wavelet approximations can offer similar benefits while retaining spatial information at coarser scales, where optical flow estimation improves as the finer details of moving objects are reduced. Optical flow methods also often suffer from significant computational overhead. In this study, we have investigated the necessary processing steps to
increase detection and estimation accuracy, while effectively reducing computation time through the reduction of the
image frame size. We have implemented an object tracking algorithm using the optical flow calculated from a phase
change between representative coarse wavelet coefficients in subsequent image frames. We have also compared phase-based optical flow with two versions of intensity-based optical flow to determine which method produces superior results under specific operational conditions. The investigation demonstrates the feasibility of using phase-based optical flow with wavelet approximations for object detection and tracking of low-resolution aerial vehicles. We also
demonstrate that this method can work in tandem with feature-based tracking methods to increase tracking accuracy.
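The core of the phase-based estimate is that, for a roughly band-pass signal, a spatial shift appears as a rotation in the phase of a complex filter response, so displacement is approximately the phase difference divided by the filter's spatial frequency. The 1-D sketch below uses a Gabor-style analytic filter as a stand-in for the paper's coarse wavelet coefficients; all names and parameters are assumptions for illustration.

```python
import cmath
import math

def gabor_phase(signal, center, freq, sigma=4.0):
    """Phase of a 1-D complex Gabor-like response centred at `center`."""
    resp = 0j
    for n, v in enumerate(signal):
        t = n - center
        resp += (v * math.exp(-t * t / (2 * sigma * sigma))
                 * cmath.exp(-1j * freq * t))
    return cmath.phase(resp)

def phase_shift_estimate(frame_a, frame_b, center, freq):
    """Estimate rightward displacement of frame_b relative to frame_a."""
    dphi = gabor_phase(frame_b, center, freq) - gabor_phase(frame_a, center, freq)
    dphi = (dphi + math.pi) % (2 * math.pi) - math.pi  # wrap to (-pi, pi]
    # a shift right by d rotates the response phase by -freq * d
    return -dphi / freq
```

Running the filter at coarse wavelet scales amounts to choosing a low `freq` and wide `sigma`, which is why the method tolerates the loss of fine detail the abstract describes.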
Automated classification and tracking approaches suffer from the high dimensionality of the data and information space and frequently rely upon both discriminative feature selection and efficient, accurate supervised classification strategies. Feature selection strategies have the benefit of representing the data in a modified, reduced space to improve the efficacy of data mining, machine learning, and computer vision approaches. We have developed feature-selection methods involving feature ranking and assimilation to discover reduced feature sets that yield accurate automated classification with high specificity and sensitivity. We have tested a wide range of spatial,
texture, and wavelet-based feature sets for electro-optical (EO) aerial imagery and infrared (IR) land-based image
sequences across several machine-learning classification algorithms for performance evaluation and comparison. A
detailed experimental evaluation is provided for the classification efficacy of the features and classifiers on the particular
data sets, and is accompanied by a discussion of particular successes and failures. In the second section, we detail our
novel feature set that combines moment and edge descriptors and produces high, robust accuracy when evaluated for
classification. Our method leverages information previously calculated in the detection stage, which includes wavelet
decomposition and texture statistics. We demonstrate the results of our feature set implementation and discuss methods
for creating classifier decision rules that adaptively choose a particular classification algorithm dependent on certain operating conditions or data types.
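One concrete instance of a feature-ranking step is a Fisher-score ranking, which orders features by between-class versus within-class spread so that discriminative features rise to the top. The sketch below is illustrative only; the Fisher criterion stands in for whichever ranking criterion the evaluation actually used.

```python
def fisher_score(feature_vals, labels):
    """Between-class spread divided by within-class spread for one feature."""
    classes = set(labels)
    overall = sum(feature_vals) / len(feature_vals)
    between = within = 0.0
    for c in classes:
        vals = [v for v, l in zip(feature_vals, labels) if l == c]
        mean = sum(vals) / len(vals)
        between += len(vals) * (mean - overall) ** 2
        within += sum((v - mean) ** 2 for v in vals)
    return between / within if within else float("inf")

def rank_features(samples, labels):
    """Return feature indices ordered by descending Fisher score."""
    n_feat = len(samples[0])
    scores = [fisher_score([s[i] for s in samples], labels)
              for i in range(n_feat)]
    return sorted(range(n_feat), key=lambda i: scores[i], reverse=True)
```

A reduced feature set is then simply the top-k indices of the ranking, which can be assimilated incrementally while monitoring classifier specificity and sensitivity.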
We present a novel implementation of multi-scale graph-theoretic image segmentation using wavelet decomposition.
This bottom-up segmentation through a weighted agglomeration approach utilizes the specific statistical characteristics
of vehicles to quickly detect regions of interest in image frames. The method incorporates pixel intensity, texture, and
boundary values to detect salient segments at multiple scales. Wavelet decomposition creates gradient and image
approximations at multiple scales for fast edge weighting between nodes in the graph. Nodes with strong edge weights
merge to form a single node at a higher level, where new internal statistics are calculated and edges are created with
nodes at the new scale. Top-down saliency energy values are then calculated for each pixel on every scale, with the pixel
labeled as a member of the node (segment) at the scale of highest energy. Salient node information is then used for binary classification as a potential object or non-object and is passed to classification and tracking algorithms. The method
provides multi-scale segmentations by agglomerating nodes that consist of finer node agglomerations (lower scales).
Criteria for weights between nodes include multi-level features, such as average intensity, variance, and boundary
completion values. This method has been successfully tested on an electro-optical (EO) data set with multiple varying
operating conditions (OCs). It has been shown to successfully segment both fully and partially occluded objects with
minimal false alarms and false negatives. This method can easily be extended to produce more accurate segmentations
through the sensor fusion of registered data types.
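The bottom-up merge at the heart of the agglomeration can be illustrated with a single union-find pass: nodes joined by a sufficiently strong similarity edge collapse into one segment, which then acts as a single node at the next scale. The intensity-only weight below is an illustrative stand-in for the paper's multi-level criteria (average intensity, variance, and boundary completion).

```python
def agglomerate(values, edges, threshold):
    """One coarsening pass: union nodes whose edge weight (an
    intensity-similarity score here) exceeds `threshold`.
    Returns a segment label per node."""
    parent = list(range(len(values)))

    def find(i):
        # path-halving union-find lookup
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for a, b in edges:
        # similarity weight: close intensities give weights near 1
        w = 1.0 / (1.0 + abs(values[a] - values[b]))
        if w > threshold:
            parent[find(a)] = find(b)
    return [find(i) for i in range(len(values))]
```

Repeating this pass on the merged nodes, with statistics recomputed per segment, yields the multi-scale hierarchy the abstract describes.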
Multispectral sensors produce images with a few relatively broad wavelength bands. Hyperspectral remote sensors, on the other hand, collect image data simultaneously in dozens or hundreds of narrow, adjacent spectral bands. These measurements make it possible to derive a continuous spectrum for each image cell, generating an image cube across multiple spectral components. Hyperspectral imaging has sound applications in a variety of areas such as mineral exploration, hazardous waste remediation, habitat mapping, invasive vegetation detection, ecosystem monitoring, hazardous gas detection, mineral detection, soil degradation, and climate change. This imaging modality has strong potential for transforming the imaging paradigms associated with several design and manufacturing processes. In this paper, we describe a novel approach for fast indexing of multi-dimensional hyperspectral image data, especially for data mining applications. The index exploits the spectral and spatial relationships embedded in these image sets. The index will be employed for knowledge retrieval applications that require fast information interpretation approaches. The index can also be deployed in real-time, mission-critical domains, as it is shown to remain fast at the high dimensionality associated with the data. The strength of this index in terms of false-dismissal and false-alarm rates will also be demonstrated. The paper will highlight some common applications of this imaging computational paradigm and will conclude with directions for future improvement and investigation.
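To make the indexing idea concrete, the toy sketch below hashes each pixel spectrum into a coarse bucket keyed by quantized band-group means, so a similarity query only scans the matching bucket rather than the whole cube. The class name, the bucketing scheme, and the parameters are all illustrative assumptions, not the paper's index structure.

```python
import math

class SpectralIndex:
    """Toy spectral index: bucket spectra by quantized band-group means,
    then search only the query's own bucket."""

    def __init__(self, n_groups=4, step=0.25):
        self.n_groups = n_groups   # how many band groups form the key
        self.step = step           # quantization step for group means
        self.buckets = {}

    def _key(self, spectrum):
        size = max(1, len(spectrum) // self.n_groups)
        groups = [spectrum[i:i + size] for i in range(0, len(spectrum), size)]
        return tuple(int((sum(g) / len(g)) // self.step)
                     for g in groups[:self.n_groups])

    def insert(self, pixel_id, spectrum):
        self.buckets.setdefault(self._key(spectrum), []).append(
            (pixel_id, spectrum))

    def query(self, spectrum):
        """Nearest neighbour within the query's bucket (None if empty)."""
        best, best_d = None, float("inf")
        for pid, s in self.buckets.get(self._key(spectrum), []):
            d = math.dist(spectrum, s)
            if d < best_d:
                best, best_d = pid, d
        return best
```

The key point is that query cost depends on bucket occupancy, not on the number of bands, which is how such an index stays fast at high dimensionality.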
The problem of heterogeneous data mining deals with the computational challenges of searching multimedia data in a unified computational framework that can answer data mining similarity queries accurately and efficiently. Advances in data collection methodologies have generated large data warehouses in an assortment of application domains, including but not limited to Internet applications for multimedia retrieval and exchange. Heterogeneous data indexing has proven to be a valuable tool for complex data mining in large data domains that are inherently semi-structured in nature. We propose a solution that integrates the feature vectors of images and text by cooperatively representing them in a multidimensional spatial data structure, which has previously exhibited superior search performance in image database domains. We have evaluated the results of content-based similarity queries on the indexing schema independently in the image and textual domains. We have then studied and characterized the effect of the choice of similarity metric on the similarity queries. We then propose an indexing schema that integrates the feature vectors of text and images to answer integrated queries on the unified heterogeneous data space. An added advantage of the proposed methodology is that a textual feature vector can query a heterogeneous database to retrieve both text and images as query results. This solves the problem of individually querying each data domain separately and sequentially scanning the integrated database for similarity results. The proposed methodology is time and space efficient, and is capable of answering complex heterogeneous data mining queries in multimedia domains.
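The unification step can be sketched as embedding both modalities in one joint vector space, zero-filling the absent half so that a text-only query still ranks every item, image or text, by a single distance. The dimensions, the zero-fill choice, and the flat linear scan below are illustrative assumptions; the paper uses a multidimensional spatial data structure rather than a scan.

```python
import math

def unify(image_vec=None, text_vec=None, img_dim=3, txt_dim=3):
    """Embed either modality in the joint (img_dim + txt_dim) space,
    zero-filling the half that is absent."""
    img = image_vec if image_vec is not None else [0.0] * img_dim
    txt = text_vec if text_vec is not None else [0.0] * txt_dim
    return img + txt

def similarity_query(db, query_vec, k=2):
    """k nearest items by Euclidean distance over the unified space.
    `db` is a list of (name, unified_vector) pairs."""
    ranked = sorted(db, key=lambda item: math.dist(item[1], query_vec))
    return [name for name, _ in ranked[:k]]
```

Because every item lives in the same space, one query ranks text and images together, which is the "single query over the heterogeneous database" property the abstract highlights.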
Effective sensor placement methodologies are desired for the distributed sensor networks frequently encountered in military, environmental, and nano-biotechnology applications. The goal is to provide a (sub-)optimal framework for sensor resource management, while placing the sensors such that they provide accurate coverage within the required location and range probability. The problem is not trivial: the sensors might not be of equal capacity, the terrain upon which the sensors are deployed might have many obstacles, and some sensors might fail. In some applications, areas of the sensor field are marked preferential, with a high desired probability of detection and coverage. In this paper, we propose a unique sensor placement computing framework for preferential coverage of the sensor field, while deploying a minimum number of sensors. The proposed approach treats the sensor field as an image, which provides the advantage of attaining pixel-level accuracy in sensor placement. A unique algorithm is presented that initially concentrates on the preferential regions and then proceeds towards the calibration of other uncovered regions of the sensor field. Our approach has shown significant improvement in time performance in contrast to the greedy approach, and has strong potential for several mission-critical applications.
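The pixel-grid formulation and the preferential-first ordering can be sketched with a simple greedy baseline: treat the field as a grid of cells, cover the preferential cells first, then sweep the rest. This is only an illustrative baseline of the kind the paper improves upon, not the paper's algorithm; all names and the equal-radius assumption are illustrative.

```python
def coverage(center, radius, h, w):
    """Grid cells within Euclidean `radius` of a sensor at `center`."""
    cy, cx = center
    return {(y, x) for y in range(h) for x in range(w)
            if (y - cy) ** 2 + (x - cx) ** 2 <= radius ** 2}

def place_sensors(h, w, radius, preferential):
    """Cover an h x w field with equal-radius sensors,
    covering preferential cells first. Returns sensor positions."""
    uncovered = {(y, x) for y in range(h) for x in range(w)}
    sensors = []
    # pass 1 targets the preferential region, pass 2 the whole field
    for targets in (set(preferential), set(uncovered)):
        while targets & uncovered:
            # place at the candidate covering the most uncovered targets
            best = max(uncovered,
                       key=lambda c: len(coverage(c, radius, h, w)
                                         & targets & uncovered))
            sensors.append(best)
            uncovered -= coverage(best, radius, h, w)
    return sensors
```

Treating cells as pixels is what gives the pixel-level placement accuracy the abstract claims; unequal sensor capacities or obstacles would enter through a per-cell coverage function instead of the fixed radius used here.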
Advances in automated data collection tools in design and manufacturing have far exceeded our capacity to analyze this data for novel information. Techniques of data mining and knowledge discovery in large databases promise computationally efficient and accurate means to analyze such data for patterns and similar structures. In this paper, we present a unique data mining approach for finding similarities in classes of 3D models using the discovery of association rules. PCA is first performed on the 3D model to transform it along its first principal axis. The transformed 3D model is then sliced and segmented along multiple principal axes, such that each slice can be interpreted as a transaction in a transaction database. Association-rule discovery is performed on this transaction space for multiple models, and the association rules common among those transactions are stored as a representative of a class of models. We have evaluated the performance of association rules for the efficient representation of classes of shape models. The method is time and space efficient, and presents a novel paradigm for searching content dependencies in a database of 3D models.
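The align-slice-count pipeline can be sketched in 2-D for brevity: find the first principal axis in closed form, slice the aligned points into transactions of occupied side-bands, and count frequent items as a stand-in for full association-rule discovery. Everything here, the 2-D reduction, the "upper"/"lower" items, and the support counting, is an illustrative assumption, not the paper's 3-D implementation.

```python
import math

def principal_axis(points):
    """Centroid and angle of the first principal axis of 2-D points
    (closed form for the 2x2 covariance eigenvector)."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    sxx = sum((p[0] - mx) ** 2 for p in points) / n
    syy = sum((p[1] - my) ** 2 for p in points) / n
    sxy = sum((p[0] - mx) * (p[1] - my) for p in points) / n
    return (mx, my), 0.5 * math.atan2(2 * sxy, sxx - syy)

def slice_transactions(points, n_slices=4):
    """Align points with the principal axis, cut into slices, and record
    which side-bands ('upper'/'lower') each slice occupies."""
    (mx, my), theta = principal_axis(points)
    ax, ay = math.cos(theta), math.sin(theta)
    proj = [((px - mx) * ax + (py - my) * ay,       # along-axis coord
             -(px - mx) * ay + (py - my) * ax)      # across-axis coord
            for px, py in points]
    lo = min(t for t, _ in proj)
    width = (max(t for t, _ in proj) - lo) / n_slices or 1.0
    txns = [set() for _ in range(n_slices)]
    for t, s in proj:
        i = min(int((t - lo) / width), n_slices - 1)
        txns[i].add("upper" if s >= 0 else "lower")
    return txns

def frequent_items(transactions, min_support):
    """Items whose support across the transactions meets `min_support`."""
    counts = {}
    for txn in transactions:
        for item in txn:
            counts[item] = counts.get(item, 0) + 1
    return {i for i, c in counts.items()
            if c / len(transactions) >= min_support}
```

Rules mined from such transactions across many models of one class, then intersected, would form the compact class representative the abstract describes.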