Military operations in urban areas often require detailed knowledge of the location and identity of commonly occurring
objects and spatial features. The ability to rapidly acquire and reason over urban scenes is critically important to such
tasks as mission and route planning, visibility prediction, communications simulation, target recognition, and inference
of higher-level form and function. Under DARPA's Urban Reasoning and Geospatial ExploitatioN Technology
(URGENT) Program, the BAE Systems team has developed a system that combines a suite of complementary feature
extraction and matching algorithms with higher-level inference and contextual reasoning to detect, segment, and classify
urban entities of interest in a fully automated fashion. Our system operates solely on colored 3D point clouds, and
considers object categories with a wide range of specificity (fire hydrants, windows, parking lots), scale (street lights,
roads, buildings, forests), and shape (compact shapes, extended regions, terrain). As no single method can recognize the
diverse set of categories under consideration, we have integrated multiple state-of-the-art technologies that couple
hierarchical associative reasoning with robust computer vision and machine learning techniques. Our solution leverages
contextual cues and evidence propagation from features to objects to scenes in order to exploit the combined descriptive
power of 3D shape, appearance, and learned inter-object spatial relationships. The result is a set of tools designed to
significantly enhance the productivity of analysts in exploiting emerging 3D data sources.
In this work we investigate simultaneous object identification improvement and efficient library search for model-based object recognition applications. We develop an algorithm to provide efficient, prioritized, hierarchical searching of the object model database. A common approach to model-based object recognition chooses the object label corresponding to the best match score. However, due to corrupting effects, the best match score does not always correspond to the correct object model. To address this problem, we propose a search strategy that exploits information contained in a number of representative elements of the library to drill down to a small class with high probability of containing the object. We first optimally partition the library into a hierarchical taxonomy of disjoint classes. A small number of representative elements are used to characterize each object model class. At each hierarchy level, the observed object is matched against the representative elements of each class to generate score sets. A hypothesis testing problem, using a distribution-free statistical test, is defined on the score sets and used to choose the appropriate class for a prioritized search. We conduct a probabilistic analysis of the computational cost savings, and provide a formula measuring the computational advantage of the proposed approach. We generate numerical results using match scores derived from matching highly-detailed CAD models of civilian ground vehicles used in 3-D LADAR ATR. We present numerical results showing the effects of significance level and the number of representative elements on classification performance in the score-set hypothesis testing problem.
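The prioritized class search described above can be sketched as follows. This is an illustrative toy, not the actual implementation: the function names (`choose_class`, `rank_sum`), the sample score sets, and the use of a simple rank-sum ordering heuristic in place of the paper's distribution-free hypothesis test (with its significance level) are all assumptions made for the sketch.

```python
import random

def rank_sum(scores_a, scores_b):
    """Mann-Whitney-style rank sum of scores_a within the pooled sample.
    Higher match scores are assumed better; ties are ignored in this sketch."""
    pooled = sorted(scores_a + scores_b)
    return sum(pooled.index(s) + 1 for s in scores_a)

def choose_class(observation, classes, match, n_reps=3):
    """Order classes for a prioritized search using representative elements.

    classes : dict mapping class name -> list of model ids
    match   : function (observation, model_id) -> similarity score
    Each class is characterized by a few randomly drawn representative
    elements; the observation is matched only against those, and classes
    whose score sets rank highest are searched first.
    """
    score_sets = {}
    for name, models in classes.items():
        reps = random.sample(models, min(n_reps, len(models)))
        score_sets[name] = [match(observation, m) for m in reps]

    def statistic(name):
        # rank this class's score set against the union of all other classes'
        others = [s for n, ss in score_sets.items() if n != name for s in ss]
        return rank_sum(score_sets[name], others)

    return sorted(classes, key=statistic, reverse=True)
```

The cost saving comes from matching against only `n_reps` representatives per class before committing to a full search of the chosen class.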
LADAR imagery provides the capability to represent high resolution detail of 3D surface geometry of complex targets. In previous work we exploited this capability for automatic target recognition (ATR) by developing matching algorithms for performing surface matching of 3D LADAR point clouds with highly-detailed target CAD models. A central challenge in evaluating ATR performance is characterizing the degree of problem difficulty. One of the most important factors is the inherent similarity of target signatures. We have developed a flexible approach to target taxonomy based on 3D shape which includes a classification framework for defining the target recognition problem and evaluating ATR algorithm performance. The target model taxonomy consists of a hierarchical, tree-structured target classification scheme in which different levels of the tree correspond to different degrees of target classification difficulty. Each node in the tree corresponds to a collection of target models forming a target category. Target categories near the tree root represent large and very general target classes, exhibiting large interclass distance. Targets in these categories are easily separated. Target categories near the tree bottom represent very specific target classes with small interclass distance. These targets are difficult to separate. In this paper we focus on the creation of optimal categories. We develop approaches for optimal aggregation of target model types into categories that improve classification performance. We generate numerical results using match scores derived from matching highly-detailed CAD models of civilian ground vehicles.
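One way such a tree-structured taxonomy can be built is by agglomeratively merging the closest categories, so that nodes near the root separate easily and nodes near the leaves are hard to separate. The sketch below illustrates this idea only; the helper names (`build_taxonomy`, `leaves`), average-linkage merging, and the scalar `dist` stand-in for a 3D shape distance are assumptions, not the optimal aggregation method developed in the paper.

```python
def leaves(tree):
    """Flatten a nested-tuple taxonomy into its leaf model ids (strings)."""
    if isinstance(tree, str):
        return [tree]
    return [m for child in tree for m in leaves(child)]

def build_taxonomy(models, dist):
    """Agglomeratively merge categories by average interclass distance.

    models : list of model-id strings (initial singleton categories)
    dist   : function (model_a, model_b) -> shape distance
    Returns a nested tuple; deeper nodes group models with smaller
    interclass distance, i.e. harder classification problems.
    """
    cats = list(models)

    def avg_dist(a, b):
        la, lb = leaves(a), leaves(b)
        return sum(dist(x, y) for x in la for y in lb) / (len(la) * len(lb))

    while len(cats) > 1:
        # merge the pair of categories with the smallest interclass distance
        i, j = min(
            ((i, j) for i in range(len(cats)) for j in range(i + 1, len(cats))),
            key=lambda ij: avg_dist(cats[ij[0]], cats[ij[1]]),
        )
        merged = (cats[i], cats[j])
        cats = [c for k, c in enumerate(cats) if k not in (i, j)] + [merged]
    return cats[0]
```

Cutting this tree at a chosen depth yields the disjoint categories used to define recognition problems of graded difficulty.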
3D sensors provide unique opportunities for performing automatic target recognition (ATR). We describe an automated system that exploits 3D target geometry to perform rapid and robust ATR in the domain of military and civilian ground vehicles. The system identifies specific vehicles by comparing 3D LADAR data to model-based LADAR predictions from highly-detailed CAD models with articulating parts. In addition to performing identification, the system solves for whole vehicle six-degree-of-freedom pose as well as detailed target articulation state. Because of its specificity, the identification system achieves a high probability of correct identification across a library of ~100 target models and exhibits robustness to occlusion, clutter, and sensor noise. This identification capability is currently being extended for the purpose of classifying generic vehicle types (tanks, trucks, air defense units, etc.). The goal of the extended system is to perform vehicle classification before performing vehicle identification. This methodology provides a more flexible model-based ATR capability because it obviates the need for modeling all possible target types in advance. Classification enables the recognition of novel targets that have not been modeled or previously observed by the system. We classify targets based on general 3D morphology and characteristic 3D relationships between observed parts and features. This approach exploits the distinctive anatomy of different functional target types to achieve a more flexible and extensible target recognition capability.
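The classify-before-identify control flow can be summarized in a few lines. This is a minimal sketch under stated assumptions: the `recognize` function, its callback signatures, and the dictionary library layout are all hypothetical, and the actual classifier operates on 3D morphology and part relationships rather than an opaque callback.

```python
def recognize(observation, classify, libraries, match):
    """Two-stage recognition sketch: classify the generic vehicle type first,
    then search only that type's model library for a specific identification.

    classify  : function observation -> type name ("tank", "truck", ...)
    libraries : dict mapping type name -> list of model ids
    match     : function (observation, model_id) -> similarity score
    Returns (vehicle_type, best_model_or_None).
    """
    vtype = classify(observation)
    models = libraries.get(vtype, [])
    if not models:
        # novel target: classified by morphology even though no CAD model
        # of it exists, which is the flexibility argued for above
        return vtype, None
    best = max(models, key=lambda m: match(observation, m))
    return vtype, best
```

The key property is the second return path: a target can be assigned a generic type even when identification against the model library is impossible.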
LADAR imaging is unique in its potential to accurately measure the 3D surface geometry of targets. We exploit this 3D geometry to perform automatic target recognition on targets in the domain of military and civilian ground vehicles. Here we present a robust model-based 3D LADAR ATR system that efficiently searches through target hypothesis space by reasoning hierarchically from vehicle parts up to identification of a whole vehicle with specific pose and articulation state. The LADAR data consists of one or more 3D point clouds generated by laser returns from ground vehicles viewed from multiple sensor locations. The key to this approach is an automated 3D registration process to precisely align and match multiple data views to model-based predictions of observed LADAR data. We accomplish this registration using robust 3D surface alignment techniques which we have also used successfully in 3D medical image analysis applications. The registration routine seeks to minimize a robust 3D surface distance metric to recover the best six-degree-of-freedom pose and fit. We process the observed LADAR data by first extracting salient parts, matching these parts to model-based predictions, and hierarchically constructing and testing increasingly detailed hypotheses about the identity of the observed target. This cycle of prediction, extraction, and matching efficiently partitions the target hypothesis space based on the distinctive anatomy of the target models and achieves effective recognition by progressing logically from a target's constituent parts up to its complete pose and articulation state.
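The six-degree-of-freedom registration step can be illustrated with a classic point-to-point ICP loop built on a closed-form rigid fit. This is a textbook sketch standing in for the robust surface alignment used in the system: the function names are invented, the brute-force nearest-neighbor search is for clarity only, and a squared-distance metric replaces the robust surface distance metric described above.

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping corresponding points
    src -> dst via the Kabsch/Procrustes SVD solution.
    src, dst : (N, 3) arrays of matched points."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:               # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=20):
    """Point-to-point ICP sketch: alternate nearest-neighbor matching in
    dst with rigid refitting, returning the aligned copy of src."""
    cur = src.copy()
    for _ in range(iters):
        # brute-force nearest neighbor in dst for every point of cur
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d2.argmin(axis=1)]
        R, t = best_rigid_transform(cur, matched)
        cur = cur @ R.T + t
    return cur
```

The recovered rotation and translation correspond to the six-degree-of-freedom pose; robust variants downweight outlier correspondences so that occlusion and clutter do not corrupt the fit.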