Correlation-based filters (e.g., MACE, MACH) have been widely employed for automatic target acquisition. In general, a bank of filters is developed, wherein each filter is trained to respond to a particular range of conditions (such as aspect angle). The individual filter outputs are then used to determine the best match between objects in a scene and the training information. However, it is not uncommon for discrete clutter objects to correlate well with an individual filter, resulting in an unacceptable false alarm rate (FAR). It is the authors' hypothesis that although a clutter event may correlate well with an individual filter, there are discernible differences in the way clutter and targets correlate across the bank of filters. In this paper, the authors investigate a connectionist-based approach that combines the individual filter outputs in a non-linear manner for improved performance. Particular attention is given to designing the correlation filter constraints in conjunction with the combination approach to optimize performance.
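The idea of combining filter-bank outputs non-linearly can be sketched as follows. This is a minimal illustration, not the authors' implementation: the filters, scene chip, and network weights are random stand-ins (a real system would use trained MACE/MACH filters and a combiner trained on target and clutter examples).

```python
import numpy as np

rng = np.random.default_rng(0)

def correlate(image, filt):
    # Frequency-domain cross-correlation; the peak height is the usual
    # match score produced by a correlation filter.
    F = np.fft.fft2(image) * np.conj(np.fft.fft2(filt))
    return np.real(np.fft.ifft2(F))

# Hypothetical bank of 4 filters and one scene chip (illustrative values).
bank = rng.standard_normal((4, 32, 32))
scene = rng.standard_normal((32, 32))

# Feature vector: peak correlation of the scene against each filter.
# The hypothesis is that targets and clutter produce different *patterns*
# of peaks across the bank, even when one peak alone is high.
peaks = np.array([correlate(scene, f).max() for f in bank])

# Non-linear combiner: a tiny one-hidden-layer network. Weights are
# random here; in practice they would be learned from labeled data.
W1 = rng.standard_normal((8, 4))
b1 = rng.standard_normal(8)
w2 = rng.standard_normal(8)
h = np.tanh(W1 @ peaks + b1)                 # hidden layer
score = 1.0 / (1.0 + np.exp(-(w2 @ h)))      # target-vs-clutter score in [0, 1]
```

A linear combiner (a weighted sum of peaks) could not separate clutter that excites one filter strongly from a target that excites several filters moderately; the hidden layer allows such cross-filter patterns to be exploited.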
The problem of seamless scene integration from multiple 3-dimensional views of a location, for surveillance or recognition purposes, continues to receive considerable interest. This technique holds the promise of an increased ability to detect concealed targets, as well as better visualization of the scene itself. The process of creating an integrated scene 'model' from multiple range images taken at different views of the scene consists of several basic steps: (1) matching of scene points across views, (2) registration of the multiple views to a common reference frame, and (3) integration of the multiple views into a complete 3D representation (such as a mesh or voxel space). We propose using a technique known as spin-map correlation to compute the initial scene point correspondences between views. This technique has the advantage of requiring minimal knowledge of the viewing geometry or viewer location; the only requirement is that the views overlap. Registration is performed using the correspondences generated from spin-map matching to seed an Iterative Closest Point (ICP) algorithm. The ICP algorithm grows the list of correspondences and estimates the rigid transformation between the multiple views. Following registration of the disparate views, the surface is represented probabilistically in a voxel space that is then polygonised into a triangular facet model using the well-known marching cubes algorithm. We demonstrate this procedure using LADAR range images of an armored vehicle of interest.
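The registration step above can be sketched with a standard ICP loop. This is a generic illustration, not the paper's code: correspondences are found by brute-force nearest neighbour (in the paper they would be seeded by spin-map matching), the point clouds are synthetic, and the per-iteration rigid transform is solved in closed form via SVD (the Kabsch/Procrustes solution).

```python
import numpy as np

def best_rigid_transform(P, Q):
    # Least-squares rotation R and translation t mapping corresponding
    # points P onto Q (Kabsch/Procrustes via SVD).
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cQ - R @ cP
    return R, t

def icp(P, Q, iters=20):
    # P: moving cloud; Q: fixed cloud. Each pass re-matches points and
    # refines the accumulated rigid transform (R_tot, t_tot).
    R_tot, t_tot = np.eye(3), np.zeros(3)
    for _ in range(iters):
        d = ((P[:, None, :] - Q[None, :, :]) ** 2).sum(-1)
        matches = Q[d.argmin(axis=1)]        # nearest neighbour in Q
        R, t = best_rigid_transform(P, matches)
        P = P @ R.T + t
        R_tot, t_tot = R @ R_tot, R @ t_tot + t
    return R_tot, t_tot

# Synthetic check: Q displaced by a small known rotation and translation.
rng = np.random.default_rng(1)
Q = rng.standard_normal((50, 3))
theta = 0.1
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
P = Q @ Rz.T + np.array([0.2, 0.0, -0.1])

R_d, t_d = best_rigid_transform(P, Q)        # exact correspondences
err_direct = np.abs(P @ R_d.T + t_d - Q).max()
```

ICP converges only locally, which is exactly why a view-independent seeding step such as spin-map matching matters: without a reasonable initial alignment, the nearest-neighbour matches can lock onto a wrong local minimum.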
Image segmentation, a key component in many Automatic Target Recognition (ATR) systems, has received considerable attention in the research community in recent years. A variety of segmentation approaches exist, and attempts have been made to combine them in order to find more robust solutions. In this paper, the authors describe an inference fusion architecture for combining individual segmentation concepts that results in improved performance over the individual algorithms. We treat segmentation algorithms with several disparate cost functions as experts, each with a narrowly defined set of goals. The information obtained from each expert is combined and weighted with the available evidence using an agent-based inference system, resulting in an adaptive, robust, and highly flexible image segmentation. Results obtained by applying this approach will be presented.
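One simple way to realize an expert-combination scheme of this kind is an evidence-weighted vote over the experts' label maps. This is a hedged sketch, not the paper's architecture: the masks are toy examples, and the fixed weights stand in for the confidences that the agent-based inference system would assign from available evidence.

```python
import numpy as np

def fuse_segmentations(masks, weights):
    # masks: binary label maps from individual "expert" segmenters.
    # weights: per-expert confidence (here fixed; in an inference system
    # these would adapt to the evidence supporting each expert).
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                                  # normalize confidences
    stacked = np.stack([m.astype(float) for m in masks])
    consensus = np.tensordot(w, stacked, axes=1)     # weighted vote per pixel
    return consensus >= 0.5                          # fused binary decision

# Three hypothetical expert outputs on a 4x4 chip: two experts agree on
# a central target region; the third fires on a spurious corner pixel.
m1 = np.zeros((4, 4), dtype=int); m1[1:3, 1:3] = 1
m2 = m1.copy()
m3 = np.zeros((4, 4), dtype=int); m3[0, 0] = 1

fused = fuse_segmentations([m1, m2, m3], weights=[0.4, 0.4, 0.2])
```

With these weights the fused map keeps the region supported by the two agreeing experts (combined weight 0.8) and rejects the low-confidence outlier (weight 0.2), illustrating how weighting by evidence suppresses an individual expert's errors.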