We describe a general approach to the representation and recognition of 3-D objects as it applies to automatic target recognition
(ATR) tasks. The method is based on locally adaptive target segmentation, neural network classifier design, and a novel view-selection mechanism
that develops ‘‘visual filters’’: detectors responsive to specific target classes that encode the complete viewing sphere with a small number of prototypical
examples. The optimal set of visual filters is found via a cross-validation-like data-reduction algorithm used to train banks of backpropagation
(BP) neural networks. To improve recognition accuracy under noisy or occluded conditions, and to eliminate false alarms, the
proposed recognition system employs a temporal evidence-integration technique that enables tracking and lock-on even when both the targets and the
camera move. Experimental results on synthetic and real-world imagery demonstrate the feasibility of our approach.