Automatic target recognition (ATR) has historically entailed the problems of detection, classification and tracking
of ground or air targets from high-resolution (imaging) sensor data as well as low-resolution (radar) sensor data.
A popular approach to solving the ATR problem is Bayesian inference, where detection (position and pose),
classification and tracking are solved via a parameter estimation framework. The present paper offers a treatment
of a subset of the aforementioned problem, which can be stated as "given imaging data of a stationary ground
target and assuming that the target centroid's position is known in pixel coordinates, how can one estimate its
pose (orientation) and class?" Furthermore, we address the problem of scale invariance: how to ensure, for
instance, that a target appearing smaller in an image is not misclassified as a class with a similarly sized
template in the database. This problem is significant because target templates in the database can realistically
be expected to be of a fixed size, while targets in the observed image appear smaller or larger depending on
their distance from the camera. We therefore propose treating scale as an additional parameter to be estimated,
and show, via simulations, that this inclusion improves the accuracy of class estimation.
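The idea of jointly estimating pose, class, and scale can be illustrated with a minimal sketch. The snippet below is a toy construction, not the paper's actual method: the class names, template shapes, discretized pose set (90-degree rotations), integer scale factors, and Gaussian noise model are all illustrative assumptions. Under an i.i.d. Gaussian likelihood and a flat prior, the joint MAP estimate reduces to picking the (class, pose, scale) hypothesis whose transformed template has the smallest squared error against the observed image chip.

```python
import numpy as np

# Hypothetical template database: two illustrative classes at a reference scale.
templates = {
    "tank":  np.array([[1, 1, 1],
                       [0, 1, 0],
                       [0, 1, 0]], dtype=float),
    "truck": np.array([[1, 1, 1],
                       [1, 0, 1],
                       [1, 1, 1]], dtype=float),
}

def transform(t, pose, scale):
    """Rotate by pose*90 degrees, then upsample by an integer scale factor."""
    rotated = np.rot90(t, k=pose)
    return np.kron(rotated, np.ones((scale, scale)))

def map_estimate(obs, templates, poses=range(4), scales=(1, 2, 3)):
    """Joint MAP estimate of (class, pose, scale).

    With i.i.d. Gaussian noise the log-likelihood is a negative squared
    error; with a flat prior, MAP coincides with maximum likelihood.
    """
    best, best_score = None, -np.inf
    for cls, t in templates.items():
        for pose in poses:
            for scale in scales:
                hyp = transform(t, pose, scale)
                if hyp.shape != obs.shape:
                    continue  # hypothesis incompatible with observed chip size
                score = -np.sum((obs - hyp) ** 2)
                if score > best_score:
                    best, best_score = (cls, pose, scale), score
    return best

# Synthesize an observation: a "tank" rotated 90 degrees at scale 2, plus noise.
rng = np.random.default_rng(0)
obs = transform(templates["tank"], pose=1, scale=2) \
      + 0.1 * rng.standard_normal((6, 6))
print(map_estimate(obs, templates))  # -> ('tank', 1, 2)
```

Note how the scale search makes the toy classifier size-invariant: without it, a scale-2 observation could only be scored against same-size templates of the wrong class, which is exactly the misclassification risk the abstract describes.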