Most Automatic Target Recognition (ATR) algorithms consist of multiple processing stages, starting with a `detector' that locates objects of potential interest within an image. A target `classifier' then uses the localized image information to assign each detected object to one of several target classes or, if the object is not classifiable, to reject it as not being a target. This paper focuses on the properties of certain classifiers when applied to synthetic aperture radar (SAR) imagery.

A common approach to classification is to construct a library of known templates for the targets of interest. Each object flagged by the detector is compared to every template and, based on some figure of merit, the object is classified. A popular classification rule is to compute the mean squared error (MSE) between the detected object and each template and to assign the object to the target type that minimizes the observed MSE. Although MSE minimization has some intuitive appeal and is easy to implement, it has undesirable properties when applied to SAR data.

In this paper, we investigate the statistical properties of MSE classification when the underlying pixel values are drawn from a long-tailed, asymmetric distribution, as is typical of SAR data. More important, however, are the within-class sources of variance that arise in realistic scenarios; these tend to inflate the MSE even when the candidate object is compared to the correct template. This paper explores the statistical nature of this problem and illustrates it with a series of example images.
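The minimum-MSE template rule described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the templates are hypothetical toy patterns, and the multiplicative exponential noise is only a stand-in assumption for the long-tailed, asymmetric pixel statistics typical of SAR speckle.

```python
import numpy as np

rng = np.random.default_rng(0)

def mse_classify(chip, templates):
    """Return (index of the template minimizing MSE, list of all MSE values)."""
    errors = [float(np.mean((chip - t) ** 2)) for t in templates]
    return int(np.argmin(errors)), errors

# Hypothetical template library: two 8x8 target patterns.
t0 = np.zeros((8, 8)); t0[2:6, 2:6] = 1.0   # square target
t1 = np.zeros((8, 8)); t1[:, 3:5] = 1.0     # bar target
templates = [t0, t1]

# SAR-like chip: the true target corrupted by multiplicative, exponentially
# distributed noise (an assumed long-tailed, asymmetric speckle model).
chip = t0 * rng.exponential(scale=1.0, size=t0.shape)

label, errs = mse_classify(chip, templates)
# Even against the correct template (index 0), the speckle keeps the
# observed MSE well above zero, illustrating the inflation discussed above.
```

Note that the noise-free chip `t0` would yield an MSE of exactly zero against its own template; it is the speckle variance, not a template mismatch, that drives `errs[0]` away from zero.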