For more than twenty years, the performance of many different automatic target recognition (ATR) approaches has typically been quantified by plotting a receiver operating characteristic (ROC) curve of probability of detection and/or recognition versus some measure of false alarms or false alarm rate. These ROCs have been generated on static sets of training and test data. In some cases, this data has had significantly varying levels of difficulty; however, data set difficulty has typically been quantified only by coarse partitioning based on time of day, target operational state, meteorological environment, and sometimes terrain or location. In addition, there has been no generally useful comparative measure of the target signature knowledge base provided for ATR system 'training' versus the signatures of the same targets in the test data. In this paper, we illustrate the quantification of two 'information content' data metrics together with the associated ATR performance. The first metric is a signal-to-clutter (SC) measure, and the second is a knowledge base signature distortion (KBSD) measure comparing the 'closest' target training signature with the target test signature. These metrics provide a new basis for truly objective ATR performance comparison.
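To make the two metrics concrete, the sketch below shows one plausible way they could be computed. The paper does not specify the exact formulas, so both definitions here are illustrative assumptions: SC is taken as mean target intensity above the clutter mean, normalized by the clutter standard deviation, and KBSD is taken as the RMS difference between the test signature and its closest match in the training knowledge base.

```python
import numpy as np

def signal_to_clutter(chip, target_mask):
    """Illustrative signal-to-clutter (SC) measure: mean target-pixel
    intensity above the clutter mean, divided by the clutter standard
    deviation.  This is one common definition, not necessarily the
    paper's exact formulation."""
    target = chip[target_mask]
    clutter = chip[~target_mask]
    return (target.mean() - clutter.mean()) / clutter.std()

def kbsd(test_signature, training_signatures):
    """Illustrative knowledge base signature distortion (KBSD):
    the distortion (here, RMS difference) between the test signature
    and the *closest* training signature in the knowledge base."""
    distances = [np.sqrt(np.mean((test_signature - s) ** 2))
                 for s in training_signatures]
    return min(distances)
```

Under definitions like these, a high SC together with a low KBSD would indicate an 'easy' test sample (bright target, well-represented in training), giving an objective axis along which ROC results from different data sets can be compared.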