Automatic object recognition algorithms designed to work with imaging sensor inputs require extensive testing before they can be considered robust enough for challenging applications such as military targeting. Testing of automatic target recognition (ATR) algorithms has in most cases been limited to a handful of the scenario conditions of interest, as represented by imagery collected with the desired imaging sensor. The question naturally arises as to how robust ATR performance is across all scenario conditions of interest, not just for a small set of collected imagery. One way to address algorithm robustness is to characterize the input imagery in terms of common information-content or quality measures that can be correlated with ATR performance. This paper assesses the utility of image characterization measures for estimating ATR detection performance through correlation analyses between nine different image measures and the detection responses of two ATR algorithms. Results show that an image measure called target-to-background entropy difference is the best single measure for estimating ATR detection performance, with correlation coefficients as large as 0.60.
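The abstract does not define how target-to-background entropy difference is computed. A minimal sketch, assuming it is the difference between the Shannon entropies of the intensity histograms of the target region and the surrounding background, together with a Pearson correlation helper of the kind used to relate image measures to ATR detection responses (all function names here are illustrative, not from the paper):

```python
import numpy as np

def shannon_entropy(values, bins=256, value_range=(0, 256)):
    """Shannon entropy (bits) of the pixel-intensity histogram."""
    hist, _ = np.histogram(values, bins=bins, range=value_range)
    p = hist / hist.sum()
    p = p[p > 0]  # drop empty bins so log2 is defined
    return float(-np.sum(p * np.log2(p)))

def target_background_entropy_difference(image, target_mask):
    """Assumed form: entropy of target pixels minus entropy of background pixels."""
    return shannon_entropy(image[target_mask]) - shannon_entropy(image[~target_mask])

def pearson_r(x, y):
    """Pearson correlation coefficient between an image measure and an ATR score."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xm, ym = x - x.mean(), y - y.mean()
    return float((xm @ ym) / np.sqrt((xm @ xm) * (ym @ ym)))

# Toy example: a textured target patch on a uniform background
# yields a positive entropy difference.
img = np.zeros((8, 8))
img[2:6, 2:6] = np.arange(16).reshape(4, 4) * 10.0
mask = np.zeros((8, 8), dtype=bool)
mask[2:6, 2:6] = True
tbed = target_background_entropy_difference(img, mask)
```

A higher target-to-background entropy difference indicates that the target region is statistically more distinguishable from its surroundings, which is consistent with the abstract's finding that this measure correlates best with detection performance.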