Automatic Target Recognition (ATR) algorithm performance depends strongly on the sensing conditions under which the input data are collected. Open-loop fly-bys often produce poor results because the measurement conditions are far from ideal. In addition, ATR algorithms must be made extremely complicated to handle the diverse range of inputs, which increases complexity and reduces overall performance. Our approach, closed-loop ATR (CL-ATR), improves the quality of the information fed to the ATR algorithms by optimizing motion, sensor settings, and team (vehicle-vehicle-human) collaboration to dramatically improve classification accuracy. By managing data collection guided by predicted ATR performance gain, we increase the information content of the data and thus dramatically improve the performance of existing ATR algorithms. CL-ATR has two major functions. First, an ATR utility function represents the sensitivity of ATR classification performance to parameters that correlate with vehicle/sensor states. This utility function is developed off-line; it is often available from the original ATR study as a confusion matrix, or it can be derived through simulation without direct access to the inner workings of the ATR algorithm. The utility function is inserted into our CL-ATR framework to autonomously control the vehicle/sensor. Second, an on-board planner maps the utility function into vehicle position and sensor collection plans. Because only the utility function is required on board, any ATR algorithm, no matter how complex, can be deployed on an unmanned aerial vehicle (UAV) platform. This pairing of ATR performance profiles with vehicle/sensor controls creates a unique and powerful active perception behavior.
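To make the idea concrete, the following is a minimal sketch, not the paper's implementation, of how a utility function derived from condition-dependent confusion matrices could drive a one-step viewpoint planner. The aspect-angle binning, the class counts, and all names (`confusion_by_aspect`, `utility`, `plan_next_aspect`) are hypothetical assumptions for illustration.

```python
import numpy as np

# Hypothetical off-line data: one confusion matrix per aspect-angle bin,
# rows = true class, cols = ATR-predicted label (counts from an ATR study).
confusion_by_aspect = {
    0:  np.array([[80, 15,  5], [20, 70, 10], [10, 10, 80]]),
    45: np.array([[95,  3,  2], [ 5, 90,  5], [ 4,  6, 90]]),
    90: np.array([[60, 25, 15], [30, 55, 15], [20, 20, 60]]),
}

def utility(conf, belief):
    """Expected probability of a correct ATR label under the current belief.

    conf:   confusion matrix (counts), rows = true class, cols = label
    belief: prior probability of each true class
    """
    p_correct = np.diag(conf) / conf.sum(axis=1)  # per-class accuracy
    return float(belief @ p_correct)

def plan_next_aspect(belief):
    """Greedy one-step planner: fly to the aspect angle of highest utility."""
    return max(confusion_by_aspect,
               key=lambda a: utility(confusion_by_aspect[a], belief))

belief = np.array([1/3, 1/3, 1/3])  # uniform prior over three target classes
best = plan_next_aspect(belief)     # here the 45-degree view wins
```

Note that only the confusion matrices are needed on board; the planner never calls the ATR algorithm itself, which is what allows an arbitrarily complex classifier to remain off-board or unmodified.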