Airborne automated target detection (ATD) and fusion experiments are frequently limited by the quality, quantity, and timely availability of geo-registered multi-sensor, multi-platform imagery. This is especially true for mine targets that are smaller than the inertial measurement errors of airborne platforms. Working under the sponsorship of NVESD, we have developed and demonstrated an automated approach to inertially geo-registering and ground-truthing imagery from multiple sensor modalities at accuracies on the order of an antitank mine dimension. Data types include ground-penetrating radar, X-band and Ku-band synthetic aperture radar, visible/near-infrared (VIS/NIR), and longwave infrared (LWIR) imagery. This database supports feature- and decision-level sensor and algorithm fusion studies and the extraction of sensor utility metrics across a wide range of operational condition subspaces. In addition, we have standardized the format of the ground-truthed imagery products for dissemination to a larger algorithm development community and for compatibility with the U.S. Army Research Laboratory’s (ARL) Automatic Target Detection Evaluation Environment (ATD EvalEnv), which facilitates assessment of mine detection and fusion algorithm performance across sensors, algorithms, and operational conditions. In this paper, we discuss a process for fusion studies, including the ARL infrastructure and the techniques employed to collect and prepare the inertially co-registered imagery database.