We consider the joint inverse problems of sensor data registration and automatic target recognition (ATR). Single-platform, multi-sensor registration is posed as a model-based data fusion problem using Bayesian and maximum likelihood frameworks. The sensor model parameters typically consist of platform pose parameters, sensor pointing angles, and internal calibration factors; these define a transformation that maps raw data recorded in the sensor frame to a ground-referenced, world coordinate system.
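As an illustrative sketch only (not the paper's implementation), the sensor-frame-to-world mapping driven by pose and pointing parameters can be written as a rotation plus translation. The Z-Y-X yaw/pitch/roll convention and the parameter names below are assumptions for illustration:

```python
import numpy as np

def rot_zyx(yaw, pitch, roll):
    """Rotation matrix from yaw (Z), pitch (Y), roll (X) angles in radians.
    The Z-Y-X Euler convention is an assumption; real systems must document
    their own rotation order and sign conventions."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cy, -sy, 0.0], [sy, cy, 0.0], [0.0, 0.0, 1.0]])
    Ry = np.array([[cp, 0.0, sp], [0.0, 1.0, 0.0], [-sp, 0.0, cp]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cr, -sr], [0.0, sr, cr]])
    return Rz @ Ry @ Rx

def sensor_to_world(points_sensor, platform_pos, yaw, pitch, roll):
    """Map (N, 3) sensor-frame points to world coordinates:
    x_world = R x_sensor + t, where R comes from the pointing angles and
    t is the platform position (both would be estimated in registration)."""
    R = rot_zyx(yaw, pitch, roll)
    return points_sensor @ R.T + platform_pos

# Example: zero rotation reduces the transform to a pure translation.
pts = np.array([[1.0, 2.0, 3.0]])
world = sensor_to_world(pts, np.array([10.0, 20.0, 30.0]), 0.0, 0.0, 0.0)
# world is [[11., 22., 33.]]
```

In a full sensor model, internal calibration factors (e.g., focal length, lens distortion) would be composed with this rigid transform; registration estimates all of these parameters jointly from data.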
The fusion estimation problem is a joint inversion, since the sensor model parameters common to multiple sensors are estimated simultaneously (along with sensor-specific model parameters). For the ATR problem, we pose a joint optimization over these sensor model parameters (constrained by the global scene) and target model parameters (e.g., for selected target chips). In addition, we propose a cooperative inversion approach that propagates uncertainty from the system model estimation process into a refined ATR inversion. The latter consists of a search over target model parameters and a constrained system model parameter space, sampled with realizations consistent with the estimated system model covariance. Estimation robustness is achieved through a hybrid global/local search method (to avoid final convergence to local minima), robust kernels that down-weight outliers among the data residuals (generated from test and reference image feature correspondences), and the use of multi-sensor data to increase the number and diversity of data constraints. In summary, we have developed a model-based fusion approach that draws on well-developed methods in photogrammetry, computer vision, and automatic target recognition for enhanced registration and recognition performance.
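The covariance-consistent realization sampling used in the cooperative inversion can be sketched as follows. This is a generic sketch, not the paper's method: the parameter vector, the covariance values, and the use of a Cholesky factor are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical estimated system model state: three pointing-angle
# parameters (radians) with their estimated covariance (radians^2).
theta_hat = np.array([0.010, -0.005, 0.002])
Sigma = np.array([[4e-6, 1e-6, 0.0],
                  [1e-6, 4e-6, 0.0],
                  [0.0,  0.0,  9e-6]])

def sample_realizations(theta_hat, Sigma, n, rng):
    """Draw n system-model realizations theta ~ N(theta_hat, Sigma)
    via a Cholesky factor, so the samples are statistically consistent
    with the estimated covariance from the registration inversion."""
    L = np.linalg.cholesky(Sigma)
    z = rng.standard_normal((n, theta_hat.size))
    return theta_hat + z @ L.T

realizations = sample_realizations(theta_hat, Sigma, 50_000, rng)
```

Each realization would then condition a constrained ATR search over target model parameters; scoring target hypotheses across realizations folds registration uncertainty into the recognition decision.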
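Robust kernels that down-weight residual outliers are commonly implemented as iteratively reweighted least squares (IRLS). The sketch below uses a Cauchy kernel and MAD-based scale estimation as one standard choice; the specific kernel and constants are assumptions, not details from the paper:

```python
import numpy as np

def cauchy_weights(r, c=2.385):
    """Cauchy robust kernel weight w(r) = 1 / (1 + (r/c)^2): large
    residuals (outlier correspondences) get weights near zero."""
    return 1.0 / (1.0 + (r / c) ** 2)

def irls_fit(A, b, n_iter=20, c=2.385):
    """Solve min sum rho(b - A x) by iteratively reweighted least squares.
    Residual scale is re-estimated each iteration from the median absolute
    deviation (MAD) so the kernel adapts to the noise level."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]   # ordinary LS initialization
    for _ in range(n_iter):
        r = b - A @ x
        scale = 1.4826 * np.median(np.abs(r - np.median(r))) + 1e-12
        w = cauchy_weights(r / scale, c)
        # Weighted normal equations: (A^T W A) x = A^T W b
        x = np.linalg.solve(A.T @ (w[:, None] * A), A.T @ (w * b))
    return x

# Example: fit a line to correspondences with gross outliers.
t = np.linspace(0.0, 10.0, 50)
A = np.column_stack([t, np.ones_like(t)])
b = 2.0 * t + 1.0
b[[10, 20, 30, 40]] += 20.0        # simulated mismatched correspondences
x = irls_fit(A, b)                 # slope/intercept recovered near (2, 1)
```

In the registration setting, the residuals come from test/reference image feature correspondences, and the same reweighting suppresses mismatched feature pairs rather than 1-D samples.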