A multisensor suite consisting of a forward-looking infrared (FLIR) camera, a radar, and a low-light television camera is likely to be a useful component of enhanced and synthetic vision systems. We examine several aspects of the signal processing needed to effectively combine the individual sensor outputs. First, we discuss transformations of individual sensor image data to a common representation; our focus is on rectification of radar data without relying on a flat-earth assumption. We then describe a novel approach to image representation that minimizes loss of information in the transformation process. Finally, we discuss an optimal algorithm for fusion of radar and infrared images.
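The fusion algorithm itself is developed in the body of the paper. As a minimal illustration of pixel-level fusion of two coregistered images, the sketch below uses a simple convex combination of radar and infrared intensities; the function name and the fixed weight are hypothetical and stand in for the paper's optimal weighting, which it does not reproduce. The sketch assumes both images have already been transformed to the common representation discussed above.

```python
import numpy as np

def fuse_images(radar: np.ndarray, flir: np.ndarray, w: float = 0.5) -> np.ndarray:
    """Fuse two coregistered single-channel images by a convex combination.

    Assumes `radar` and `flir` share the same shape and have already been
    rectified and registered to a common representation. `w` weights the
    radar image; (1 - w) weights the FLIR image. This is an illustrative
    placeholder, not the optimal fusion rule derived in the paper.
    """
    if radar.shape != flir.shape:
        raise ValueError("images must be coregistered to the same shape")
    return w * radar.astype(float) + (1.0 - w) * flir.astype(float)

# Small synthetic example: two 2x2 "images" with complementary content.
radar = np.array([[0.0, 1.0],
                  [1.0, 0.0]])
flir = np.array([[1.0, 0.0],
                 [0.0, 1.0]])
fused = fuse_images(radar, flir, w=0.25)
```

In practice the weight would vary per pixel (e.g. driven by local signal quality in each sensor), which is the kind of refinement an optimal fusion rule provides over a fixed global weight.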