Two different visual descriptions, provided by two image sensors (a radar and an infrared camera), contain information about the same scene. We want to associate them, using different fusion methods, in order to improve our knowledge of the scene.
Two approaches are described in this paper: navigation and recognition. In the first approach, the radar is the predominant sensor, and cartographic information about the area is used to guide the fusion process. In the second approach, regions of interest found in the radar image are used to extract features from the infrared image.
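The second approach can be illustrated with a minimal sketch. The paper does not specify how regions of interest are detected or which features are extracted; the thresholding, connected-component labeling, and intensity statistics below are illustrative assumptions, and the two images are assumed to be co-registered:

```python
import numpy as np
from scipy import ndimage

def roi_guided_features(radar_img, ir_img, thresh):
    """Detect regions of interest in the radar image (here: simple
    thresholding of bright returns, then connected components) and
    compute basic intensity statistics over the corresponding
    windows of the co-registered infrared image."""
    mask = radar_img > thresh                  # bright radar returns
    labels, n_regions = ndimage.label(mask)    # connected components
    features = []
    for sl in ndimage.find_objects(labels):    # bounding box of each ROI
        window = ir_img[sl]                    # same window in the IR image
        features.append({
            "bbox": sl,
            "ir_mean": float(window.mean()),
            "ir_std": float(window.std()),
        })
    return features

# Toy example: one bright radar blob over a warm infrared patch.
radar = np.zeros((8, 8)); radar[2:5, 2:5] = 10.0
ir = np.ones((8, 8));     ir[2:5, 2:5] = 5.0
feats = roi_guided_features(radar, ir, thresh=1.0)
```

In this toy case a single region is found, and its infrared statistics (mean intensity 5.0 over the blob) would then feed the fusion stage described above.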
To test our algorithm, we use a PtSi infrared camera (3-5 µm) with a 512×512 matrix and a millimeter-wave radar, both looking at the same area from an airplane, to detect objects such as buildings, roads, and fields. This work is the basis for further developments within an expert system that includes more complex image-processing objects.