The use of several images of various modalities has proved useful for solving problems arising in many
different applications of remote sensing. The main reason is that each image of a given modality conveys its own
specific part of the information, which can be integrated into a single model in order to improve our knowledge of a given area.
Given the large amount of available data, any integration task must be performed automatically. At the very first stage
of an automated integration process, a rather direct problem arises: given a region of interest within a first image, the
question is to find its equivalent within a second image acquired over the same scene but with a different modality.
This problem is difficult because the decision to match two regions must rely on the information common to
the two images, even if their modalities are quite different. In this paper, we propose a new method to
address this problem.
Image registration is a major issue in the field of remote sensing because it provides a support for integrating information from two or more images into a model that represents our knowledge of a given application. It may be used to compare the content of two segmented images captured by the same sensor at different times; it may also be used to extract and assemble information from images captured by various sensors corresponding to different modalities (optical, radar, etc.).
The registration of images from different modalities is a very difficult problem, not only because the data representations differ (e.g. vectors for multispectral images and scalar values for radar ones) but also, and especially, because an important part of the information differs from one image to another (e.g. hyperspectral signature versus radar response). Yet any registration process relies, explicitly or not, on matching the information common to the two images.
The problem we are interested in is to develop a generic approach that enables the registration of two images from different modalities when their spatial representations are related by a rigid transformation. This situation occurs frequently, and it requires a very robust and accurate registration process to provide the spatial correspondence.
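As a concrete reminder of the geometric model assumed here, a 2-D rigid transformation maps every point p to R(θ)p + t, where R(θ) is a rotation matrix and t a translation vector. The following sketch applies such a transformation with NumPy; it is an illustration of the model, not the paper's implementation:

```python
import numpy as np

def rigid_transform(points, theta, tx, ty):
    """Apply a 2-D rigid transformation (rotation by theta, then
    translation by (tx, ty)) to an (N, 2) array of points.
    Illustrative sketch of the geometric model only."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s],
                  [s,  c]])
    # (p @ R.T) computes R @ p for every row p of the array.
    return points @ R.T + np.array([tx, ty])
```

Because the transformation has only three parameters (θ, tx, ty), very few correct point correspondences suffice to determine it, which is what makes a robust estimation feasible.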
First, we show that this registration problem between images from different modalities can be reduced to a matching problem between binary images. Many approaches exist to tackle this problem, and we give an overview of them. However, we must take into account the specificity of the context in which we have to solve it: we must select those points of both images that carry the same information, and discard the others, in order to compute the pairing that yields the registration parameters.
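One simple way to picture such a reduction, purely as an illustration (the paper's own primitive is different and described later), is to threshold the gradient magnitude of each image, so that images of both modalities are mapped onto comparable binary edge maps. The function name and the quantile threshold below are assumptions:

```python
import numpy as np

def binary_edge_map(image, quantile=0.9):
    """Reduce a grayscale image to a binary edge map by keeping the
    strongest gradient-magnitude responses. A hypothetical sketch of
    one way to bring two modalities into a common binary domain."""
    gy, gx = np.gradient(image.astype(float))
    mag = np.hypot(gx, gy)
    # Keep only the top (1 - quantile) fraction of gradient responses.
    return mag >= np.quantile(mag, quantile)
```

Contours of physical structures (coastlines, roads, field boundaries) tend to appear in several modalities, which is why such binary reductions can expose the common part of the information; the difficulty, addressed in the paper, is that many binary points remain modality-specific and must not take part in the pairing.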
The approach we propose is a Hough-like method that separates relevant from non-relevant pairings, the Hough space being a representation of the rigid transformation parameters. In order to characterize the relevant items in each image, we propose a new primitive that provides a local representation of patterns in binary images. We give a complete description of this approach and present results for various types of images to register.
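The voting idea behind such Hough-like methods can be sketched as follows. Assuming a set of candidate pairings between points of the two binary images, each pair of pairings determines one rigid transform (θ, tx, ty); accumulating votes in a discretized parameter space and taking the peak separates the relevant pairings, which agree on a single transform, from the spurious ones, which scatter their votes. The bin sizes and the accumulator structure below are arbitrary illustrative choices, not the paper's actual parameterization:

```python
import numpy as np
from itertools import combinations
from collections import Counter

def hough_rigid(pairs, theta_step=np.deg2rad(5), t_step=2.0):
    """Hough-style voting over rigid-transform parameters.
    `pairs` is a list of candidate pairings ((x, y), (x', y')).
    Each pair of pairings casts one vote; the accumulator peak
    gives the dominant (theta, tx, ty). Illustrative sketch only."""
    acc = Counter()
    for (p1, q1), (p2, q2) in combinations(pairs, 2):
        dp = np.subtract(p2, p1)
        dq = np.subtract(q2, q1)
        # Two pairings determine the rotation angle...
        theta = np.arctan2(dq[1], dq[0]) - np.arctan2(dp[1], dp[0])
        c, s = np.cos(theta), np.sin(theta)
        # ...and then the translation follows from one pairing.
        tx = q1[0] - (c * p1[0] - s * p1[1])
        ty = q1[1] - (s * p1[0] + c * p1[1])
        acc[(round(theta / theta_step),
             round(tx / t_step),
             round(ty / t_step))] += 1
    (kt, kx, ky), _ = acc.most_common(1)[0]
    return kt * theta_step, kx * t_step, ky * t_step
```

Spurious pairings rarely agree on a common bin, so the peak is robust even when a substantial fraction of the candidate pairings is wrong, which matches the selectivity requirement stated above.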
In this paper, we propose a new formalism that makes it possible to take image textural features into account in a very robust and selective way. This approach also allows these features to be visualized, so that experts can efficiently supervise an image segmentation process based on texture analysis. The texture concept has been studied through different approaches. One of them, based on the notion of ordered local extrema, is very promising. Unfortunately, this approach does not handle texture directionality, and the mathematical morphology formalism on which it is based does not lend itself to extensions covering this feature. This led us to design a new formalism for texture representation that is able to include directionality features. It produces a representation of the relevant texture features in the form of a surface z = f(x,y). The visualization of this surface gives experts sufficient information to discriminate between different textures.