We propose a novel unsupervised learning algorithm that makes use of image fusion to efficiently cluster remote sensing data. Exploiting nonlinear structures in multimodal data, we devise a clustering algorithm based on a random walk in a fused feature space. Constructing the random walk on the fused space enforces that pixels are considered close only if they are close in both sensing modalities. The structure learned by this random walk is combined with density estimation to label all pixels. Spatial information may also be used to regularize the resulting clusterings. We compare the proposed method with several spectral methods for image fusion on both synthetic and real data.
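The fused random walk described above can be illustrated with a minimal numpy sketch, which is not the authors' implementation: Gaussian affinities are built per modality and fused by an elementwise product, so pixels remain connected only if they are close in both modalities; a t-step random walk diffuses this structure, and a simple kernel density estimate supplies cluster modes. All function names, the greedy mode-selection rule, and the parameter choices here are illustrative assumptions.

```python
import numpy as np

def gaussian_affinity(X, sigma):
    """Pairwise Gaussian kernel affinities for one modality's features."""
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-D2 / (2 * sigma ** 2))

def fused_random_walk_labels(X1, X2, K, t=3, sigma1=1.0, sigma2=1.0):
    # Elementwise product: strong affinity only when close in BOTH modalities.
    W = gaussian_affinity(X1, sigma1) * gaussian_affinity(X2, sigma2)
    P = W / W.sum(1, keepdims=True)          # random-walk transition matrix
    Pt = np.linalg.matrix_power(P, t)        # t-step diffusion of the walk
    density = W.sum(1)                       # crude kernel density estimate
    # Greedily pick K modes: high density, far apart in the diffused space.
    modes = [int(np.argmax(density))]
    for _ in range(K - 1):
        d = np.min([np.linalg.norm(Pt - Pt[m], axis=1) for m in modes], axis=0)
        modes.append(int(np.argmax(d * density)))
    # Label every point by its nearest mode in the diffused representation.
    dist = np.stack([np.linalg.norm(Pt - Pt[m], axis=1) for m in modes], 1)
    return np.argmin(dist, 1)
```

On two clusters that are separated in both modalities, the product affinity keeps them disconnected and the walk assigns each group to its own mode.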
In recent works, the first and third authors developed an automatic image registration algorithm based on a multiscale hybrid image decomposition with anisotropic shearlets and isotropic wavelets. This prototype showed strong performance, improving robustness over registration with wavelets alone. However, it imposed a strict hierarchy on the order in which shearlet and wavelet features were used in the registration process, and was implemented as an unintegrated mixture of MATLAB and C code. In this paper, we introduce a more agile model for generating features, in which a flexible and user-guided mix of shearlet and wavelet features is computed. Compared to the previous prototype, this method introduces flexibility into the order in which shearlet and wavelet features are used in the registration process. Moreover, the present algorithm is now fully coded in C, making it more efficient and portable than the mixed MATLAB and C prototype. We demonstrate the versatility and computational efficiency of this approach through registration experiments with the fully-integrated C algorithm. In particular, meaningful timing studies can now be performed to give a concrete analysis of the computational cost of the flexible feature extraction. Examples of synthetically warped and real multimodal images are analyzed.
We develop a method for superresolution based on anisotropic harmonic analysis. Our aim is to efficiently increase the resolution of an image without blurring or introducing artifacts, and without integrating additional information such as sub-pixel shifts of the same image at lower resolutions or multimodal images of the same scene. The approach developed in this article is based on analysis of the directional features present in the image to be superresolved. The harmonic-analytic technique of shearlets is implemented in order to efficiently capture the directional information present in the image, which is then used to provide smooth, accurate images at higher resolutions. Our algorithm is compared both to a recent anisotropic technique based on frame theory and circulant matrices [1] and to the standard superresolution method of bicubic interpolation. We evaluate our algorithm on synthetic test images as well as a hyperspectral image. Our results indicate the superior performance of anisotropic methods compared to standard bicubic interpolation.
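The bicubic baseline mentioned above can be sketched with the separable Keys cubic convolution kernel (a = -0.5), the weight function underlying standard bicubic interpolation. This is a generic illustration of the baseline only, not the shearlet-based method; the integer upsampling factor and border clamping are simplifying assumptions.

```python
import numpy as np

def keys_kernel(x, a=-0.5):
    """Keys cubic convolution kernel, the standard bicubic weight function."""
    x = abs(x)
    if x < 1:
        return (a + 2) * x**3 - (a + 3) * x**2 + 1
    if x < 2:
        return a * x**3 - 5 * a * x**2 + 8 * a * x - 4 * a
    return 0.0

def upsample_1d(signal, r):
    """Upsample a 1-D signal by an integer factor r with cubic convolution."""
    n = len(signal)
    out = np.empty(n * r)
    for j in range(n * r):
        x = j / r                        # position in input coordinates
        i0 = int(np.floor(x))
        acc = 0.0
        for i in range(i0 - 1, i0 + 3):  # 4-tap cubic neighborhood
            ic = min(max(i, 0), n - 1)   # clamp indices at the borders
            acc += signal[ic] * keys_kernel(x - i)
        out[j] = acc
    return out

def bicubic_upsample(img, r):
    """Separable bicubic upsampling: rows first, then columns."""
    rows = np.stack([upsample_1d(row, r) for row in img])
    return np.stack([upsample_1d(col, r) for col in rows.T]).T
```

The kernel interpolates exactly at the original sample positions and forms a partition of unity, so constant regions are preserved; artifacts arise near the directional edges that the shearlet method is designed to handle.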
We introduce a novel method for image fusion based on wavelet packets. Our ideas yield an approach for pan-sharpening low spatial resolution multispectral images with high spatial resolution panchromatic images. Two distinct fusion algorithms are investigated, which differ in how the wavelet packet coefficients are mixed. We evaluate our algorithm on images acquired from Landsat 7 ETM+, showing an improvement over results achieved with more basic wavelet algorithms. We also propose the use of spectral concentration during the wavelet packet pan-sharpening process to reduce the dimensionality of the data.
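The coefficient-mixing idea can be sketched with a single-level 2-D Haar transform standing in for the full wavelet packet decomposition. The mixing rule shown, replacing all detail subbands of a multispectral band with those of the panchromatic image, is one simple choice for illustration and is not necessarily either of the two algorithms studied; the function names and the assumption of co-registered, equal-size, even-dimension inputs are ours.

```python
import numpy as np

def haar2d(img):
    """Single-level 2-D Haar transform (assumes even dimensions)."""
    a = (img[0::2, 0::2] + img[0::2, 1::2] + img[1::2, 0::2] + img[1::2, 1::2]) / 4
    h = (img[0::2, 0::2] + img[0::2, 1::2] - img[1::2, 0::2] - img[1::2, 1::2]) / 4
    v = (img[0::2, 0::2] - img[0::2, 1::2] + img[1::2, 0::2] - img[1::2, 1::2]) / 4
    d = (img[0::2, 0::2] - img[0::2, 1::2] - img[1::2, 0::2] + img[1::2, 1::2]) / 4
    return a, h, v, d

def ihaar2d(a, h, v, d):
    """Inverse of haar2d; reconstructs the image exactly."""
    out = np.empty((2 * a.shape[0], 2 * a.shape[1]))
    out[0::2, 0::2] = a + h + v + d
    out[0::2, 1::2] = a + h - v - d
    out[1::2, 0::2] = a - h + v - d
    out[1::2, 1::2] = a - h - v + d
    return out

def pan_sharpen(ms_band, pan):
    """Keep the multispectral approximation; substitute panchromatic detail."""
    a_ms, _, _, _ = haar2d(ms_band)
    _, h_p, v_p, d_p = haar2d(pan)
    return ihaar2d(a_ms, h_p, v_p, d_p)
```

Wavelet packets extend this by recursively splitting the detail subbands as well, giving a finer vocabulary of coefficients to mix between the two images.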