## 1. Introduction

Over the past 25 years, the wavelet has led to revolutionary achievements in the efficient encoding of piecewise regular signals. Its ability to detect singularities much more efficiently than the Fourier transform^{1} has contributed to its remarkable success. The wavelet provides an optimal sparse approximation of a wide class of signals and brings the continuous-domain transform to the digital setting through a fast algorithmic implementation.

Nevertheless, the wavelet fails when dealing with singularities along curves, i.e., edges of ${C}^{2}$ regularity, since reconstructing images with these anisotropic features can produce undesirable artifacts. A wavelet supports only diagonal, vertical, and horizontal directions, and this limited directional sensitivity leads to inaccurate results when dealing with multidimensional data. To overcome this drawback, several forms of directionality have been introduced, such as the steerable pyramid,^{2} the directional filter banks (DFBs),^{3} and the two-dimensional (2-D) directional wavelet.^{4} A complex version of the wavelet was also proposed in Ref. 5 to tackle the directionality limitation by representing ${C}^{2}$ edges more precisely. In fact, the complex wavelet offers shift invariance and yields, at each scale, six directional sub-bands oriented at $\pm 15$, $\pm 45$, and $\pm 75\text{\hspace{0.17em}}\mathrm{deg}$ (orientations that the ordinary wavelet cannot discriminate).

However, it was proven through several applications that the aforementioned methods cannot effectively represent image singularities and anisotropic features. A real transition occurred once Candès and Donoho^{6} introduced the second generation of the curvelet transform (CT) in 2004. Since then, other multiscale geometric decompositions (MGD) have been introduced to counteract the CT's main weakness, namely its redundancy. In this work, we study some of these MGD and present the characteristics behind their success. We are also interested in their application to the field of remote sensing (RS).

This field has drawn great interest with the development of multiple types of sensors, leading to a huge, heterogeneous amount of data. To overcome the curse of dimensionality while providing a powerful tool for environmental monitoring, planning, and decision-making, several techniques have been employed. Convolutional neural networks and deep learning^{7}^{–}^{9} represent a trend in this field thanks to their ability to automatically discover relevant contextual features in image categorization problems.^{10} In the same context, sparse representation (SR),^{11}^{–}^{13} total variation,^{14}^{,}^{15} and machine learning^{16}^{,}^{17} techniques, to name a few, are also investigated for different purposes, such as enhancing spatial resolution, generating explanations, and extracting knowledge from the images. MGD, the scope of our paper, have been extensively used in several domains, namely pattern recognition^{18}^{,}^{19} and computer vision.^{20}^{,}^{21} Their use has also been extended to the RS field, where we need to locate the edges of roads, buildings, rivers, and forests to detect a potential change. In addition, MGD provide an analytical treatment of a scene by decomposing it into high and low frequencies. This essentially helps in fusing information from different sensors and in revealing characteristics that remain hidden when only one sensor image is used. Therefore, in this work, we aim to draw attention to the importance and contributions of MGD in the RS field. To the authors' knowledge, several reviews have described MGD, but few of them have focused on their particular use in RS.

This paper is organized as follows: Sec. 2 presents an overview of MGD, Sec. 3 describes their characteristics, and Sec. 4 reviews how they have been used in the field of RS. We close the paper with a discussion and a conclusion.

## 2. Overview of Multiscale Geometric Decompositions

The notion of scale was introduced before the emergence of the wavelet to better localize objects in an observed image. Methods such as windowing and the introduction of scale in the Fourier transform,^{22}^{,}^{23} the Laplacian pyramid (LP), and derivatives of Gaussian (DoG) laid the groundwork for the ideas that followed a few years later. The definition of the continuous wavelet generalized these earlier works on scale incorporation, since the newborn transform offered localization in both time and frequency.

For numerical computation needs, the wavelet was discretized using multiresolution analysis (MRA). The MRA consists of projecting a signal successively onto subspaces ${V}_{j}$ (as $j\in \mathbb{Z}$ increases, the approximation becomes coarser), yielding both approximation and detail information (the detail is computed from two successive approximations). In other words, MRA stipulates that any signal, in our case an image, can be constructed iteratively, with different characteristics exhibited at every scale. Figure 1 shows how smooth regions are emphasized in the coarsest scales, while contours are more likely to be salient in the finest ones. Taking advantage of the MRA, the wavelet transform can address point-like singularities and carries less redundant information across scales than the LP and DoG. Besides, the wavelet is separable, which means that its 2-D atoms can be written as the product of two 1-D functions. Precisely, each atom ${\psi}_{j,n}^{k}(x)={2}^{-j}{\psi}^{k}({2}^{-j}x-n)$ is built from one of three tensor-product 2-D wavelets:

• ${\psi}^{V}(x)=\psi ({x}_{1})\phi ({x}_{2})$

• ${\psi}^{H}(x)=\phi ({x}_{1})\psi ({x}_{2})$

• ${\psi}^{D}(x)=\psi ({x}_{1})\psi ({x}_{2})$

where $x=({x}_{1},{x}_{2})\in {\mathbb{R}}^{2}$ refers to the spatial coordinates of a given point $x$, ${2}^{-j}$ refers to the dyadic scalings, $j\in \mathbb{Z}$ to the scale, $n\in \mathbb{N}$ is the translation factor, $\phi $ is the 1-D orthogonal scaling function (acting as a low-pass filter), $\psi $ is the wavelet function (acting as a band-pass filter), and $k$ stands for one of the only three directions supported by wavelets (horizontal, vertical, and diagonal).
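To make this separability concrete, here is a hedged numpy sketch of one level of a 2-D Haar transform (the Haar filters, function names, and toy edge image are our own illustration, not from the original text); the three detail bands arise exactly as the tensor products listed above:

```python
import numpy as np

# One level of a separable 2-D Haar transform: filter and decimate along
# one axis with the 1-D scaling (phi, averages) or wavelet (psi,
# differences) filter, then along the other, yielding the approximation
# phi*phi plus the three tensor-product detail bands.
def haar_split(a, axis):
    a = np.moveaxis(a, axis, 0)
    low = (a[0::2] + a[1::2]) / np.sqrt(2)   # phi: low-pass
    high = (a[0::2] - a[1::2]) / np.sqrt(2)  # psi: band-pass
    return np.moveaxis(low, 0, axis), np.moveaxis(high, 0, axis)

def haar2d(img):
    lo, hi = haar_split(img, axis=0)         # filter the rows
    LL, LH = haar_split(lo, axis=1)          # phi(x1)phi(x2), phi(x1)psi(x2)
    HL, HH = haar_split(hi, axis=1)          # psi(x1)phi(x2), psi(x1)psi(x2)
    return LL, LH, HL, HH

img = np.zeros((8, 8))
img[:, 3:] = 1.0                             # a single vertical edge
LL, LH, HL, HH = haar2d(img)
```

Only the band that is band-pass across the edge direction responds; the other two detail bands vanish, which is precisely the limited directional selectivity discussed above.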

The MGD were introduced to further improve the aspect of directionality limitation, whether by using the wavelet itself combined with other treatments or by defining rules for wavelet functions. These methods can be divided into two different families:

• the adaptive family, which follows a Lagrangian approach, such as the bandlet, and

• the nonadaptive family, which follows an Eulerian approach, such as the CT.

The first family builds a structure adapted to the data. For example, in order to locate singularities, the bandlet decomposes the image using wavelets combined with dyadic decompositions; each square of the resulting segmentation contains one direction, i.e., a unique piecewise singularity. The second family, in contrast, tries to improve the wavelet by imposing additional mathematical rules on the wavelet functions, which brings more accuracy in detecting image content. A debate arose over the strength of these families and their abilities in image understanding: smooth regions, contours, and texture. According to Ref. 24, the best representation remains an open question, since it is data dependent.

In this section, we present some of the MGD and describe their main characteristics and differences. In this paper, we are particularly interested in the MGD that are applied in the RS field. Figure 2 illustrates a map of MGD elaborated based on Ref. 24; the names of the methods that will be discussed are written in bold, so that the reader has a clear view of what is treated in the coming subsections.

### 2.1. Curvelet Transform

The first generation of the CT was proposed by Donoho and Duncan.^{25} This transform extended the ridgelet (built on 1-D wavelets to detect line-segment singularities)^{26} by switching from a global-scale analysis to a fine-scale one. The idea was to decompose the image into a set of subimages and to perform the ridgelet transform on each one of them.

This first CT did not show great results because of the failure of ridgelets in capturing ${C}^{2}$ edges. A second generation of the CT was then proposed, which is less redundant and faster than the first.^{27} Given a function $f\in {L}^{2}({\mathbb{R}}^{2})$, the coefficients $\mathcal{C}(j,l,k)$ of the continuous CT are defined by Eq. (1):

## (1)

$$\mathcal{C}(j,l,k)=\langle f,{\varphi}_{j,l,k}\rangle ={\int}_{{\mathbb{R}}^{2}}f(x){\varphi}_{j,l,k}(x)\mathrm{d}x.$$

CT's atoms are defined by a combination of a translation ${x}_{k}^{(j,l)}$ and a rotation ${R}_{{\theta}_{l}}$ of an angle ${\theta}_{l}$ applied to a mother atom ${\varphi}_{j}$, as in Eq. (2):

## (2)

$${\varphi}_{j,l,k}(x)={\varphi}_{j}[{R}_{{\theta}_{l}}(x-{x}_{k}^{(j,l)})].$$

The rotation by an angle ${\theta}_{l}$ is defined as in Eq. (3):

## (3)

$${R}_{{\theta}_{l}}=\left(\begin{array}{cc}\mathrm{cos}\,{\theta}_{l}& \mathrm{sin}\,{\theta}_{l}\\ -\mathrm{sin}\,{\theta}_{l}& \mathrm{cos}\,{\theta}_{l}\end{array}\right)\quad \text{and}\quad {R}_{{\theta}_{l}}^{-1}={R}_{-{\theta}_{l}}.$$

The atom ${\varphi}_{j}$ of the CT is defined in the frequency domain by means of its Fourier transform $\widehat{{\varphi}_{j}}(\omega )={U}_{j}(r,\theta )$, written in polar coordinates as follows:

## (4)

$${U}_{j}(r,\theta )={2}^{-3j/4}\,{W}_{j}({2}^{-j}r)\,{V}_{j}\!\left(\frac{{2}^{\lfloor j/2\rfloor}\theta}{2\pi}\right),$$

where ${W}_{j}$ is a radial window and ${V}_{j}$ an angular window. To discretize the CT, its authors propose switching from rotated tiling to angular tiling and from concentric circles to concentric squares, as illustrated in Fig. 3.
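The rotation in Eq. (3) can be sanity-checked numerically; a minimal numpy sketch (the function name `R` is ours):

```python
import numpy as np

def R(theta):
    # Planar rotation matrix of Eq. (3); undoing a rotation by theta
    # amounts to rotating by -theta.
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, s], [-s, c]])

theta = np.pi / 6
assert np.allclose(np.linalg.inv(R(theta)), R(-theta))  # R^{-1} = R_{-theta}
assert np.allclose(R(theta) @ R(-theta), np.eye(2))     # composition is identity
```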

### 2.2. Contourlet Transform

The contourlet is considered an extension of Candès and Donoho's work that improves on the isotropy of the wavelet representation.^{28} This transform aims at obtaining the same frequency-space tiling as the CT without the need to move from the continuous to the discrete domain. To do so, the authors propose a nonseparable decomposition scheme that applies a DFB combined with an LP [as shown in Fig. 4(a)], and they also propose to decrease the highly redundant information in its sub-bands.

The DFB is designed to capture the anisotropic features in the high frequencies of the image by allowing different numbers of directions at each scale, while the low frequencies are processed by the LP. In fact, as presented in Ref. 28, the LP is defined based on orthogonal filters and downsampling by 2 at each iteration, similarly to the wavelet. The low-pass synthesis filter $g$, used for the LP processing, defines a unique scaling function $\phi (t)\in {L}^{2}({\mathbb{R}}^{2})$ that satisfies Eq. (5):

## (5)

$$\phi (t)=2\sum _{n\in {\mathbb{Z}}^{2}}g[n]\phi (2t-n),\quad \text{and let}\quad {\phi}_{j,n}(t)={2}^{-j}\phi \left(\frac{t-{2}^{j}n}{{2}^{j}}\right).$$

Moreover, the DFB offers localization and directionality properties thanks to the family defined in Eq. (6):

## (6)

$$\{{d}_{k}^{(l)}[n-{S}_{k}^{(l)}m]\},\quad 0\le k<{2}^{l},\quad m\in {\mathbb{Z}}^{2},$$

where ${d}_{k}^{(l)}$ are the directional filters and ${S}_{k}^{(l)}$ the associated sampling matrices. The nonsubsampled contourlet transform^{29} discards the sampling step and is characterized by being shift-invariant and by preserving the multiresolution criteria.
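To make the LP half of the scheme concrete, here is a hedged one-stage sketch in numpy; the 2x2 box kernel and nearest-neighbor upsampling stand in for the actual orthogonal filter $g$ of Eq. (5) and are our own simplification:

```python
import numpy as np

# One Laplacian-pyramid stage: blur-and-decimate for the coarse image,
# then keep the prediction residual as the band-pass detail.  The
# residual makes reconstruction exact by construction.
def lp_stage(img):
    coarse = 0.25 * (img[0::2, 0::2] + img[1::2, 0::2]
                     + img[0::2, 1::2] + img[1::2, 1::2])  # blur + downsample by 2
    predicted = np.kron(coarse, np.ones((2, 2)))           # crude upsampling
    detail = img - predicted                               # band-pass residual
    return coarse, detail

def lp_reconstruct(coarse, detail):
    return np.kron(coarse, np.ones((2, 2))) + detail

rng = np.random.default_rng(3)
img = rng.normal(size=(8, 8))
coarse, detail = lp_stage(img)
assert np.allclose(lp_reconstruct(coarse, detail), img)    # perfect reconstruction
```

In the contourlet, the `detail` band would then be fed to the DFB to split it into directional sub-bands.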

### 2.3. Shearlet Transform

Shearlets^{30} provide a rich mathematical structure and are optimally sparse in representing the edges within an image, thanks to the fact that they form a tight Parseval frame across scales and directions. They are distinguished from CTs by being directly constructed in the discrete domain, which gives them the ability to provide a more efficient multiresolution representation of the geometry.^{31} Besides, the shearlet outperforms the contourlet by substituting the directional filter with a shear filter, which helps break the limitation on directionalities. In the continuous domain, in dimension 2, the shearlets form an affine system with composite dilations, as in Eq. (7):

## (7)

$$\{{\psi}_{j,l,k}(x)={|\mathrm{det}\,A|}^{j/2}\psi ({B}^{l}{A}^{j}x-k):j,l\in \mathbb{Z},\,k\in {\mathbb{Z}}^{2}\},$$

where $A$ is an anisotropic dilation matrix and $B$ is a shear matrix.

In fact, the discrete shearlets are then constructed in the frequency domain: for a given point $\xi =({\xi}_{1},{\xi}_{2})\in \widehat{{\mathbb{R}}^{2}}$, the scales are indexed by $j\ge 0$ and the shears by $l=-{2}^{j},\dots ,{2}^{j}-1$.
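The composite dilations mentioned above can be illustrated with the matrices commonly used in the shearlet literature (an assumption of ours; the review does not fix $A$ and $B$): an anisotropic, parabolic dilation $A$ and an area-preserving shear $B$ replace the curvelet's rotation.

```python
import numpy as np

A = np.array([[4.0, 0.0], [0.0, 2.0]])  # anisotropic (parabolic) dilation
B = np.array([[1.0, 1.0], [0.0, 1.0]])  # shear; det B = 1, so area-preserving

def composite_dilation(j, l):
    """Matrix B^l A^j that maps the mother shearlet's frequency support."""
    return np.linalg.matrix_power(B, l) @ np.linalg.matrix_power(A, j)

# Shearing preserves measure: at a fixed scale j, every directional atom
# occupies a frequency tile of the same area, det(B^l A^j) = det(A)^j.
M = composite_dilation(j=1, l=1)
assert np.isclose(np.linalg.det(M), np.linalg.det(A))
```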

### 2.4. Wedgelet Transform

The wedgelet was proposed by Donoho et al.^{34} This transform decomposes the image iteratively into piecewise constant functions (the class of horizon functions). The first step consists of a recursive dyadic decomposition. In each quadtree leaf, the wedgelet then searches for an "edgel" that drives the further dyadic decomposition of the leaf. Figure 6(a) shows a wedgelet decomposition of the "cameraman" image and its dyadic decomposition.

The intuitive evolution of this transform is to replace the constant horizon functions with polynomial functions in order to capture more irregular singularities. This was indeed proposed with the surflet.^{36} Figure 6(b) illustrates how the dyadic decomposition changes when polynomial functions are used instead of piecewise constant ones.
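The recursive dyadic split can be sketched as follows, in a hedged toy version that quarters a block whenever a constant fit is poor; the edgel search inside each leaf, the tolerance, and the test image are our own simplifications for illustration:

```python
import numpy as np

# Recursively quarter a square block until a constant ("flat") fit is
# adequate; real wedgelets additionally fit an edgel inside each leaf.
def dyadic_partition(img, x, y, size, tol, leaves):
    block = img[y:y + size, x:x + size]
    if size == 1 or block.var() <= tol:       # constant fit is good enough
        leaves.append((x, y, size))
        return
    h = size // 2                             # otherwise quarter the block
    for dx, dy in ((0, 0), (h, 0), (0, h), (h, h)):
        dyadic_partition(img, x + dx, y + dy, h, tol, leaves)

img = np.zeros((8, 8))
img[:, 3:] = 1.0                              # one off-grid vertical edge
leaves = []
dyadic_partition(img, 0, 0, 8, tol=1e-6, leaves=leaves)
```

Leaves stay large in the constant regions and shrink to single pixels only along the edge, which is what makes the representation adaptive.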

### 2.5. Bandlet Transform

The first generation of the bandlet transform was proposed by Le Pennec and Mallat.^{37} The main idea was to exploit the wavelet transform while overcoming its failure to detect anisotropic regularities, such as ${C}^{2}$ curvatures. According to Ref. 37, the bandlet first performs the grassfire algorithm^{38} to detect edge regions and then applies a deformation so that the edges fit the wavelet orientations (horizontal, vertical, or diagonal). This version was redundant and failed to ensure good results, mainly in compression applications.^{35}

Thus, a second generation of the bandlet was proposed. Following the same adaptive approach, this version can be seen as an anisotropic wavelet warped along the geometric flow.^{39} It differs from its predecessor by being based on a quadtree segmentation algorithm. The idea is first to estimate the irregularities in the image as a vector field generated using a polynomial estimation of the edges. A wavelet transform is then applied, after which a dyadic segmentation is performed in the wavelet domain. This way, the wavelet coefficients are computed along the optimal direction found in each square. The bandlet can also be seen as a polynomial approximation in an orthogonal Alpert basis; the Alpert transform is simply a warped wavelet transform adapted to a given irregular form, as illustrated in Fig. 7. The bandlet coefficients are computed following Eq. (8).

## 3. Characteristics of the Multiscale Geometric Decompositions

There is no doubt that the wavelet was a genuine processing tool that led to major advances in natural image representation and understanding. Since then, many transforms have been defined to overcome the wavelet's shortcomings by adding directions, shift-invariance criteria, and so on. In Table 1, we summarize the characteristics of the different transforms mentioned in this review. We have separated the MGD into two families. Their difference resides in the fact that the adaptive family constructs a special, data-driven decomposition that can fit each dataset, rather than using a predefined system like the nonadaptive family. However, this adaptive form is very intensive in terms of numerical computation.^{1}

## Table 1

MGD characteristics.

**Nonadaptive family**

| Name | Authors | Description | Conception |
|---|---|---|---|
| CT | Emmanuel Candès and David Donoho | Can characterize ${C}^{2}$ singularities; cannot characterize oscillating textures; directionality sensitive | Evolution of the ridgelet transform; redundant |
| Contourlet | Minh N. Do and Martin Vetterli | Can characterize ${C}^{2}$ singularities; cannot characterize oscillating textures; directionality sensitive; sampling discards low-frequency information | Evolution of the isotropic wavelet transform; use of nonseparable filters; filter bank-based transform |
| Shearlet | Demetrio Labate, Wang-Q Lim, and Glenn Easley | Evolution of the CT; more directionalities than the CT; cannot characterize oscillating textures | Substitutes the rotation and anisotropic stretch with anisotropic shears |

**Adaptive family**

| Name | Authors | Description | Conception |
|---|---|---|---|
| Wedgelet | David Donoho | Uses a wavelet quadtree segmentation and the grassfire algorithm to detect geometric singularities | A quadtree leaf supports only a horizon function |
| Bandlet | Peyré, Le Pennec, and Mallat | Extends the isotropic wavelet quadtree segmentation; the second generation is less redundant than the first | Wraps wavelet coefficients along geometric flows |

But, in general, all these transforms exhibit interesting parameters that help draw attention to different details within an image.

• **Scale**: MGD locate, in each scale, a specific feature (edges in the finest scales, smooth content in the coarsest). Keeping some scales while discarding others can help remove undesirable information. Indeed, discarding disturbances that appear in the finest scales helps provide a better representation of the image.

• **Direction**: Likewise, directions reveal features in the image. Thanks to this parameter, we can detect the main axes by locating the accumulations of strong coefficient magnitudes and consequently enhance some image structures. Figure 8 illustrates a rendering of the content of the scales and the supported directions of some MGD.

• **Magnitude**: In the transform domain, the largest magnitude values can be interpreted as the most influential parts of the processed image, which is why the magnitude is useful in feature extraction applications. In fact, since the MGD allow zooming in and out, the content of the image, especially texture, is not affected by the size of the neighborhood. Several first-order and second-order statistics can be computed: mean, standard deviation (SD), energy, entropy, contrast, sum mean, variance, and so on. Consequently, several descriptors can be extracted and dimensionally reduced without affecting their discriminative power.^{42} Another interesting use of the magnitude is detecting spatiotemporal directions, as in Ref. 43.
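As an illustration of such magnitude statistics, here is a hedged sketch that pools four first-order statistics per sub-band into a descriptor; the sub-bands are random stand-ins and the function name is ours:

```python
import numpy as np

# Pool mean, SD, energy, and entropy of a sub-band's coefficient
# magnitudes; concatenating over sub-bands yields a texture descriptor.
def subband_stats(coeffs):
    mag = np.abs(coeffs).ravel()
    energy = float(np.sum(mag ** 2))
    total = mag.sum()
    p = mag / total if total > 0 else np.full_like(mag, 1.0 / mag.size)
    entropy = float(-np.sum(p[p > 0] * np.log2(p[p > 0])))
    return [float(mag.mean()), float(mag.std()), energy, entropy]

rng = np.random.default_rng(0)
subbands = [rng.normal(size=(16, 16)) for _ in range(3)]  # stand-in sub-bands
feature_vector = []
for band in subbands:
    feature_vector.extend(subband_stats(band))
assert len(feature_vector) == 12   # 4 statistics x 3 sub-bands
```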

• **Frames**: CT, shearlet, and contourlet are sets of functions forming a tight frame. This means that, for a given sequence ${({\varphi}_{i})}_{i\in I}$ in a Hilbert space $\mathcal{H}$, where $I$ is a countable indexing set and $i$ an element of $I$, there are two constants $0<A\le B$ such that, for all $x$ in $\mathcal{H}$:

## (9)

$$A{\Vert x\Vert}^{2}\le \sum _{i\in I}{|\langle x,{\varphi}_{i}\rangle |}^{2}\le B{\Vert x\Vert}^{2}.$$

When $A=B$, this is called a tight frame, and if $A=B=1$, a Parseval frame. The advantage of having such a function system is the ability to represent a signal as a linear combination of the frame vectors in several ways and to reconstruct it by means of inner products [Eq. (10)]. Let $S={T}^{*}T$, where $T$ is the analysis operator and ${T}^{*}$ the synthesis operator of a given frame; $T$ and ${T}^{*}$ are explained in the "sparsity" paragraph below.

The authors of the bandlet have also proposed its tight-frame version, called the grouplet.^{44} For further details on this frame version, we refer the interested reader to Ref. 45.
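A hedged numerical illustration of a Parseval frame ($A=B=1$) in ${\mathbb{R}}^{2}$: three unit vectors 120 deg apart, scaled by $\sqrt{2/3}$ (a classical textbook example, not taken from this review). Analysis followed by synthesis reproduces $x$ exactly, and the coefficients preserve its energy:

```python
import numpy as np

# Rows of `frame` are the frame vectors phi_i; three redundant vectors
# span R^2, yet S = T*T is the identity because the frame is Parseval.
angles = 2 * np.pi * np.arange(3) / 3
frame = np.sqrt(2 / 3) * np.stack([np.cos(angles), np.sin(angles)], axis=1)

x = np.array([1.3, -0.7])
coeffs = frame @ x                      # analysis operator T
x_rec = frame.T @ coeffs                # synthesis operator T*
assert np.allclose(x_rec, x)                      # S = T*T = identity
assert np.isclose(np.sum(coeffs ** 2), x @ x)     # Parseval: energies match
```

Redundancy shows in the fact that three coefficients describe a 2-D vector: many coefficient sequences synthesize the same $x$, but the inner-product coefficients always work.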

• **Redundancy/overcompleteness**: Although orthonormal/orthogonal bases have interesting properties, they also have several weaknesses: they are translation and rotation sensitive, especially when dealing with multidimensional data. That is why MGD such as curvelets, shearlets, and contourlets are redundant. This means that the vectors of their systems are linearly dependent and exceed the dimension of their space; a given vector therefore has an infinite set of representations in such a system. In fact, the authors of these transforms wanted to move "cautiously" to overcompleteness without losing the interesting features of the orthonormal/orthogonal bases.

• **Sparsity**: The aforementioned MGD have two main operators: the analysis $T$ and the synthesis ${T}^{*}$. Given a signal $x$, MGD analyze $x$ by decomposing it into a set of coefficients:

## (11)

$$T:\mathcal{H}\to {\ell}^{2}(I),\quad x\mapsto {(\langle x,{\varphi}_{i}\rangle )}_{i\in I}.$$

The dual operator, the synthesis, helps recover a given $x$:

## (12)

$${T}^{*}:{\ell}^{2}(I)\to \mathcal{H},\quad {({c}_{i})}_{i\in I}\mapsto \sum _{i\in I}{c}_{i}{\varphi}_{i}.$$

Curvelets, contourlets, and shearlets use fast algorithms to compute Eq. (11). A suitable choice of representation system enables an SR, which means that a given signal $x$ can be expressed in a low-dimensional space. This representation is widely used in several approaches, such as compressed sensing.^{46} Figure 9 shows the huge difference between representing an image using gray-level histograms and representing it sparsely using curvelets (as an example of the effect of using MGD).
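The analysis/synthesis pair of Eqs. (11) and (12) can be sketched with a one-level 2-D Haar system (our choice of orthonormal basis for the sketch; not the fast curvelet or shearlet algorithms themselves):

```python
import numpy as np

def analysis(img):                    # T: image -> coefficients
    s2 = np.sqrt(2)
    lo = (img[0::2] + img[1::2]) / s2
    hi = (img[0::2] - img[1::2]) / s2
    rows = np.vstack([lo, hi])        # row step
    lo = (rows[:, 0::2] + rows[:, 1::2]) / s2
    hi = (rows[:, 0::2] - rows[:, 1::2]) / s2
    return np.hstack([lo, hi])        # column step

def synthesis(c):                     # T*: coefficients -> image
    s2 = np.sqrt(2)
    h = c.shape[1] // 2
    rows = np.empty_like(c)
    rows[:, 0::2] = (c[:, :h] + c[:, h:]) / s2
    rows[:, 1::2] = (c[:, :h] - c[:, h:]) / s2
    h = c.shape[0] // 2
    img = np.empty_like(c)
    img[0::2] = (rows[:h] + rows[h:]) / s2
    img[1::2] = (rows[:h] - rows[h:]) / s2
    return img

img = np.zeros((8, 8))
img[1:6, 1:6] = 1.0                   # piecewise-constant square
c = analysis(img)
sparse_c = np.where(np.abs(c) >= 1.0, c, 0.0)   # keep only large coefficients
err = np.linalg.norm(synthesis(sparse_c) - img)
assert np.allclose(synthesis(c), img)             # perfect reconstruction
assert np.isclose(err, np.linalg.norm(c - sparse_c))
```

Because the system is orthonormal, the reconstruction error after discarding coefficients equals exactly the norm of what was discarded, which is why keeping only the few large coefficients of a piecewise-smooth image loses so little.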

## 4. Use of Multiscale Geometric Decompositions in Remote Sensing Field

RS images are an efficient tool for analyzing climate change and land-cover dynamics. In the last decades, a great number of new satellites have been launched, providing tremendous data availability with improved spatial and spectral resolutions. This has helped enhance our understanding and control of our surroundings. Through the various RS applications (image fusion, enhancement, super-resolution, and so on), we are capable of monitoring the Earth's surface, predicting changes, and preventing disasters.

However, the management of RS images is a challenging task. As the amount of data grows dramatically, it becomes more and more complex to extract knowledge from such images. RS data are not only heterogeneous but also arrive at a rapid velocity and generally require several complicated corrections. That is why researchers seek new approaches to replace the traditional data processing algorithms.

MGD have been used in an attempt to solve some of the aforementioned problems. In fact, being able to represent the datasets in a sparse way while accurately describing the objects in the image makes the MGD an inspirational tool to exploit in the RS field. In this section, we are especially interested in the use of the wavelet, CT, contourlet, shearlet, and bandlet.

### 4.1. Classification

The classification aims at recognizing the class label of a given study area with the aid of ground truth data. Conventional classification algorithms exploit essentially the spectral information of the images.^{48} Nevertheless, it has been proven that incorporating spatial information in the classification process, i.e., taking the contextual information into account, helps significantly in boosting the obtained accuracies. The spectral–spatial combination is possible using MGD, which not only provide a frequency representation but also help in establishing correlations between neighbors in the spatial domain.

From the studied cases, we can separate the MGD use in classification in two categories:

• Decomposing the whole image and using its frequency details for classification

• Combining the MGD frequency details with other features obtained using other methods.

We studied Ref. 49, where the wavelet is used for feature extraction. Combined with fuzzy hybridization, the wavelet coefficients are exploited for classification. Three multispectral (MS) RS images, two from Indian Remote Sensing satellites and one from SPOT, are decomposed band-by-band. This technique essentially helps account for contextual information. The overall classification accuracy ($\text{mean}=79.15\%$) against the ground truth of the three images showed that the biorthogonal wavelet outperformed the other wavelet functions.

In Ref. 50, the authors proposed a CT-based approach to extract features from SAR images that can effectively distinguish dynamic ice from consolidated ice. The representation of dynamic ice contains curves in different locations with different widths and lengths and is larger than that of consolidated ice. The proposed approach consists of extracting patches from the transform-domain sub-bands according to a specific size, which increases with the CT's scale in order to capture more information about the curves. Once the size of the patches is computed, a feature vector is calculated by Eq. (13):

where ${P}_{L}$ is the patch of size $L$, ${F}_{i,j}$ denotes the mean of the CT coefficients at scale $i$, and ${\theta}_{i}\in \mathbb{N}$ is the total number of orientations at scale $i$.

Using SVM along with different features, the experimental results proved that the CT-based feature extraction is effective for classification, since the dynamic ice area is classified more accurately. In fact, thanks to the probability density function (PDF), we can clearly see that the components representing the different types of ice can be distinguished and thus separated (Fig. 10).

Nevertheless, this recognition quality loses its precision in the CT's finer scales.

The contourlet is used in Ref. 48, where a performance comparison between the wavelet and the contourlet is discussed. Based on the fact that the wavelet suffers from a shortage of directionality and that the contourlet provides directions only in the high-frequency coefficients, a wavelet-based contourlet transform (WBCT) is proposed and applied to linear imaging self-scanner (LISS) II, III, and IV images. Figure 11 illustrates the proposed transform.

After decomposing the image using the WBCT, the wavelet, and the contourlet, PCA is applied to the obtained features in order to reduce their dimensionality while removing redundancy and preserving the most discriminant ones. Then a mean vector and a covariance matrix are calculated. Finally, the Gaussian-kernel fuzzy C-means classifier is applied, and the obtained overall classification accuracy shows that the proposed decomposition (overall accuracy on $\mathrm{LISS}\text{\hspace{0.17em}}\mathrm{IV}=89.57\%$) outperforms the wavelet-based (overall accuracy of $87.62\%$) and contourlet-based ($88.38\%$) feature extraction methods.

As we mentioned earlier, the second category combines MGD sub-bands, as features describing edges or textures, with features obtained using other methods. For example, the authors of Ref. 51 suggest using the CT jointly with morphological component analysis (MCA) to improve RS classification on hyperspectral AVIRIS and AirSAR images. The idea is to separate a given image into two components, as in Eq. (14):

## (14)

$$y={y}_{\mathrm{s}}+{y}_{\mathrm{t}}+n,$$

where ${y}_{\mathrm{s}}$ is the smooth component, ${y}_{\mathrm{t}}$ the texture component, and $n$ the noise. Here, the CT is used to construct a dictionary ${A}_{\mathrm{s}}$ representing the smooth component, and a Gabor wavelet is used to construct a dictionary ${A}_{\mathrm{t}}$ representing the texture component. The authors propose to estimate the components ${y}_{\mathrm{s}}$ and ${y}_{\mathrm{t}}$ by solving Eq. (15) using the SunSAL algorithm.^{52}

## (15)

$$\langle \widehat{{y}_{\mathrm{s}}},\widehat{{y}_{\mathrm{t}}}\rangle ={\mathrm{argmin}}_{{y}_{\mathrm{s}},{y}_{\mathrm{t}}}\,\frac{1}{2}{\Vert y-{y}_{\mathrm{s}}-{y}_{\mathrm{t}}\Vert}_{2}^{2}+{\lambda}_{1}\Vert {T}_{\mathrm{s}}{y}_{\mathrm{s}}\Vert +{\lambda}_{2}\Vert {T}_{\mathrm{t}}{y}_{\mathrm{t}}\Vert .$$

This approach was extended in Ref. 53, where the authors combined several methods, such as the Gabor wavelet and horizontal filters, to construct an MCA kernel for feature extraction. The use of such a composite kernel gave better results (overall accuracy on AVIRIS $\text{Indian Pines}=93.54\%$) than minimum noise fraction (MNF) components and helped enhance the characterization of the image's content. This is explained by the fact that the proposed approach combines several methods, such as the wavelet and the CT.
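The $\ell _{1}$-type penalties in Eq. (15) are typically handled inside solvers through a soft-thresholding (proximal) step; the following is a generic numpy sketch of that operator, not SunSAL's actual code:

```python
import numpy as np

# Soft thresholding: the proximal operator of lam * ||.||_1.  It shrinks
# every magnitude toward zero by lam, zeroing small entries, which is
# what drives the estimated components toward sparsity.
def soft_threshold(x, lam):
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

x = np.array([-2.0, -0.3, 0.0, 0.5, 3.0])
assert np.allclose(soft_threshold(x, 1.0), [-1.0, 0.0, 0.0, 0.0, 2.0])
```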

In a similar fashion, the authors of Ref. 54 proposed to combine multiple features pertaining to spectral, texture, and shape information in a multiple feature combining (MFC) framework, as shown in Fig. 12. The spectral feature of a given pixel is built by arranging its digital numbers over all of the $l$ bands. The texture feature is obtained by applying a 2-D Gabor wavelet filter, and the shape feature is constructed by the pixel shape index method.^{55} To calculate the feature vectors in the MFC framework, a single-feature-based dimensionality reduction technique is conducted in order to generate the alignment matrix. Then a Lagrangian function is computed in order to determine the low-dimensional feature space $Y$, as in Eq. (16):

## (16)

$$L(\omega ,\lambda )=\sum _{i=1}^{m}{\omega}_{i}^{r}\,\mathrm{tr}(Y{M}_{(i)}{Y}^{T})-\lambda \left(\sum _{i=1}^{m}{\omega}_{i}-1\right),$$

where ${\omega}_{i}$ is the weight of the $i$'th feature, ${M}_{(i)}$ its alignment matrix, and $\lambda $ the Lagrange multiplier enforcing ${\sum}_{i}{\omega}_{i}=1$.

### 4.2. Change Detection

The detection of changes in land cover or on the Earth's surface is considered one of the most important applications in RS. In fact, it helps in disaster management, vegetation development, deforestation detection, and urban growth tracking. This application requires a set of multitemporal satellite images. Throughout the cases studied in this section, MGD are used extensively to ensure accurate detection of the affected regions in multitemporal images by enhancing the images' details, especially in the difference images (DIs), where noise can easily interfere.

In Ref. 56, the contourlet is used to denoise the images of interest in order to enhance the change detection process. First, it is applied to each single-date SAR image to preserve its features and edges. The authors also propose to reduce speckle noise by performing hard thresholding on the high-frequency sub-bands using Eq. (17):

where ${\delta}_{{f}_{i,j}}^{2}(m)$ is the variance magnitude, $L(m)$ is the number of decomposition coefficients at scale $m$, $g$ is the magnitude value of the pixel, and $l$ refers to the direction. Then the best decomposition scale is determined by finding the minimum of the local variance of the river-course magnitudes. Figure 13 illustrates how the river courses appear in SAR images and their rendering after applying the contourlet.

After that, markers are extracted from the contourlet domain in order to find the potential river courses and eliminate false alarms (content detected as river courses when it is not). After processing, the SAR images become smoother than before the noise and speckle reduction, while, in contrast, the edges are preserved and become more visible. That is why the authors affirm that this contourlet-based method can achieve higher accuracy in river-course detection.
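The generic operation behind this speckle reduction is hard thresholding of the high-frequency coefficients; a hedged numpy sketch, with a fixed illustrative threshold in place of the variance-driven one of Eq. (17):

```python
import numpy as np

# Hard thresholding: zero out coefficients with magnitude below t and
# keep the rest untouched; speckle spreads thinly over many small
# coefficients, whereas edges concentrate in a few large ones.
def hard_threshold(subband, t):
    return np.where(np.abs(subband) >= t, subband, 0.0)

rng = np.random.default_rng(1)
subband = rng.normal(scale=0.1, size=(32, 32))   # noise-like detail band
subband[8, 8] = 2.0                              # one strong edge coefficient
cleaned = hard_threshold(subband, t=0.5)
assert cleaned[8, 8] == 2.0                      # the edge survives
```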

In Ref. 57, the CT is used jointly with MCA and applied to high-resolution airborne SAR acquisitions to detect changes in urban areas. The MCA decomposes the image into $K$ different components, as specified in Eq. (18):

## (18)

$$\underset{{\alpha}_{1\dots K}}{\mathrm{min}}\,\sum _{k=1}^{K}{\Vert {\alpha}_{k}\Vert}_{1}\quad \mathrm{s.t.}\;{\Vert y-\sum _{k=1}^{K}{\psi}_{k}{\alpha}_{k}\Vert}_{2}<\sigma ,$$

where ${\psi}_{k}$ are the dictionaries and ${\alpha}_{k}$ the corresponding coefficient vectors. The DIs are then formed from the curvelet and stationary wavelet (swt) components, as in Eq. (19):

## (19)

$$D{I}_{\mathrm{MCA}}={\psi}_{\text{curvelet}}{\alpha}_{\text{curvelet}}+{\psi}_{\mathrm{swt}\text{-}\mathrm{db}4}{\alpha}_{\mathrm{swt}\text{-}\mathrm{db}4},\qquad D{I}_{d\text{-}\mathrm{MCA}}={\psi}_{\text{curvelet}}{\alpha}_{\text{curvelet}}^{d}+{\psi}_{\mathrm{swt}\text{-}\mathrm{db}8}{\alpha}_{\mathrm{swt}\text{-}\mathrm{db}8}^{d}.$$

To suppress undesirable information, the authors applied a soft threshold to the CT and wavelet coefficients. Then they computed the change map, as illustrated by Eq. (20):

## (20)

$$G=F\;\text{and}\;{F}_{\mathrm{curv}},$$

## (21)

$$F={F}_{1}\;\text{and}\;{F}_{2}=(|{\mathrm{DI}}_{\mathrm{MCA}}|\le T1)\;\text{and}\;((|{\mathrm{DI}}_{d\text{-}\mathrm{MCA}}|\uparrow L)\le T1),$$

$${F}_{\mathrm{curv}}=(|{\psi}_{\mathrm{curv}}{\alpha}_{\mathrm{curv}}|\le T1)\phantom{\rule[-0.0ex]{1em}{0.0ex}}\text{and}\phantom{\rule[-0.0ex]{1em}{0.0ex}}((|{\psi}_{\mathrm{curv}}{\alpha}_{\mathrm{curv}}^{d}|\uparrow L)\le T1).$$

In Ref. 58, the authors detected anomalies in hyperspectral images by first using the shearlet to decompose the images into several directional sub-bands at multiple scales. Then, in each sub-band, the background signal is reduced and a fourth-order partial differential equation is applied to brighten the anomaly. Experimental results with HYDICE HI data show that the presented algorithm can suppress the background, detect the anomaly signal effectively, and outperform the original RX algorithm.^{59}
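The soft threshold applied to the CT and wavelet coefficients in Ref. 57 is the standard shrinkage operator; a minimal sketch (the threshold value per sub-band is not specified in the source):

```python
import numpy as np

def soft_threshold(coeffs, t):
    # shrink magnitudes toward zero by t; coefficients below t are suppressed
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - t, 0.0)
```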

Another detail-enhancing approach using the NSCT is proposed in Ref. 60, where the authors detect changes in multitemporal optical images. To do so, they studied and analyzed two change detection methods. The first^{61} combines PCA and K-means and is efficient in terms of computational time; however, since PCA does not consider multiscale processing, fusing data this way yielded false detections. The second method, proposed in Ref. 62, introduces a multiscale framework for change detection in multitemporal SAR images. This method decomposes a DI into $S$ scales by the undecimated discrete wavelet transform (DWT). Based on this decomposition, the authors extracted a feature vector by first computing the intrascale features (sampling the local neighborhood using two methods) in each sub-band of a given scale, and then computing the interscale features, which regroup all the intrascale vectors together.

The same authors improved this work in Ref. 63 by proposing the use of the CT-DWT (complex wavelet) to capture more directions. They also used Bayesian inference to calculate a threshold deciding whether the details to fuse correspond to changed or unchanged pixels.

In Ref. 60, both works, Refs. 61 and 62, were discussed. The authors estimated that the K-means algorithm in Ref. 61 can get stuck in local optima, producing false detections, especially when an inappropriate initial centroid is chosen. They also affirmed that the wavelet used in Ref. 62 fails to characterize the geometric details of the DI efficiently. To overcome this weakness, the authors used the NSCT instead of the wavelet to decompose the DI into $S$ scales. Each scale yields one low-pass approximation sub-band ${D}_{S}^{A}$ and $L$ high-pass directional sub-bands ${D}_{\mathrm{1,1}}^{H}\dots {D}_{S,L}^{H}$. The high-pass directional sub-bands serve to extract the details and are thresholded to decrease the amount of noise. The intrascale fusion is performed using the max rule, as shown in Eq. (23):

## (23)

$${D}_{s}^{\text{Detail}}(i,j)=\mathrm{max}{\{\mathrm{abs}[{D}_{s,l}^{H,T}(i,j)]\}}_{l=1}^{L},$$

## (24)

$${D}^{\text{Detail}}(i,j)=\mathrm{max}{\{\mathrm{abs}[{D}_{s}^{\text{Detail}}(i,j)]\}}_{s=1}^{S}.$$

Then an enhanced DI is calculated [Eq. (25)]:

## (25)

$${D}^{\mathrm{Enhanced}}(i,j)={D}^{\text{Base}}(i,j)+\beta {D}^{\text{Detail}}(i,j),$$

where ${D}^{\text{Base}}$ is obtained from the finest-scale approximation of the NSCT decomposition and $\beta $ is a weight balancing the emphasis between the base image and the detail image. The authors then extract patches from ${D}^{\mathrm{Enhanced}}$, transform each patch into a vector via lexicographic ordering, and use PCA to produce principal components. Finally, a PCA-guided K-means^{64} is performed to calculate the change map. Compared to approaches based on EM, Bayesian inference, PCA, or multiscale analysis, the proposed approach gives better results since it conserves the geometric details due to the NSCT. The experimental results are illustrated in Fig. 14.
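A minimal sketch of the detail extraction and enhancement of Eqs. (23) to (25), assuming the sub-bands are already thresholded and that the enhanced DI is the weighted combination ${D}^{\text{Base}}+\beta {D}^{\text{Detail}}$:

```python
import numpy as np

def enhanced_di(base, subbands, beta=0.5):
    """base: finest-scale approximation D^Base; subbands[s][l]: thresholded
    high-pass sub-band D_{s,l}^{H,T}; beta: base/detail balance weight."""
    # Eq. (23): max-abs fusion across the L directions of each scale
    per_scale = [np.max(np.abs(np.stack(bands)), axis=0) for bands in subbands]
    # Eq. (24): max fusion across the S scales
    detail = np.max(np.stack(per_scale), axis=0)
    # Eq. (25), assumed form: weighted combination of base and detail
    return base + beta * detail
```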

In Ref. 65, the proposed method applies the wavelet to MS imagery within an anisotropic diffusion aggregation. The approach is composed of three steps; we detail only the steps where the wavelet is used:

• A band selection is first conducted to choose where the texture is best captured. Then, 2-D DWT for multiscale-multidirectional texture extraction is applied.

• Textural and spectral segmentation are performed by anisotropic diffusion in order to reduce noise without blurring inter-region edges as well as creating the desired multiscale low-level primitives.

• Change detection is then performed in two steps.

The experiments are conducted on Landsat TM and Landsat ETM+ datasets dated 1986 and 2001, respectively. The criteria of assessment used in this work are the transformed divergence measure, which statistically determines the adequate wavelet levels to keep, the overall accuracy, and kappa.

In Ref. 66, a CT-based change detection algorithm is proposed between two co-registered SAR images for natural disaster mapping. After applying the CT on the two images, the coefficients are weighted to suppress noise-like structures using the mathematical relation [Eq. (26)]:

where ${L}_{x,y}$ is the amplitude found at position $(x,y)$ in the SAR image, ${C}_{i}$ are the sums of CT coefficients, $n$ is the number of CT coefficients, and ${k}_{i}$ is a complex coefficient varying according to the image content. Finally, the change in radar amplitude ${D}_{x,y}$ is calculated by Eq. (27).

## 4.3.

### Fusion

Image fusion is one of the most important RS applications. It aims at combining two or more images to create another one with enhanced features. This increases the possibility of taking advantage of multisensor images and opens the door to several uses. Generally speaking, fusion can be categorized into four families according to when the fusion rule is applied: at the signal level, pixel level, feature level, or decision level. The MGD belong to the feature-level family. In fact, to conduct a fusion process with these decompositions, we should first transform the intrinsic image properties to obtain the frequency coefficients. Fusion techniques based on MGD suggest several injection models but basically tend to extract the spatial detail information from high spatial resolution images and inject it into the low spatial resolution ones. Compared to widely used methods like intensity hue saturation (IHS) and PCA, MGD enhance the spatial characteristics of fused images. Thus, they look very clear, have sharp edges, and are free of spectral distortion.^{67} MGD-based fusion is applied abundantly in pansharpening techniques, multisensor fusion, and spatial enhancement.

In Ref. 68, the authors propose a technique to fuse MS satellite images (high spectral and low spatial resolution) with a panchromatic (PAN) satellite image (low spectral and high spatial resolution). They introduced an improved image fusion method based on the ARSIS concept using the CT. ARSIS is a French acronym for enhancing spatial resolution by injecting structures. CT-based image fusion has been used to merge Landsat enhanced thematic mapper plus (ETM+) PAN and MS images. Based on experimental results using indicators such as the bias, the CT-based method provides better visual and quantitative results for RS fusion. The indicators are

• the SD, which indicates the degree of dispersion between the gray values and the gray mean value,

• the correlation coefficients (CCs), which indicate the degree of correlation between two images,

• the relative average spectral error, which characterizes the average performance of a method in the considered spectral bands,

• the relative global dimensional synthesis error (ERGAS), which measures the spectral distortion between the reference image and the fused one.
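As an illustration, the ERGAS indicator can be computed as follows. This follows the standard definition; the per-band mean normalization and the resolution ratio parameter are assumptions consistent with common usage:

```python
import numpy as np

def ergas(reference, fused, ratio):
    """reference, fused: arrays of shape (bands, H, W); ratio: h/l, the
    ratio between the high (PAN) and low (MS) spatial resolutions."""
    terms = []
    for ref_b, fus_b in zip(reference, fused):
        rmse = np.sqrt(np.mean((ref_b - fus_b) ** 2))
        # each band's RMSE is normalized by that band's mean radiance
        terms.append((rmse / ref_b.mean()) ** 2)
    return 100.0 * ratio * np.sqrt(np.mean(terms))
```

A perfect fusion (fused image identical to the reference) gives ERGAS equal to zero; larger values indicate more spectral distortion.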

Shearlet and nonsubsampled shearlet transform (NSST) are also used in this regard. In Ref. 69, the authors propose in the first step to register two RS images. Then the shearlet transform is applied. The number of directions is set to 6 and scales to 5. The low frequency coefficients are selected based on the average rule while the high frequency coefficients are chosen using the maximum absolute value rule as mentioned in Eq. (28):

## (28)

$$F(i,j)=\{\begin{array}{cc}A(i,j)& {D}_{A}(i,j)\ge {D}_{B}(i,j)\\ B(i,j)& {D}_{A}(i,j)<{D}_{B}(i,j)\end{array},$$

In Ref. 70, the authors used the NSST^{32} and involved the SR^{71} paradigm in a pansharpening application.^{72} The idea is to fuse the intensity component (I) of the MS image (the hue and saturation components are not used) with the PAN image. First, the authors decompose the images using the NSST into low frequencies $\{{L}_{I},{L}_{\mathrm{PAN}}\}$ and high frequencies $\{{H}_{I}^{(s,d)},{H}_{\mathrm{PAN}}^{(s,d)}\}$, where $s$ corresponds to the scale and $d$ to the direction. For the low-frequency coefficients, they propose to construct vectors using patches from both ${L}_{I}$ and ${L}_{\mathrm{PAN}}$. Those vectors are represented sparsely using a dictionary learned with K-SVD,^{73} as mentioned in Eq. (29).

## (29)

$${\alpha}^{i}={\mathrm{argmin}}_{\alpha}{\Vert \alpha \Vert}_{0}\phantom{\rule[-0.0ex]{1em}{0.0ex}}\text{subject to}\text{\hspace{0.17em}\hspace{0.17em}}{\Vert {v}^{i}-D\alpha \Vert}_{2}^{2}\le \epsilon ,\phantom{\rule[-0.0ex]{1em}{0.0ex}}\epsilon \ge 0,$$

## (30)

$${\alpha}_{F}^{i}=\{\begin{array}{cc}{\alpha}_{I}^{i}& \text{if}\text{\hspace{0.17em}\hspace{0.17em}}|{\alpha}_{I}^{i}|\ge |{\alpha}_{\mathrm{PAN}}^{i}|\\ {\alpha}_{\mathrm{PAN}}^{i}& \text{otherwise}\end{array}.$$

Then the new fused vector patch ${v}_{F}^{i}$ is reconstructed using the dictionary $D$ and the fused sparse coefficients, as in Eq. (31):

## (31)

$${v}_{F}^{i}=D{\alpha}_{F}^{i}.$$
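A sketch of the coefficient fusion of Eq. (30) and the patch reconstruction of Eq. (31), assuming both images share the learned dictionary $D$:

```python
import numpy as np

def fuse_sparse_patch(alpha_i, alpha_pan, dictionary):
    # Eq. (30): keep, entrywise, the sparse coefficient with larger magnitude
    alpha_f = np.where(np.abs(alpha_i) >= np.abs(alpha_pan), alpha_i, alpha_pan)
    # Eq. (31): rebuild the fused patch vector with the shared dictionary
    return dictionary @ alpha_f
```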

The high-frequency coefficients, in turn, are fused according to the largest local energy rule. This means that the energy at each scale $s$ and direction $d$ of the shearlet transform is calculated, as mentioned in Eq. (32), and then fused:

## (32)

$${H}_{F}^{(s,d)}(i,j)=\{\begin{array}{cc}{H}_{I}^{(s,d)}(i,j)& \text{if}\text{\hspace{0.17em}\hspace{0.17em}}{\mathrm{LE}}_{I}^{(s,d)}(i,j)\ge {\mathrm{LE}}_{\mathrm{PAN}}^{(s,d)}(i,j)\\ {H}_{\mathrm{PAN}}^{(s,d)}(i,j)& \text{otherwise}\end{array}.$$

The experimental results show that the proposed method conserves spatial and spectral information better and is able to enhance the fused image more than Refs. 74 and 75. The rendered result is illustrated in Fig. 16. We can notice how well preserved the spatial details and color information are. The fused images have better visual accuracy compared to existing methods: AIHS,^{76} AWLP,^{75} SVT,^{74} and ATWT.^{77}
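The local-energy selection of Eq. (32) can be sketched as follows; the window size is an assumption, since the source does not fix it:

```python
import numpy as np

def local_energy(band, radius=1):
    """Sum of squared coefficients over a (2*radius+1)^2 neighborhood."""
    sq = band.astype(float) ** 2
    padded = np.pad(sq, radius, mode="edge")
    h, w = sq.shape
    out = np.zeros_like(sq)
    # accumulate the squared coefficients of every shifted window position
    for di in range(2 * radius + 1):
        for dj in range(2 * radius + 1):
            out += padded[di:di + h, dj:dj + w]
    return out

def fuse_by_energy(h_i, h_pan):
    # Eq. (32): keep the coefficient from the sub-band with larger local energy
    return np.where(local_energy(h_i) >= local_energy(h_pan), h_i, h_pan)
```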

The contourlets were also used^{78} in the same context to merge SPOT and ALSAT-2A images. The authors proposed a new fusion scheme combining the PCA and the NSCT in order to overcome the spectral distortion caused by the PCA. The aim of this work was to find a compromise between enhancing the spatial resolution and preserving the spectral information at the same time. After decomposing the MS image using PCA, a histogram matching is applied to adapt the contrast between the PAN image and the resulting components. Then the obtained components and the PAN image are decomposed using the NSCT into approximation coefficients app and detail coefficients det. The fusion rules applied in this approach are represented by Eqs. (33) and (34), where the authors extract the approximation coefficients of the fused image ${\mathrm{reconst}}_{\mathrm{im}}$ from the PAN image and the detail coefficients from the MS image.

## (33)

$${\mathrm{Im}}_{\mathrm{app}}={\mathrm{app}}_{\mathrm{pan}},$$

## (34)

$${\mathrm{Im}}_{\mathrm{det}}={\mathrm{det}}_{\mathrm{ms}}.$$

The resulting image is obtained using Eq. (35):

## (35)

$${\mathrm{reconst}}_{\mathrm{im}}={\mathrm{Im}}_{\mathrm{app}}\cup {\mathrm{Im}}_{\mathrm{det}}.$$

Then the inverse NSCT is calculated. The fused image obtained with the proposed method represented edges, contours of roads and buildings, and structure shapes on the ground better than other methods, namely IHS, PCA-IHS, and the high-pass filter (HPF). Moreover, the proposed method conserves the spectral information as well. Compared to the PCA-NSCT-based method, the resulting fused image was not blurred.
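The histogram-matching step used above to adapt the PAN contrast to the PCA components can be sketched with a standard quantile-mapping implementation (an assumption, since the source does not detail which variant it uses):

```python
import numpy as np

def histogram_match(source, reference):
    """Map source gray levels so their empirical CDF matches the reference's."""
    s_shape = source.shape
    s_vals, s_idx, s_counts = np.unique(
        source.ravel(), return_inverse=True, return_counts=True)
    r_vals, r_counts = np.unique(reference.ravel(), return_counts=True)
    # empirical cumulative distribution functions of both images
    s_cdf = np.cumsum(s_counts) / source.size
    r_cdf = np.cumsum(r_counts) / reference.size
    # map each source quantile onto the reference gray level at that quantile
    matched = np.interp(s_cdf, r_cdf, r_vals)
    return matched[s_idx].reshape(s_shape)
```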

Another injection model for pansharpening is proposed in Ref. 79. The authors worked on QuickBird and IKONOS-2 imagery. The injection model is built on an adaptive cross gain, i.e., a ratio of local SDs. Both images are decomposed using curvelets and then merged by applying an interband structure model. The proposed method outperforms IHS, ATWT, and HPF, having the best rates according to the indicators ERGAS, spectral angle mapper, and Q4. The resulting fused image is visually superior and succeeds in producing a tradeoff between the different sensors.

The authors of Ref. 80 propose an approach for multisensor image fusion based on beyond-wavelet transforms (CT, bandlet, contourlet, and wedgelet). The approach consists of the following steps: first, the authors decompose the images into coefficients using a beyond-wavelet transform. Second, they select the low-frequency coefficients from the two images using the maximum local energy (MLE) rule, calculated in a local $3\times 3$ sliding window as shown in Eq. (36):

## (36)

$${\mathrm{MLE}}_{\xi}^{l,k}={E}_{1}*{f}_{\xi}^{(0)2}+{E}_{2}*{f}_{\xi}^{(0)2}+\dots +{E}_{N}*{f}_{\xi}^{(0)2},$$

## (37)

$${\mathrm{SML}}_{x}^{l,k}=\sum _{i=-M}^{M}\sum _{j=-N}^{N}{\nabla}_{\mathrm{ML}}^{2}f(i+p,j+q)\phantom{\rule[-0.0ex]{1em}{0.0ex}}\text{for}\text{\hspace{0.17em}\hspace{0.17em}}{\nabla}_{\mathrm{ML}}^{2}f(i,j)\ge T,$$

The use of several transforms in one approach can increase the quality of the fusion. Works like Zhong et al.^{81} and Zhang et al.^{82} combined the wavelet and the curvelet to enhance the wavelets' abilities.
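The sum-modified-Laplacian of Eq. (37) can be sketched as follows; for brevity, the window is simplified to cover the whole sub-band interior, and the threshold $T$ is kept as a parameter:

```python
import numpy as np

def modified_laplacian(f):
    # |2f - left - right| + |2f - up - down|, computed on interior pixels
    fx = np.abs(2 * f[1:-1, 1:-1] - f[1:-1, :-2] - f[1:-1, 2:])
    fy = np.abs(2 * f[1:-1, 1:-1] - f[:-2, 1:-1] - f[2:, 1:-1])
    return fx + fy

def sml(f, T=0.0):
    # Eq. (37), simplified: sum the modified-Laplacian responses that
    # reach the threshold T over one window covering the sub-band
    ml = modified_laplacian(f)
    return float(ml[ml >= T].sum())
```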

Neural networks have also been investigated in RS fusion, and MGD were combined with this powerful tool to provide more directionalities. In Ref. 83, a fusion method for SAR images based on the pulse-coupled neural network (PCNN)^{84} is proposed. The highlight of this algorithm is to use the global features of the source images and to calculate their gradient information after decomposing them into several directions. The wavelet is then applied to decompose the images further, and the high-frequency coefficients are selected as the input of the PCNN to get a fire map. The max rule is applied to get the final fused image. The framework of the proposed method is detailed in Fig. 17.

PCNN was also used in Ref. 87. The idea is to use the contourlet hidden Markov tree (CHMT) model^{88} to describe the statistical characteristics of the contourlet coefficients of RS images; the PCNN is used in this work to select the high-frequency directional sub-bands. First, the contourlet transform is performed on the registered multisource SAR images, and then the contourlet coefficients are trained by the expectation maximization algorithm to calculate their edge PDFs. The highest magnitudes in the low-frequency sub-bands are selected. The high-frequency directional coefficients are updated by multiplying them by their edge PDFs. Finally, a new clarity saliency measure is defined and used to fuse the high-frequency sub-bands. This method is compared to others: wavelet hidden Markov trees + PCNN and CT + PCNN. The results show that CHMT-PCNN can capture more directional information. Moreover, it needs less training time, even with images abundant in textures. The criteria of assessment used in this work were the mutual information, the weighted fusion quality index (evaluates the visual effect), the edge-dependent fusion quality index proposed by Ref. 89 (describes edge information in the image), and the common information (represents the gradient information) proposed in Ref. 90.

## 4.4.

### Inpainting

The inpainting technique consists of recovering missing areas. Since RS images are sensitive to weather conditions, inpainting helps in correcting images corrupted by clouds.

Indeed, the bandlet is used in Ref. 85 for regenerating an RS image contaminated by clouds. This technique is spatially based, which means that it relies only on the spatial correlation of the corrupted data in SPOT, Landsat, and BDOrtho images. The paper presents a geometric reconstruction method that uses the geometric data from outside the cloud-contaminated region to fill it in. The idea is to detect the boundary points of the inpainting region and to converge to singularity points along a specific trajectory. This trajectory follows the geometric flow computed via the bandlet transform, as in Eq. (38). $C({t}_{n+1})$ is the value of the nearest pixel on the border of the inpainting zone, calculated with respect to the direction of the bandlet geometric flow:

## (38)

$$C({t}_{n+1})=C({t}_{n})+\mathrm{\Delta}t\text{\hspace{0.17em}}{C}^{\prime}({t}_{n}),$$

where $t$ represents time and $\mathrm{\Delta}t$ is the Euclidean distance between $C({t}_{n+1})$ and $C({t}_{n})$. ${C}^{\prime}({t}_{n})$ is the directional derivative of $C$ with respect to the geometric flow of the bandlet. The proposed approach is able to perform long region connections. This is because the bandlet pays attention not only to the geometric flow of the studied region but also to the regularity of contours and edges, which enhances the quality of the reconstructed image. The rendering of these results is presented in Fig. 18.

In Ref. 91, the authors conduct an inpainting procedure on QuickBird images. They propose to first remove clouds from the low spatial resolution (LR) MS and high spatial resolution (HR) PAN images, then to apply mask dodging to extract a background image and compute a weight matrix. After that, they process the LR MS image with an adaptive PCA algorithm, choosing the most representative components based on the CCs. The forward shearlet is then applied to the selected components and the HR PAN image; the shearlets are applied here to enhance details and keep the edge information. The low frequencies of the new PAN ${\text{NewLow}}_{\mathrm{PAN}}(i,j)$ are calculated from the low frequencies of the components selected by the adaptive PCA processed on the LR MS. The high-frequency coefficients of the new PAN are calculated by enhancing the high-frequency shearlet coefficients, as exhibited in Eq. (39):

## (39)

$${\text{NewHigh}}_{\mathrm{PAN}}(i,j)={\text{High}}_{\mathrm{PAN}}(i,j)+{\text{High}}_{\mathrm{PAN}}(i,j)\phantom{\rule{0ex}{0ex}}\times {W}_{\mathrm{PAN}}(i,j)\times {W}_{\mathrm{MS}}(i,j).$$

## 5.

## Discussions

The use of MGD leads to interesting results in the field of RS. We summarized the studied works in Table 2, where we present the methods, the data, and the criteria of assessment. To analyze all these studied cases, we propose to focus on three main axes:

## Table 2

Overview of the methods, data and the criteria of assessment of the studied approaches in the field of remote sensing.

| Application | Methods and data | Criteria of assessment |
|---|---|---|
| Classification | CT on SAR | Overall accuracy |
| | WBCT used on LISS II, III, and IV | Overall accuracy, kappa, silhouette coefficient, Davis and Bouldin |
| | MCA + CT on AVIRIS and AIRSAR | Overall accuracy, average accuracy, and kappa |
| | Wavelet + fuzzy classifier on Indian remote sensing and SPOT | β |
| | Wavelet used on HYDICE HI and SPOT | Overall accuracy and kappa coefficient |
| Change detection | Contourlet on SAR | Overall accuracy and false-positive rate |
| | MCA + CT on SAR | Not mentioned |
| | Shearlet on HYDICE HI | Not mentioned |
| | Contourlet on SAR | PSNR, false-positive index, false-negative index |
| | Wavelet on Landsat ETM+ and TM | Overall accuracy and kappa coefficient |
| | CT on SAR | Not mentioned |
| Fusion | CT on Landsat | Bias, SD, CC, relative average spectral error, and RMSE |
| | CT, BT, ConT, and WT on high spatial resolution | PSNR, MSE, weighted fusion quality index, edge-dependent fusion quality index, and SSIM |
| | Shearlet on IKONOS | RMSE, ERGAS, Q4, and spectral angle mapper |
| | Contourlet on SPOT and ALSAT-2A | High-pass CC, RMSE, and Canny edge correspondence |
| | Shearlet on high spatial resolution | EN, STD, MSE, PSNR, and SSIM |
| | CT on QuickBird and IKONOS-2 | ERGAS, spectral angle mapper, and Q4 |
| | Wavelet + CT on SAR | CC and entropy |
| | Wavelet/contourlet + PCNN on SAR | Common information, mutual information, weighted fusion quality index, and edge-dependent fusion quality index |
| Inpainting | Bandlet on SPOT, Landsat, and BDOrtho | Mean error and root mean square error |
| | Shearlet on QuickBird PAN and MS | CC, ERGAS, spectral angle mapper, and entropy |

For the first axis, the images used in the analyzed cases were very diverse. However, some types of images are more likely to be treated by MGD than others, due to their rich content. In fact, MGD are often exploited with high spatial resolution data like IKONOS, QuickBird, and SAR images,^{50}^{,}^{80}^{,}^{91} to name a few. Conversely, they are infrequently used with coarse spatial resolution data like MODIS images, where pixels are not pure and region edges are not sharp. SAR images are largely used because they have high spatial resolution and are weather and illumination independent. Nevertheless, RS data in general need to be geometrically corrected, due to the sensors' different ground displacements, and also need to be denoised from different types of perturbations using tools such as total variation.^{15}

In the second axis, we present the most used MGD in RS. For instance, we have noticed that the wavelet is still largely used, despite the fact that it fails to represent ${C}^{2}$ edges. This is explained by its high capability to represent textures and extract spectral-based features. Besides, we found that although the shearlet is built to overcome the weaknesses of the CT and the contourlet, experimental results using the contourlet, for example, outperformed the shearlet in some cases, as in Ref. 91. Moreover, the bandlet was not as widely used, according to our search, as the other transforms, due to its high computational time and the high dimension of RS data. It was especially investigated in the context of inpainting, since it can draw the geometric flow to follow when filling in the missing area. The use of MGD in 3-D space is nearly absent. This can be explained by the fact that it is difficult to acquire data with high spatial and temporal resolutions simultaneously, as is the case for video.^{43}

The third axis is dedicated to the limits of MGD. Since MGD are redundant, we noticed that in some works the authors used dimensionality reduction techniques, as in Ref. 91, to reduce the high number of extracted features. The bright side is that this is done without affecting the discriminative power or yielding undesirable spectral distortions, which is an important point when dealing with RS images. In fact, works like Ref. 49 propose time-consuming algorithms that need to be optimized. Another MGD limit is finding the adequate number of decomposition scales. To ensure the extraction of enough spatial resolution details, the decomposition level must not be too high. In Ref. 92, it was proven that an excessive decomposition level affects the processing of high-frequency details; thus, the resulting image is sensitive to noise. Taking into consideration the characteristics of MGD sub-bands, the accuracy of the reconstructed image could deteriorate if inefficient fusion rules are selected. In fact, we noticed that in some works, such as Miao et al.,^{69} fusing the low frequencies of the transform domain using the average rule was inefficient. This causes a large loss of detail, since the low frequencies in the MGD domain contain most of the energy, and it decreases the contrast of the fused image after reconstruction with the inverse MGD transform.^{93}

## 6.

## Conclusion

In this paper, a review of MGD and some of their applications in the RS field is presented. We described how they are useful for preserving object contours and edges and for extracting feature content from images. Their common characteristic is to highlight directional analysis and adaptability to the geometry within the image. The bases resulting from these MGD are sometimes redundant in order to provide a robust representation of a given signal without losing the orthogonal/orthonormal basis properties. Nevertheless, researchers are interested in searching beyond orthogonal bases: some works propose to learn them, while others suggest designing a hybrid basis, combining learned bases with predefined ones.

Throughout the state of the art elaborated from different applications in the RS field, we have noticed that the MGD confirmed their success by being used in several applications, such as change detection between temporal high spatial resolution data. In addition, they were abundantly used to preserve the edges of buildings, rivers, and vehicles, given that they are likely to enhance the details of contours in the image. They are applied as well to reduce noise by extracting high frequencies from the finest scales. The MGD are exploited in fusion applications, too, in order to reconcile different types of data. Another important aspect of MGD, when combined with dimensionality reduction methods like PCA, is their capability to keep their discriminative power. This is an interesting point in the RS field, where it is crucial to find a compromise between fidelity to data, performance, and dimensionality.

## References

G. Kutyniok and D. Labate, “Introduction to shearlets,” in Shearlets, pp. 1–38, Springer (2012).Google Scholar

E. P. Simoncelli et al., “Shiftable multiscale transforms,” IEEE Trans. Inf. Theory 38(2), 587–607 (1992).http://dx.doi.org/10.1109/18.119725Google Scholar

R. H. Bamberger and M. J. Smith, “A filter bank for the directional decomposition of images: theory and design,” IEEE Trans. Signal Process. 40(4), 882–893 (1992).http://dx.doi.org/10.1109/78.127960Google Scholar

J.-P. Antoine et al., “Image analysis with two-dimensional continuous wavelet transform,” Signal Process. 31(3), 241–272 (1993).SPRODR0165-1684http://dx.doi.org/10.1016/0165-1684(93)90085-OGoogle Scholar

N. Kingsbury, “Complex wavelets for shift invariant analysis and filtering of signals,” Appl. Comput. Harmonic Anal. 10(3), 234–253 (2001).ACOHE91063-5203http://dx.doi.org/10.1006/acha.2000.0343Google Scholar

E. J. Candès and D. L. Donoho, “New tight frames of curvelets and optimal representations of objects with piecewise c2 singularities,” Commun. Pure Appl. Math. 57(2), 219–266 (2004).CPMAMV0010-3640http://dx.doi.org/10.1002/cpa.v57:2Google Scholar

A. Krizhevsky, I. Sutskever and G. E. Hinton, “Imagenet classification with deep convolutional neural networks,” in Advances in Neural Information Processing Systems, pp. 1097–1105 (2012).Google Scholar

S. Levine et al., “End-to-end training of deep visuomotor policies,” J. Mach. Learn. Res. 17(39), 1–40 (2016).Google Scholar

G. Masi et al., “Pansharpening by convolutional neural networks,” Remote Sens. 8(7), 594 (2016).RSEND3http://dx.doi.org/10.3390/rs8070594Google Scholar

E. Maggiori et al., “Fully convolutional neural networks for remote sensing image classification,” in IEEE Int. Geoscience and Remote Sensing Symp. , pp. 5071–5074, IEEE (2016).IGRSD20196-2892http://dx.doi.org/10.1109/IGARSS.2016.7730322Google Scholar

M. Xu et al., “Cloud removal based on sparse representation via multitemporal dictionary learning,” IEEE Trans. Geosci. Remote Sens. 54(5), 2998–3006 (2016).IGRSD20196-2892http://dx.doi.org/10.1109/TGRS.2015.2509860Google Scholar

B. Song et al., “One-class classification of remote sensing images using kernel sparse representation,” IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 9(4), 1613–1623 (2016).http://dx.doi.org/10.1109/JSTARS.2015.2508285Google Scholar

W. Wu et al., “A new framework for remote sensing image super-resolution: sparse representation-based method by processing dictionaries with multi-type features,” J. Syst. Archit. 64, 63–75 (2016).http://dx.doi.org/10.1016/j.sysarc.2015.11.005Google Scholar

L. I. Rudin, S. Osher and E. Fatemi, “Nonlinear total variation based noise removal algorithms,” Phys. D: Nonlinear Phenom. 60(1), 259–268 (1992).http://dx.doi.org/10.1016/0167-2789(92)90242-FGoogle Scholar

Q. Yuan, L. Zhang and H. Shen, “Hyperspectral image denoising employing a spectral–spatial adaptive total variation model,” IEEE Trans. Geosci. Remote Sens. 50(10), 3660–3677 (2012).IGRSD20196-2892http://dx.doi.org/10.1109/TGRS.2012.2185054Google Scholar

D. J. Lary et al., “Machine learning in geosciences and remote sensing,” Geosci. Front. 7(1), 3–10 (2016).http://dx.doi.org/10.1016/j.gsf.2015.07.003Google Scholar

M. Pesaresi, V. Syrris and A. Julea, “Analyzing big remote sensing data via symbolic machine learning,” in Proc. of the 2016 Conf. on Big Data from Space, Santa Cruz de Tenerife, Spain, pp. 15–17 (2016).Google Scholar

D. O. T. Bruno et al., “LBP operators on curvelet coefficients as an algorithm to describe texture in breast cancer tissues,” Expert Syst. Appl. 55, 329–340 (2016).ESAPEH0957-4174http://dx.doi.org/10.1016/j.eswa.2016.02.019Google Scholar

K. G. Krishnan, P. Vanathi and R. Abinaya, “Performance analysis of texture classification techniques using shearlet transform,” in Int. Conf. on Wireless Communications, Signal Processing and Networking, pp. 1408–1412, IEEE (2016).http://dx.doi.org/10.1109/WiSPNET.2016.7566368Google Scholar

A. Uçar, Y. Demir and C. Güzeliş, “A new facial expression recognition based on curvelet transform and online sequential extreme learning machine initialized with spherical clustering,” Neural Comput. Appl. 27(1), 131–142 (2016).http://dx.doi.org/10.1007/s00521-014-1569-1Google Scholar

J. Chaki, R. Parekh and S. Bhattacharya, “Plant leaf recognition using ridge filter and curvelet transform with neuro-fuzzy classifier,” in Proc. of 3rd Int. Conf. on Advanced Computing, Networking and Informatics, pp. 37–44, Springer (2016).Google Scholar

J. Allen, “Short term spectral analysis, synthesis, and modification by discrete Fourier transform,” IEEE Trans. Acoust. Speech Signal Process. 25(3), 235–238 (1977).IETABA0096-3518http://dx.doi.org/10.1109/TASSP.1977.1162950Google Scholar

R. Wilson, A. D. Calway and E. R. Pearson, “A generalized wavelet transform for Fourier analysis: the multiresolution Fourier transform and its application to image and audio signal analysis,” IEEE Trans. Inf. Theory 38(2), 674–690 (1992).IETTAW0018-9448http://dx.doi.org/10.1109/18.119730Google Scholar

L. Jacques et al., “A panorama on multiscale geometric representations, intertwining spatial, directional and frequency selectivity,” Signal Process. 91(12), 2699–2730 (2011).SPRODR0165-1684http://dx.doi.org/10.1016/j.sigpro.2011.04.025Google Scholar

D. L. Donoho and M. R. Duncan, “Digital curvelet transform: strategy, implementation, and experiments,” Proc. SPIE 4056, 12–30 (2000).PSISDG0277-786Xhttp://dx.doi.org/10.1117/12.381679Google Scholar

E. J. Candès and D. L. Donoho, “Ridgelets: a key to higher-dimensional intermittency?,” Philos. Trans. R. Soc. A: Math. Phys. Eng. Sci. 357(1760), 2495–2509 (1999).http://dx.doi.org/10.1098/rsta.1999.0444Google Scholar

E. Candès et al., “Fast discrete curvelet transforms,” Multiscale Model. Simul. 5(3), 861–899 (2006). http://dx.doi.org/10.1137/05064182X

M. N. Do and M. Vetterli, “The contourlet transform: an efficient directional multiresolution image representation,” IEEE Trans. Image Process. 14(12), 2091–2106 (2005). http://dx.doi.org/10.1109/TIP.2005.859376

A. L. Da Cunha, J. Zhou and M. N. Do, “The nonsubsampled contourlet transform: theory, design, and applications,” IEEE Trans. Image Process. 15(10), 3089–3101 (2006). http://dx.doi.org/10.1109/TIP.2006.877507

W.-Q. Lim, “The discrete shearlet transform: a new directional transform and compactly supported shearlet frames,” IEEE Trans. Image Process. 19(5), 1166–1180 (2010). http://dx.doi.org/10.1109/TIP.2010.2041410

B. Biswas, A. Dey and K. N. Dey, “Remote sensing image fusion using Hausdorff fractal dimension in shearlet domain,” in Int. Conf. on Advances in Computing, Communications and Informatics, pp. 2180–2185, IEEE (2015). http://dx.doi.org/10.1109/ICACCI.2015.7275939

G. Easley, D. Labate and W.-Q. Lim, “Sparse directional image representations using the discrete shearlet transform,” Appl. Comput. Harmonic Anal. 25(1), 25–46 (2008). http://dx.doi.org/10.1016/j.acha.2007.09.003

X. Luo, Z. Zhang and X. Wu, “A novel algorithm of remote sensing image fusion based on shift-invariant shearlet transform and regional selection,” AEU—Int. J. Electron. Commun. 70(2), 186–197 (2016). http://dx.doi.org/10.1016/j.aeue.2015.11.004

D. L. Donoho et al., “Wedgelets: nearly minimax estimation of edges,” Ann. Stat. 27(3), 859–897 (1999). http://dx.doi.org/10.1214/aos/1018031261

G. Lebrun, “Ondelettes géométriques adaptatives: vers une utilisation de la distance géodésique,” PhD Thesis, Université de Poitiers (2009).

V. Chandrasekaran et al., “Surflets: a sparse representation for multidimensional functions containing smooth discontinuities,” in Proc. Int. Symp. on Information Theory, p. 563, IEEE (2004). http://dx.doi.org/10.1109/ISIT.2004.1365602

E. Le Pennec and S. Mallat, “Sparse geometric image representations with bandelets,” IEEE Trans. Image Process. 14(4), 423–438 (2005). http://dx.doi.org/10.1109/TIP.2005.843753

H. Blum, “Biological shape and visual science (part I),” J. Theor. Biol. 38(2), 205–287 (1973). http://dx.doi.org/10.1016/0022-5193(73)90175-6

A. Mosleh, N. Bouguila and A. B. Hamza, “Image and video spatial super-resolution via bandlet-based sparsity regularization and structure tensor,” Signal Process. Image Commun. 30, 137–146 (2015). http://dx.doi.org/10.1016/j.image.2014.10.010

S. Mallat and G. Peyré, “Orthogonal bandlet bases for geometric images approximation,” Commun. Pure Appl. Math. 61(9), 1173–1212 (2008). http://dx.doi.org/10.1002/cpa.v61:9

G. Peyré and S. Mallat, “Orthogonal bandelet bases for geometric images approximation,” Commun. Pure Appl. Math. 61(9), 1173–1212 (2008). http://dx.doi.org/10.1002/cpa.v61:9

L. Dettori and L. Semler, “A comparison of wavelet, ridgelet, and curvelet-based texture classification algorithms in computed tomography,” Comput. Biol. Med. 37(4), 486–498 (2007). http://dx.doi.org/10.1016/j.compbiomed.2006.08.002

S. Dubois, R. Péteri and M. Ménard, “Analyse de textures dynamiques par décompositions spatio-temporelles: application à l’estimation du mouvement global,” in Conf. CORESA 2010, p. XX (2010).

S. Mallat, “Geometrical grouplets,” Appl. Comput. Harmonic Anal. 26(2), 161–180 (2009). http://dx.doi.org/10.1016/j.acha.2008.03.004

P. G. Casazza, G. Kutyniok and F. Philipp, “Introduction to finite frame theory,” in Finite Frames, pp. 1–53, Springer (2013).

D. L. Donoho, “Compressed sensing,” IEEE Trans. Inf. Theory 52(4), 1289–1306 (2006). http://dx.doi.org/10.1109/TIT.2006.871582

J.-L. Starck, F. Murtagh and J. M. Fadili, Sparse Image and Signal Processing: Wavelets, Curvelets, Morphological Diversity, Cambridge University Press (2010).

K. Venkateswaran, N. Kasthuri and R. Alaguraja, “Performance comparison of wavelet and contourlet frame based features for improving classification accuracy in remote sensing images,” J. Indian Soc. Remote Sens. 43(4), 729–737 (2015). http://dx.doi.org/10.1007/s12524-015-0461-5

B. U. Shankar, S. K. Meher and A. Ghosh, “Wavelet-fuzzy hybridization: feature-extraction and land-cover classification of remote sensing images,” Appl. Soft Comput. 11(3), 2999–3011 (2011). http://dx.doi.org/10.1016/j.asoc.2010.11.024

J. Liu, K. A. Scott and P. Fieguth, “Curvelet based feature extraction of dynamic ice from SAR imagery,” in IEEE Int. Geoscience and Remote Sensing Symp., pp. 3462–3465, IEEE (2015). http://dx.doi.org/10.1109/IGARSS.2015.7326565

Z. Xue et al., “Spectral–spatial classification of hyperspectral data via morphological component analysis-based image separation,” IEEE Trans. Geosci. Remote Sens. 53(1), 70–84 (2015). http://dx.doi.org/10.1109/TGRS.2014.2318332

J. M. Bioucas-Dias and M. A. Figueiredo, “Alternating direction algorithms for constrained sparse regression: application to hyperspectral unmixing,” in 2nd Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing, pp. 1–4, IEEE (2010). http://dx.doi.org/10.1109/WHISPERS.2010.5594963

X. Xu, J. Li and M. Dalla Mura, “Remote sensing image classification based on multiple morphological component analysis,” in IEEE Int. Geoscience and Remote Sensing Symp., pp. 2596–2599, IEEE (2015). http://dx.doi.org/10.1109/IGARSS.2015.7326343

L. Zhang et al., “On combining multiple features for hyperspectral remote sensing image classification,” IEEE Trans. Geosci. Remote Sens. 50(3), 879–893 (2012). http://dx.doi.org/10.1109/TGRS.2011.2162339

L. Zhang et al., “A pixel shape index coupled with spectral information for classification of high spatial resolution remotely sensed imagery,” IEEE Trans. Geosci. Remote Sens. 44(10), 2950–2961 (2006). http://dx.doi.org/10.1109/TGRS.2006.876704

L. Zhu et al., “A novel change detection method based on high-resolution SAR images for river course,” Optik—Int. J. Light Electron Opt. 126(23), 3659–3668 (2015). http://dx.doi.org/10.1016/j.ijleo.2015.08.224

E. M. Dominguez et al., “High resolution airborne SAR image change detection in urban areas with slightly different acquisition geometries,” in Int. Archives of the Photogrammetry, Remote Sensing & Spatial Information Sciences (2015).

H. Zhou et al., “Shearlet transform based anomaly detection for hyperspectral image,” Proc. SPIE 8419, 84190I (2012). http://dx.doi.org/10.1117/12.978636

H. Kwon and N. M. Nasrabadi, “Kernel RX-algorithm: a nonlinear anomaly detector for hyperspectral imagery,” IEEE Trans. Geosci. Remote Sens. 43(2), 388–397 (2005). http://dx.doi.org/10.1109/TGRS.2004.841487

S. Li, L. Fang and H. Yin, “Multitemporal image change detection using a detail-enhancing approach with nonsubsampled contourlet transform,” IEEE Geosci. Remote Sens. Lett. 9(5), 836–840 (2012). http://dx.doi.org/10.1109/LGRS.2011.2182632

T. Celik, “Unsupervised change detection in satellite images using principal component analysis and k-means clustering,” IEEE Geosci. Remote Sens. Lett. 6(4), 772–776 (2009). http://dx.doi.org/10.1109/LGRS.2009.2025059

T. Celik, “Multiscale change detection in multitemporal satellite images,” IEEE Geosci. Remote Sens. Lett. 6(4), 820–824 (2009). http://dx.doi.org/10.1109/LGRS.2009.2026188

T. Celik, “A Bayesian approach to unsupervised multiscale change detection in synthetic aperture radar images,” Signal Process. 90(5), 1471–1485 (2010). http://dx.doi.org/10.1016/j.sigpro.2009.10.018

C. Ding and X. He, “K-means clustering via principal component analysis,” in Proc. of the Twenty-First Int. Conf. on Machine Learning, p. 29, ACM (2004).

Y. O. Ouma, S. Josaphat and R. Tateishi, “Multiscale remote sensing data segmentation and post-segmentation change detection based on logical modeling: theoretical exposition and experimental results for forestland cover change analysis,” Comput. Geosci. 34(7), 715–737 (2008). http://dx.doi.org/10.1016/j.cageo.2007.05.021

A. Schmitt, B. Wessel and A. Roth, “Curvelet-based change detection on SAR images for natural disaster mapping,” Photogramm. Fernerkundung Geoinf. 2010(6), 463–474 (2010). http://dx.doi.org/10.1127/1432-8364/2010/0068

C. Pohl and J. van Genderen, “Structuring contemporary remote sensing image fusion,” Int. J. Image Data Fusion 6(1), 3–21 (2015). http://dx.doi.org/10.1080/19479832.2014.998727

M. Choi et al., “Fusion of multispectral and panchromatic satellite images using the curvelet transform,” IEEE Geosci. Remote Sens. Lett. 2(2), 136–140 (2005). http://dx.doi.org/10.1109/LGRS.2005.845313

Q.-G. Miao et al., “A novel algorithm of image fusion using shearlets,” Opt. Commun. 284(6), 1540–1547 (2011). http://dx.doi.org/10.1016/j.optcom.2010.11.048

A.-U. Moonon, J. Hu and S. Li, “Remote sensing image fusion method based on nonsubsampled shearlet transform and sparse representation,” Sens. Imaging 16(1), 1–18 (2015). http://dx.doi.org/10.1007/s11220-014-0103-y

M. Elad and M. Aharon, “Image denoising via sparse and redundant representations over learned dictionaries,” IEEE Trans. Image Process. 15(12), 3736–3745 (2006). http://dx.doi.org/10.1109/TIP.2006.881969

S. Li and B. Yang, “A new pan-sharpening method using a compressed sensing technique,” IEEE Trans. Geosci. Remote Sens. 49(2), 738–746 (2011). http://dx.doi.org/10.1109/TGRS.2010.2067219

M. Aharon, M. Elad and A. Bruckstein, “K-SVD: an algorithm for designing overcomplete dictionaries for sparse representation,” IEEE Trans. Signal Process. 54(11), 4311–4322 (2006). http://dx.doi.org/10.1109/TSP.2006.881199

S. Zheng et al., “Remote sensing image fusion using multiscale mapped LS-SVM,” IEEE Trans. Geosci. Remote Sens. 46(5), 1313–1322 (2008). http://dx.doi.org/10.1109/TGRS.2007.912737

X. Otazu et al., “Introduction of sensor spectral response into image fusion methods. Application to wavelet-based methods,” IEEE Trans. Geosci. Remote Sens. 43(10), 2376–2385 (2005). http://dx.doi.org/10.1109/TGRS.2005.856106

S. Rahmani et al., “An adaptive IHS pan-sharpening method,” IEEE Geosci. Remote Sens. Lett. 7(4), 746–750 (2010). http://dx.doi.org/10.1109/LGRS.2010.2046715

A. Garzelli and F. Nencini, “Pan-sharpening of very high resolution multispectral images using genetic algorithms,” Int. J. Remote Sens. 27(15), 3273–3292 (2006). http://dx.doi.org/10.1080/01431160600554991

S. Ourabia and Y. Smara, “A new pansharpening approach based on nonsubsampled contourlet transform using enhanced PCA applied to SPOT and ALSAT-2A satellite images,” J. Indian Soc. Remote Sens. 44(5), 665–674 (2016). http://dx.doi.org/10.1007/s12524-016-0554-9

A. Garzelli et al., “Multiresolution fusion of multispectral and panchromatic images through the curvelet transform,” in Int. Geoscience and Remote Sensing Symp., Vol. 4, p. 2838 (2005).

H. Lu, L. Zhang and S. Serikawa, “Maximum local energy: an effective approach for multisensor image fusion in beyond wavelet transform domain,” Comput. Math. Appl. 64(5), 996–1003 (2012). http://dx.doi.org/10.1016/j.camwa.2012.03.017

H. Zhong, “Image processing of remote sensor using lifting wavelet and curvelet transform,” Int. J. Control Autom. 9(3), 215–226 (2016). http://dx.doi.org/10.14257/ijca

X. Zhang, P. Huang and P. Zhou, “Data fusion of multiple polarimetric SAR images based on combined curvelet and wavelet transform,” in 1st Asian and Pacific Conf. on Synthetic Aperture Radar, pp. 225–228, IEEE (2007). http://dx.doi.org/10.1109/APSAR.2007.4418595

S. Cheng, M. Qiguang and X. Pengfei, “A novel algorithm of remote sensing image fusion based on Shearlets and PCNN,” Neurocomputing 117, 47–53 (2013). http://dx.doi.org/10.1016/j.neucom.2012.10.025

R. P. Broussard et al., “Physiologically motivated image fusion for object detection using a pulse coupled neural network,” IEEE Trans. Neural Networks 10(3), 554–563 (1999). http://dx.doi.org/10.1109/72.761712

A. Maalouf et al., “A bandelet-based inpainting technique for clouds removal from remotely sensed images,” IEEE Trans. Geosci. Remote Sens. 47(7), 2363–2371 (2009). http://dx.doi.org/10.1109/TGRS.2008.2010454

M. Li, S. C. Liew and L. K. Kwoh, “Automated production of cloud-free and cloud shadow-free image mosaics from cloudy satellite imagery,” in Proc. of the XXth ISPRS Congress, pp. 12–13 (2004).

S. Yang, M. Wang and L. Jiao, “Contourlet hidden Markov tree and clarity-saliency driven PCNN based remote sensing images fusion,” Appl. Soft Comput. 12(1), 228–237 (2012).

L. Jiao and Q. Sun, “Advances and perspective on image perception and recognition in multiscale transform domains,” Chin. J. Comput. 29(2), 177 (2006).

G. Piella and H. Heijmans, “A new quality metric for image fusion,” in Proc. Int. Conf. on Image Processing, Vol. 3, p. III–173, IEEE (2003). http://dx.doi.org/10.1109/ICIP.2003.1247209

V. Petrovic and C. Xydeas, “Objective image fusion performance characterisation,” in 10th IEEE Int. Conf. on Computer Vision, Vol. 1 and 2, pp. 1866–1871, IEEE (2005). http://dx.doi.org/10.1109/ICCV.2005.175

C. Shi et al., “Pan-sharpening algorithm to remove thin cloud via mask dodging and nonsampled shift-invariant shearlet transform,” J. Appl. Remote Sens. 8(1), 083658 (2014). http://dx.doi.org/10.1117/1.JRS.8.083658

S. Li, B. Yang and J. Hu, “Performance comparison of different multi-resolution transforms for image fusion,” Inf. Fusion 12(2), 74–84 (2011). http://dx.doi.org/10.1016/j.inffus.2010.03.002

Y. Liu, S. Liu and Z. Wang, “A general framework for image fusion based on multi-scale transform and sparse representation,” Inf. Fusion 24, 147–164 (2015). http://dx.doi.org/10.1016/j.inffus.2014.09.004

## Biography

**Mariem Zaouali** received her engineering degree in 2013 from the National Institute of Applied Sciences and Technology (INSAT) in Tunisia. She is currently working toward her PhD in the LIMTIC Laboratory at the Higher Institute of Computer Science (ISI) in Tunisia. Her work focuses on extracting different types of descriptors from remote sensing images using multiscale geometric decompositions.

**Sonia Bouzidi** received her PhD and master’s degrees from the University Paris 7 Jussieu in France. She carried out her research at INRIA. She started as a professor-researcher at the IUP of Evry University (Paris) in 2001 and, since 2003, has been a professor-researcher at INSAT in Tunisia. She is a member of the LIMTIC Research Laboratory at ISI. Her main research activity concerns remote sensing image processing.

**Ezzeddine Zagrouba** received his HDR from FST/University Tunis ElManar, and his PhD and engineering degree from the Polytechnic National Institute of Toulouse (ENSEEIHT/INPT) in France. He is a professor at the Higher Institute of Computer Science (ISI), vice president of the Virtual University of Tunis, and director of the LIMTIC Research Laboratory at ISI. His main research activity focuses on intelligent imaging and computer vision, and he is also vice president of the research association ArtsPi.