Multiresolution image fusion, also known as pan-sharpening, aims to inject spatial information from a high-resolution image, e.g., a panchromatic or synthetic aperture radar (SAR) image, into a low-resolution image, e.g., a multispectral or hyperspectral image, while preserving the spectral properties of the low-resolution image. A large number of algorithms and methods to solve this problem have been introduced during the last two decades. They can be divided into two large groups: those based on a linear spectral transformation followed by component substitution (CS)1 and those based on a spatial frequency decomposition, usually performed by means of high-pass filtering (HPF)2,3 or multiresolution analysis.4,5,6 It is sometimes quite difficult to navigate among all these methods, though some classification attempts have already been made.6–8 We propose to look at these methods from a signal processing point of view. This type of analysis allowed us to recognize similarities and differences between various methods quite easily and thus to perform a systematic classification of most known multiresolution image fusion approaches and methods.9 Moreover, this analysis resulted in a general fusion filtering framework (GFF) for multiresolution image fusion. State-of-the-art methods are sometimes quite time consuming, which restricts their use in practice. To reduce computation time, a simple and fast variant of the GFF method, further called the high-pass filtering method (HPFM), is proposed in this paper. It performs filtering in the signal domain and thus avoids time-consuming FFT computations.
In parallel to the development of pan-sharpening methods, many attempts were undertaken to assess their quality quantitatively, usually using measures originating from image processing such as mean square error, cross-correlation (CC), the structural similarity (SSIM) index,10 Wald's protocol,11 and, finally, recently proposed joint measures: the product of a spectral measure based on SSIM and a spatial measure based on CC12 and the quality with no reference (QNR) measure.13 Some comparisons are presented in Refs. 14 and 15. Recently, a statistical evaluation of the most popular pan-sharpening quality assessment measures was performed in Refs. 16, 17, and 18. To measure two different image properties such as spectral and spatial quality, at least two measures are needed, which makes the task of ranking different fusion methods difficult. Following these results, a new joint quality measure (JQM) based on both spectral and spatial quality measures was proposed. This measure is enhanced here by a practical normalization of the measures and is successfully used for quantitative fusion quality assessment in this paper.
The paper is organized in the following way. First, the four pan-sharpening methods—GFF, HPFM, CS, and the Ehlers fusion method—are introduced. Then, a new JQM based on both spectral and spatial quality measures is described. Finally, experiments with very-high-resolution WorldView-2 (WV-2) optical satellite remote sensing data are performed, followed by discussion and conclusions.
In this section the following four pan-sharpening methods are described: GFF, HPFM, CS, and Ehlers fusion method.
General Fusion Filtering
Let us denote a low-resolution image, which can be a multispectral/hyperspectral or any other image consisting of one or more bands, by ms, and a high-resolution image, e.g., a panchromatic band, the intensity image of a synthetic aperture radar (SAR), or any other image, by pan. A lot of existing multiresolution methods or algorithms can be seen as implementations of a general fusion framework:
Each band is processed independently; band indices are omitted in the following for the sake of clarity. The first and third steps can be included in the fusion step, depending on the method. Usually, the interpolation is a bilinear (BIL) or cubic convolution (CUB) interpolation, and the fusion step is a linear or other function of the images. In the following, we formulate a GFF fusion method that includes interpolation and fusion in one step.
In order to preserve the spectral properties of the low-resolution image ms, one should add only high-frequency information from the high-resolution image pan, thus preventing mixing of low-frequency information of pan with the low-resolution image. The natural way to do this is in the spectral or Fourier domain (signal processing view).
First, both images are transformed into the spectral/Fourier domain, yielding MS and PAN. Then, high-frequency components are extracted from PAN (solid line, Fig. 1) and added to the zero-padded spectrum of MS (dotted line, Fig. 1). The fusion formula is written for a low-pass filter (LPF) as
Finally, the inverse Fourier transform delivers a fused image with an enhanced spatial resolution
Thus, the GFF method performs image fusion in the Fourier domain. Due to the known equivalence of filtering in the signal and spectral domains, and assuming that interpolation is performed in the signal domain, we can rewrite Eqs. (1) to (3) as
Equation (4) defines the fusion function introduced above and helps to better understand the relation of the proposed method to already known methods.9 Here we have to note that Eq. (4) is not the same as GFF due to the different interpolation methods used.
So the only parameter to be selected is a cutoff frequency parameter controlling the amount of filtering. In this paper, a Gaussian filter is used.
Of course, any other filter, e.g., Butterworth, can be used. Our experience showed no significant influence of the filter type on the fusion quality. Thus, the GFF method depends only on the filtering parameter. We will see in Sec. 4.4 that the optimal filtering parameter is 0.15 for WV-2 satellite data, which is well supported by other studies (e.g., Ref. 19) and existing experience. Further studies on more data are planned. Moreover, we have to note that a band-dependent filtering parameter can be easily implemented and can increase the quality of pan-sharpening, as already known from other studies.20
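The fusion steps above can be sketched in a few lines of numpy. This is a minimal illustration, not the authors' implementation: the function names, the exact parameterization of the Gaussian filter (the paper's Eq. (5) is not reproduced here), and the spectrum-scaling convention are assumptions.

```python
import numpy as np

def gaussian_lpf(shape, fc):
    # Gaussian low-pass filter defined on the FFT frequency grid;
    # fc is the cutoff frequency in cycles/sample (assumed form).
    fy = np.fft.fftfreq(shape[0])[:, None]
    fx = np.fft.fftfreq(shape[1])[None, :]
    return np.exp(-(fx**2 + fy**2) / (2.0 * fc**2))

def gff_fuse(ms, pan, fc=0.15):
    # Fuse one low-resolution band `ms` with the high-resolution `pan`:
    # zero-pad the MS spectrum to the PAN grid (the implicit
    # interpolation step) and add the high-pass-filtered PAN spectrum.
    H, W = pan.shape
    h, w = ms.shape
    MS = np.fft.fftshift(np.fft.fft2(ms))
    pad = np.zeros((H, W), dtype=complex)
    y0, x0 = (H - h) // 2, (W - w) // 2
    pad[y0:y0 + h, x0:x0 + w] = MS * (H * W) / (h * w)  # rescale for grid size change
    MS_pad = np.fft.ifftshift(pad)
    lpf = gaussian_lpf((H, W), fc)
    fused = MS_pad + (1.0 - lpf) * np.fft.fft2(pan)  # add only high frequencies of PAN
    return np.real(np.fft.ifft2(fused))
```

Note how the low frequencies of pan are suppressed by the factor (1 - lpf), so the spectral content of ms is left untouched, which is exactly the spectral-preservation argument made above.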
High-Pass Filtering Method
As seen from Eq. (4), it is possible to avoid the Fourier transform in the GFF method in order to reduce computation time. Thus, a simple and fast variant of the GFF method, further called HPFM, can be derived as follows:
1. Instead of zero-padding in the spectral domain, an interpolation of the multispectral image is performed in the signal domain.
2. A high-pass filter HPF or low-pass filter LPF with the desired properties is built in the spectral domain and then transformed into the signal domain (hpf or lpf, respectively).
3. Equation (4) is applied for image fusion in the signal domain using convolution with the designed filter.
4. Finally, histogram matching is performed.
The proposed method differs from the method introduced in Ref. 2 in the way the filter is constructed and histogram matching is performed. Usually, a simple boxcar filter is used in the signal domain, which makes fine control of the amount of filtering difficult. For the HPFM, the same filters as for the GFF method, e.g., Eq. (5), can be used. Here we have to note that, due to the addition/subtraction operations in Eq. (4), the HPFM can be interpreted as an additive model. The multiplicative model can be written as
Thus, the HPFM method depends on the filtering parameter, the model (additive or multiplicative), and the interpolation method.
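The HPFM steps can be sketched as follows. For brevity this sketch applies the designed filter via the FFT, whereas a true HPFM implementation would convolve with a small signal-domain kernel to avoid FFTs entirely; the exact form of the multiplicative model is an assumption, not a transcription of the paper's equation.

```python
import numpy as np

def hpfm_fuse(ms_up, pan, fc=0.15, model="additive"):
    # ms_up: low-resolution band already interpolated (e.g., bilinearly)
    # to the pan grid (step 1). A Gaussian low-pass filter is designed
    # in the spectral domain (step 2) and applied to pan.
    fy = np.fft.fftfreq(pan.shape[0])[:, None]
    fx = np.fft.fftfreq(pan.shape[1])[None, :]
    lpf = np.exp(-(fx**2 + fy**2) / (2.0 * fc**2))
    pan_low = np.real(np.fft.ifft2(np.fft.fft2(pan) * lpf))
    pan_high = pan - pan_low              # hpf(pan) = pan - lpf(pan)
    if model == "additive":
        return ms_up + pan_high           # additive model, as in Eq. (4)
    # assumed multiplicative variant: modulate the band by the
    # relative high-frequency content of pan
    return ms_up * (pan / (pan_low + 1e-9))
```

A small spatial kernel obtained by inverse-transforming lpf, truncating it, and convolving directly is what makes the method fast in practice.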
The CS method is one of the simplest and perhaps oldest image fusion methods. Here, a short description following a recent enhancement of the intensity-hue-saturation transformation method1 is given. Under the assumption that msi and pan are highly correlated, one can calculate the intensity, or mean, as
Now CS fusion under the assumption of Eq. (7) can be written as
Similarly to the HPFM, this way of fusion can be seen as an additive model. The multiplicative model can be written as
Thus, this method requires a selection of the model and the interpolation method.
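A compact sketch of CS fusion under these assumptions is given below. The function name is hypothetical, and the multiplicative form is an assumed variant of the relation described in the text.

```python
import numpy as np

def cs_fuse(ms_up, pan, model="additive"):
    # ms_up: array of shape (bands, H, W), already interpolated to the
    # pan grid. The intensity is the band mean, as in Eq. (7); fusion
    # substitutes pan for that intensity component.
    intensity = ms_up.mean(axis=0)
    if model == "additive":
        # add the difference between pan and intensity to every band
        return ms_up + (pan - intensity)[None, :, :]
    # assumed multiplicative variant: scale every band by pan/intensity
    return ms_up * (pan / (intensity + 1e-9))[None, :, :]
```

When pan happens to equal the intensity, both models leave the multispectral bands unchanged, which is the spectral-consistency property the high-correlation assumption is meant to approximate.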
For a detailed description of the Ehlers fusion method, see Refs. 3 and 19. Its relation to the GFF method is discussed in Ref. 9. The Ehlers method depends on three parameters: the interpolation method (cubic convolution was used, as recommended by the authors), the cutoff frequency for high-pass filtering of the panchromatic image, and the cutoff frequency for low-pass filtering of the intensity calculated according to Eq. (7). For Ehlers fusion in this paper, the Gaussian filter [Eq. (5)] was again used for both types of filtering. Finally, we have to note that our implementation of the method, further called EhlersX, in the Xdibias software environment was used. The filtering for this method is implemented in the spectral domain. Validation of our implementation against the original Ehlers software implementation in MATLAB yielded comparable results. As for the other filtering methods GFF and HPFM, the optimal value for both cutoff frequency parameters is 0.15 for WV-2 data, which coincides well with the experience of the authors of the method.
WV-2 satellite remote sensing data over the city of Munich in southern Germany were used in our experiments. For scene details, see Table 1.
Scene parameters for WorldView-2 data over Munich city.
| Image date | July 12, 2010 |
| Image time (local) | 10:30:17 |
| Look angle | 5.2 deg left |
| Resolution PAN (m) | 0.5 |
| Resolution MS (m) | 2.0 |
The quality of pan-sharpening is usually measured by spectral and/or spatial quality measures to cover both attributes of a processing result. Measures calculated for the whole image are called global. Window-based measures are calculated in selected areas or, e.g., using sliding window and can distinguish image parts with a different quality. The latter measures are outside the scope of this paper.
Many spectral quality measures have already been proposed in the literature, e.g., Refs. 5 and 15. A recent comparison17,18 showed that the correlation (CORR) between the original spectral bands and the corresponding low-pass filtered and subsampled pan-sharpened bands is one of the best. It allows us to measure the spectral quality, or spectral preservation, of a pan-sharpening method for individual bands or, by averaging, for all bands
It has high values (optimal value 1) for good spectral preservation and low values for poor preservation of the spectral characteristics. This measure alone is not able to assess the quality of a fusion result, because it is calculated only at the reduced image resolution/scale.
It exhibits high values (optimal value 1) for high spatial quality and low values for low spatial quality. Here we have to note that, due to the different widths of the spectral responses of multispectral and panchromatic data, the CC as proposed in Ref. 12 may not be sufficient because of possible differences in mean and standard deviation. SSIM accounts for such differences much better.
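The spectral measure described above can be sketched as follows. Block averaging is used here as a simple stand-in for the low-pass filtering and subsampling step; the function name and this simplification are assumptions for illustration only.

```python
import numpy as np

def corr_spectral(ms_orig, fused_band, scale):
    # Degrade the fused band back to the MS scale by block averaging,
    # then compute Pearson correlation with the original MS band.
    H, W = fused_band.shape
    deg = fused_band.reshape(H // scale, scale, W // scale, scale).mean(axis=(1, 3))
    a = ms_orig.ravel() - ms_orig.mean()
    b = deg.ravel() - deg.mean()
    return float((a @ b) / np.sqrt((a @ a) * (b @ b)))
```

A fused band that is perfectly consistent with the original at the reduced scale yields CORR = 1, matching the optimal value stated above.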
Joint Quality Measure
In an ideal case, a pan-sharpening method should exhibit both high spectral and high spatial quality measure values. In practice this is not possible because, e.g., for the GFF method (and also for other filtering methods), different parameters (amounts of filtering) lead to different image qualities. Thus, a larger high-pass filtering parameter value will lead to a higher spectral quality while reducing the spatial quality, and vice versa [see Fig. 2(b)]. None of the known separate quality measures can fulfill this requirement as a sole measure.14 Thus, a JQM could be helpful to achieve optimal parameter selection, find the best trade-off between spectral and spatial quality, or find the best method for a particular application.
One could think of a simple average or product12 of both measures: one for spectral and another for spatial quality (in this particular case, SSIM for spectral quality and CC for spatial quality). A quality measure derived in such a way can easily be biased due to the different value ranges of the separate measures. Moreover, CC for spatial quality can be insufficient for data exhibiting different spectral properties, as already stated in Sec. 4.2.
Due to the different ranges of the two measures [SSIM is usually lower than CORR, as can be seen in Fig. 2(b)], we propose to normalize one of the measures before averaging (producing a joint measure) using a linear scaling transform (Sec. 4.4). Similarly, the other minimum and maximum values are defined. We have to note that mean and standard deviation values (standardization) could be used for normalization instead of the extreme range values, at the risk of errors due to an insufficient number of samples.
Now, the JQM is obtained by averaging the corresponding spectral and normalized spatial measures
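The normalization and averaging can be sketched in a few lines. The exact form of the linear scaling, mapping the SSIM range onto the CORR range before averaging, is an assumption based on the text, since Eqs. (12) and (13) are not reproduced here.

```python
def jqm(corr, ssim, ssim_min, ssim_max, corr_min, corr_max):
    # Linearly rescale the spatial measure (SSIM) into the range of the
    # spectral measure (CORR), in the spirit of Eq. (12), then average
    # the two measures, in the spirit of Eq. (13).
    ssim_norm = corr_min + (ssim - ssim_min) * (corr_max - corr_min) / (ssim_max - ssim_min)
    return 0.5 * (corr + ssim_norm)
```

With this form, an SSIM at its observed maximum is treated as equivalent to the maximum observed CORR, so neither measure dominates the average merely through its numeric range.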
How to Normalize Quality Measures?
For the normalization as proposed in Eq. (12), we need to define four parameters: the minimum and maximum values of the two variables CORR and SSIM. These can be derived in different ways, e.g., from existing experience (subjective) or from experiments with many different methods exhibiting different fusion qualities and thus covering the whole range of spectral and spatial qualities. We follow the latter approach with the following optimizations. Due to the diversity of content in an image (remote sensing scene), we propose a data-driven estimation of the extreme values. To reduce computation time, we propose to use a single filtering-based method, e.g., HPFM, with two different parameters producing the best and worst qualities: a small parameter value (0.05) for high spatial and at the same time low spectral quality, and a large parameter value (0.7) for low spatial and at the same time high spectral quality [see Fig. 2(b)]. Thus, only two runs of the very fast HPFM method deliver the required four parameters. JQM suggests an optimal parameter value of around 0.15 for HPFM [Fig. 2(a)].
How to Use JQM for Comparison of Different Methods?
The JQM introduced in Eq. (13) requires knowledge of the normalization parameters, which are not available for all methods. As shown in the previous section, this is not even necessary, because these extreme values can be derived using a reference method, e.g., HPFM. Then we can rewrite Eq. (13) using Eq. (12)
Applying the extreme value ranges as proposed above (Sec. 4.4) results in the A and B values given in Table 2, which are data dependent but very easy and fast to derive. Thus, the proposed JQM [Eq. (14)] can be used for estimating the quality of any pan-sharpening method on these data.
Estimated values of A and B according to Eqs. (15) and (16) from the proposed extreme range values used for calculation of JQM.
An additional margin of 10% added to these extreme values (avoiding out-of-range values) ensures an appropriate comparison or quality assessment of other pan-sharpening methods. Thus, the proposed normalization of the quality measures is data dependent, which means it should be performed individually for each image/scene. This is not a great drawback, however, because the normalization has to be estimated only once per scene, and a fast fusion method such as HPFM with different parameter settings can be used.
Here we have to note that there exists one more joint quality measure, QNR (Ref. 13), which is not included in this study but is one of the topics of a follow-up paper.
In this section, we compare different pan-sharpening methods using the JQM as proposed in the previous section. Two more well-known fusion methods were added to the comparison: an Amélioration de la résolution spatiale par injection de structures (ARSIS) (Ref. 21) variant, the à trous wavelet transform model for wavelet fusion4 (implementation of Ref. 22), and Gram-Schmidt (GS) spectral sharpening implemented in ENVI software with an averaging method for low-resolution file calculation and bilinear resampling. Results of the quality assessment for the various methods are presented in the next two sections using the values of A and B from Table 2. Here we have to note that only bands spectrally overlapping with the panchromatic band are used for the quality assessment to assure physically justified results.8 Thus, the following three bands were excluded from further analysis: coastal, NIR1, and NIR2.
Comparison of Interpolation Methods
In many papers, in addition to the assessment of fusion methods, quality measures for interpolation alone (resampling of the low-resolution image to the high-resolution grid) are also reported. A comparison of the four most popular (fast enough for operational applications) interpolation methods is presented in Table 3 and visualized in Fig. 3.
Quality measures of various interpolation methods for WorldView-2 Munich data.
The BIL and CUB interpolation methods exhibit the best spectral quality (CORR), which is supported by existing experience. More surprising are the results for spatial quality (SSIM). Here, the best is nearest neighbor, followed by BIL. JQM suggests BIL as the interpolation method for pan-sharpening, which agrees well with existing experience. Here we have to note that SSIM is designed for the quality assessment of fusion methods, whereas an interpolation method is not a fusion method and contains no information from the panchromatic band. Thus, the SSIM and JQM results for interpolation methods should be treated cautiously and cannot be compared directly with the results of the next section.
Comparison of Pan-Sharpening Methods
Quality measures and computation time of various pan-sharpening methods for WorldView-2 Munich data on Intel Core 2 Quad CPU Q9450 at 2.66 GHz.
| Method | Model | Interpolation method | f_cutoff_HR | f_cutoff_I | CORR | SSIM | JQM | Time (s) |
| 1 GFF | — | Zero padding | 0.15 | — | 0.9782 | 0.8362 | 0.9828 | 85.1 |
| 3 HPFM | Additive | Cubic convolution | 0.15 | — | 0.9873 | 0.8318 | 0.9859 | 22.1 |
| 5 HPFM | Multiplicative | Cubic convolution | 0.15 | — | 0.9878 | 0.8346 | 0.9871 | 22.7 |
| 10 EhlersX | — | Cubic convolution | 0.15 | 0.15 | 0.9450 | 0.8491 | 0.9706 | 160.4 |
We see that the HPFM method [all four parameter settings (methods 2 to 5)] outperforms its parent GFF method (method 1). This is because zero-padding interpolation smooths the multispectral image to a much greater extent than, e.g., bilinear interpolation (compare the CORR values in Table 3). The multiplicative model is better than the additive model, whereas the difference between the BIL and CUB interpolation methods is negligible. The optimal filtering parameter 0.15 [see Fig. 2(a)] was used for all filtering methods in this paper. Existing experience supports this value even for other data and sensors. The two HPFM parameter settings 0.05 (method 6) and 0.7 (method 7) were used to derive the normalizing constants A and B, because the results exhibit extreme spectral and spatial qualities, as seen in Fig. 4(b). Their JQM is comparable with that of the ARSIS method, known for its high spatial quality. As expected, both CS methods and GS sharpening exhibit the lowest JQM. Ehlers fusion finds its place between this group and ARSIS. These observations are fully supported by existing experience and visual analysis (Fig. 5).
Additionally, the computation time of the methods is presented in Table 4 for all eight bands of the WV-2 Munich data. We see that the proposed HPFM pan-sharpening method is more than four times faster than the parent GFF fusion method. Moreover, the speed of the proposed HPFM method is comparable with that of the classical CS and GS methods. The Ehlers and ARSIS fusion methods are about two times slower than GFF and thus are less suitable for operational applications.
A simplified version of the GFF method—the fast, simple, and effective HPFM—is introduced, with potential for operational remote sensing applications. It performs filtering in the signal domain, thus avoiding time-consuming FFT computations.
A new JQM based on both spectral and spatial quality measures (carefully selected from previous studies) is used to assess the fusion quality of six pan-sharpening methods—GFF, HPFM (with different parameter settings), CS, GS, Ehlers fusion, and ARSIS—on very-high-resolution WV-2 optical satellite remote sensing data. Only spectral bands whose spectral range overlaps with that of the panchromatic band were used for quality assessment to ensure a physically consistent evaluation. The proposed JQM allows a convenient ranking of different methods using a sole quality measure.
Experiments showed that the HPFM pan-sharpening method exhibits the best fusion quality among the several popular methods tested (even better than its parent method GFF) and at the same time requires more than four times less computation time than the GFF method. Thus, the HPFM method, being competitive in speed with known fast methods such as CS and GS while exhibiting a much higher quality, is a good candidate for operational applications. The new quality measure JQM allowed a correct ranking of the different pan-sharpening methods, consistent with existing experience and visual analysis, and is thus a suitable quality measure for selecting the parameters of a particular fusion method and for comparing different methods.
We would like to thank DigitalGlobe and European Space Imaging for the collection and provision of the WorldView-2 scene over the city of Munich.
J. Hill et al., "A local correlation approach for the fusion of remote sensing data with different spatial resolution in forestry applications," in Proc. of Int. Archives of Photogrammetry and Remote Sensing, Vol. 32, Part 7-4-3 W6, pp. 167–174, ISPRS, Valladolid, Spain (1999).
S. Klonus and M. Ehlers, "Image fusion using the Ehlers spectral characteristics preservation algorithm," GISci. Rem. Sens. 44(2), 93–116 (2007), http://dx.doi.org/10.2747/1548-1603.44.2.93.
B. Aiazzi et al., "Context-driven fusion of high spatial and spectral resolution images based on oversampled multiresolution analysis," IEEE Trans. Geosci. Rem. Sens. 40(10), 2300–2312 (2002), http://dx.doi.org/10.1109/TGRS.2002.803623.
L. Alparone et al., "Comparison of pansharpening algorithms: outcome of the 2006 GRS-S data-fusion contest," IEEE Trans. Geosci. Rem. Sens. 45(10), 3012–3021 (2007), http://dx.doi.org/10.1109/TGRS.2007.904923.
B. Aiazzi et al., "A comparison between global and context-adaptive pansharpening of multispectral images," IEEE Geosci. Rem. Sens. Lett. 6(2), 302–306 (2009), http://dx.doi.org/10.1109/LGRS.2008.2012003.
C. Thomas et al., "Synthesis of multispectral images to high spatial resolution: a critical review of fusion methods based on remote sensing physics," IEEE Trans. Geosci. Remote Sens. 46(5), 1301–1312 (2008), http://dx.doi.org/10.1109/TGRS.2007.912448.
G. Palubinskas and P. Reinartz, "Multi-resolution, multi-sensor image fusion: general fusion framework," in Proc. of Joint Urban Remote Sensing Event, pp. 313–316, IEEE (2011).
Z. Wang et al., "Image quality assessment: from error visibility to structural similarity," IEEE Trans. Image Process. 13(4), 600–612 (2004), http://dx.doi.org/10.1109/TIP.2003.819861.
L. Wald, T. Ranchin, and M. Mangolini, "Fusion of satellite images of different spatial resolutions: assessing the quality of resulting images," Photogramm. Eng. Rem. Sens. 63(6), 691–699 (1997).
C. Padwick et al., "WorldView-2 pan-sharpening," in Proc. American Society for Photogrammetry and Remote Sensing, pp. 13, ASPRS, Bethesda, Maryland (2010).
L. Alparone et al., "Multispectral and panchromatic data fusion assessment without reference," Photogramm. Eng. Rem. Sens. 74(2), 193–200 (2008).
S. Li, Z. Li, and J. Gong, "Multivariate statistical analysis of measures for assessing the quality of image fusion," Int. J. Image Data Fusion 1(1), 47–66 (2010), http://dx.doi.org/10.1080/19479830903562009.
A. Makarau, G. Palubinskas, and P. Reinartz, "Multiresolution image fusion: phase congruency for spatial consistency assessment," in Proc. of ISPRS Technical Commission VII Symposium–100 Years ISPRS–Advancing Remote Sensing Science, Vol. XXXVIII, Part 7B, pp. 383–388, ISPRS, Vienna, Austria (2010).
A. Makarau, G. Palubinskas, and P. Reinartz, "Selection of numerical measures for pan-sharpening assessment," in Proc. Int. Geoscience and Remote Sensing Symp., pp. 2264–2267, IEEE (2012).
A. Makarau, G. Palubinskas, and P. Reinartz, "Analysis and selection of pan-sharpening assessment measures," J. Appl. Rem. Sens. 6(1), 063548 (2012), http://dx.doi.org/10.1117/1.JRS.6.063548.
S. Klonus, "Optimierung und Auswirkungen von ikonischen Bildfusionsverfahren zur Verbesserung von fernerkundlichen Auswerteverfahren," Ph.D. dissertation, Universität Osnabrück, Germany (2011).
B. Aiazzi et al., "MTF-tailored multiscale fusion of high-resolution MS and Pan imagery," Photogramm. Eng. Rem. Sens. 72(5), 591–596 (2006).
T. Ranchin and L. Wald, "Fusion of high spatial and spectral resolution images: the ARSIS concept and its implementation," Photogramm. Eng. Rem. Sens. 66(1), 49–61 (2000).
Gintautas Palubinskas received MS and PhD degrees in mathematics from Vilnius University, Vilnius, Lithuania, in 1981, and Institute of Mathematics and Informatics (IMI), Vilnius, Lithuania, in 1991, respectively. His doctoral dissertation was on spatial image recognition. He was a research scientist at the IMI from 1981 to 1997. From 1993 to 1997, he was a visiting research scientist at German Remote Sensing Data Center, DLR; the Department of Geography, Swansea University, Wales, U.K.; Institute of Navigation, Stuttgart University, Germany; Max-Planck-Institute of Cognitive Neuroscience, Leipzig, Germany. Since 1997, he has been a research scientist at German Remote Sensing Data Center (later Remote Sensing Technology Institute), German Aerospace Center DLR. He is the author/coauthor of about 40 papers published in peer-reviewed journals. Current interests are in image processing, image fusion, classification, change detection, traffic monitoring, data fusion for optical and SAR remote sensing applications.