1. Introduction

Due to launch cost and technical limitations, most Earth observation satellites, such as IKONOS, QuickBird, GeoEye, WorldView-2, GaoFen-1, and GaoFen-2, provide a high-resolution (HR) panchromatic (PAN) image and a low-resolution (LR) multispectral (MS) image with several spectral bands.1–4 With the increasing demand for remote sensing data of higher spatial and spectral resolution in applications such as feature extraction, land-cover classification, and climate change evaluation, the ability to capture high-quality images at lower cost is becoming increasingly important. Pansharpening is a special form of data fusion that emerged from this practical requirement: it integrates the complementary spectral and spatial characteristics of the provided MS and PAN images into the desired product.2 To date, a large number of pansharpening approaches have been developed. Broadly, three methodologies are commonly used: component substitution, high-frequency information injection, and model-based methods. The basic framework of component substitution methods is to transform the MS image into another space using a suitable transformation and then substitute the intensity channel or the first principal component with the PAN image. The classical component substitution methods are intensity–hue–saturation (IHS)4 and principal component substitution.5 These methods may yield poor results in terms of spectral fidelity.6 In addition, an inadequate spectral model in the pansharpening method also generates spectral distortion. Several improvements to the original IHS-based method have been proposed. Rahmani et al.7 proposed an adaptive IHS (AIHS) method, which represents the intensity component as an adaptive linear combination of the MS bands, with the combination coefficients obtained by solving an optimization problem.
Masoudi and Kabiri8 proposed a new IHS method using texture analysis and genetic algorithm adaption. The high-frequency information injection methods can be briefly summarized as the sequential operations of detail extraction and injection.9–12 Since most of the directional and structural information is contained in the PAN image, several researchers proposed using wavelet11 and contourlet transforms12 to capture it from the PAN image. The details missing from the LR MS image can then be extracted from the PAN image and injected. Compared with the component substitution methods, these methods preserve spectral information better, but spatial distortions such as blurring and artifacts may occur. The model-based fusion approach is another important category. Building on image restoration, some research regards the fused image as the solution of an inverse optimization problem.13–17 Recently, inspired by sparse representation techniques, some researchers have achieved great success in data fusion.18–21 The initial work was proposed by Li and Yang.18 Then, Jiang et al.19 proposed a two-step sparse coding method with patch normalization (PN-TSSC). Zhu and Bamler20 proposed a pansharpening method named sparse fusion of images (SparseFI), which explores the sparse representation of MS image patches in a dictionary trained only from the PAN image. To take into account the signal correlation among individual MS channels, the jointly sparse fusion of images (J-SparseFI) algorithm was proposed.21 Although these methods perform well, they assume that the sparse coefficients of LR MS image patches are identical to those of the corresponding HR MS patches.
Due to the complex and varied image content of real-world images, the mapping between the sparse coefficients of the LR MS image and those of the HR MS image can be complex.22,23 To address this issue, we propose a pansharpening approach via sparse regression that aims to find the intrinsic, implicit relationship among the sparse coefficients and thereby improve the robustness and stability of remote sensing image pansharpening. To achieve this goal, on the one hand, we learn a pair of compact dictionaries from patch pairs sampled from the high- and low-resolution images. The learned dictionary pair can effectively characterize the structural information of the LR MS and HR MS images. On the other hand, taking into account the complex relationship between the coding coefficients of the LR MS and HR MS images, we model this relationship by a ridge regression and an elastic-net regression. The ridge regression characterizes the intrapatch relationship, and the elastic-net regression describes the interpatch relationship. The theoretical analyses and experimental results in this paper indicate that the proposed method generates competitive fusion results. The flowchart of the proposed method is summarized in Fig. 1.

2. Pansharpening via Mapping of Sparse Coefficients

In this section, the scheme of dictionary learning and the ridge regression of intrapatches are introduced first. Thereafter, the interpatch regression mapping based on the elastic-net model is discussed in detail. Finally, we present the HR MS image reconstruction.

2.1. Dictionary Learning and the Ridge Regression of Intrapatches

The HR PAN image patches and the corresponding LR PAN image patches have sparse coefficients $\alpha_h$ and $\alpha_l$ under the corresponding dictionaries $D_h$ and $D_l$, respectively. Inspired by Wang et al.,22 we make the assumption that there is an implicit mapping function $M$ between the sparse coefficients of LR MS and HR MS patch pairs.
In addition, the sparse coefficients of LR PAN and HR PAN patch pairs are assumed to share this mapping function (see Fig. 2), which can be modeled by the following linear ridge equation:

$$\alpha_h = M\alpha_l + \varepsilon, \tag{1}$$

where $\varepsilon$ is an unknown error with zero mean. The standard approach is ordinary least squares (OLS), which seeks to minimize the sum of squared residuals. Mathematically, it solves Eq. (1) in the form

$$\min_M \|\alpha_h - M\alpha_l\|_2^2. \tag{2}$$

However, OLS often does poorly in prediction, so penalization techniques have been proposed to improve OLS and obtain a particular solution with desirable properties; ridge regression24 and the lasso25 are two popular and representative methods. In our work, to obtain a mapping function with a low computational burden and a stable least squares solution, we impose an $\ell_2$-norm regularization penalty. The regularization term is included in the above minimization:

$$\min_M \|\alpha_h - M\alpha_l\|_2^2 + \lambda_M \|M\|_F^2. \tag{3}$$

To enforce that the image patch pairs have corresponding sparse coefficients with respect to the HR dictionary $D_h$ and the LR dictionary $D_l$, the joint learning model below is proposed to find the desired dictionary pair as well as the desired intrapatch mapping:

$$\min_{D_h, D_l, M, \Lambda_h, \Lambda_l} \|X_h - D_h\Lambda_h\|_F^2 + \|X_l - D_l\Lambda_l\|_F^2 + \gamma_1\|\Lambda_h - M\Lambda_l\|_F^2 + \lambda_1\|\Lambda_h\|_1 + \lambda_2\|\Lambda_l\|_1 + \lambda_M\|M\|_F^2 \quad \text{s.t.}\ \|d_h^{(i)}\|_2 \le 1,\ \|d_l^{(i)}\|_2 \le 1. \tag{4}$$

In the above equation, the PAN image and its degraded version are denoted by $X_h$ and $X_l$, respectively; $x_h^{(i)}$ and $x_l^{(i)}$ are the $i$'th patch pair with sparse coefficient matrices $\Lambda_h$ and $\Lambda_l$; $\lambda_1$, $\lambda_2$, $\gamma_1$, and $\lambda_M$ are the regularization parameters; and $d_h^{(i)}$ and $d_l^{(i)}$ are the atoms of the dictionaries $D_h$ and $D_l$, respectively. Given $X_h$, $X_l$, $\lambda_1$, $\lambda_2$, $\gamma_1$, and $\lambda_M$, Eq. (4) can be rewritten as Eq. (5) with only $D_h$, $D_l$, $M$, $\Lambda_h$, and $\Lambda_l$ as unknowns. Obviously, this objective function is nonconvex in all of the variables jointly but is convex in each of them with the others fixed.26,27 Thus, initializing the mapping function as an identity matrix and the dictionary pair with the DCT basis, the optimization proceeds in an alternating scheme over three stages, which we call "sparse coefficients update," "dictionary update," and "mapping function update," corresponding to Eqs. (6)–(11). In other words, Eq. (5) is solved by decomposing it into three subproblems.

In the sparse coefficients update stage, the mapping function and dictionaries are fixed, and we obtain the sparse coefficients $\Lambda_h$ and $\Lambda_l$ as follows:

$$\min_{\Lambda_h} \|X_h - D_h\Lambda_h\|_F^2 + \gamma_1\|\Lambda_h - M\Lambda_l\|_F^2 + \lambda_1\|\Lambda_h\|_1, \tag{6}$$

$$\min_{\Lambda_l} \|X_l - D_l\Lambda_l\|_F^2 + \gamma_1\|\Lambda_h - M\Lambda_l\|_F^2 + \lambda_2\|\Lambda_l\|_1. \tag{7}$$

Different from traditional sparse coding, each equation has an extra $\ell_2$-norm regularization term. To simplify them, we combine the first and last quadratic terms and rewrite these equations in the form of traditional sparse coding. Given $\tilde{X}_h = [X_h;\ \sqrt{\gamma_1}\,M\Lambda_l]$ and $\tilde{D}_h = [D_h;\ \sqrt{\gamma_1}\,I]$, Eq. (6) has the following form:

$$\min_{\Lambda_h} \|\tilde{X}_h - \tilde{D}_h\Lambda_h\|_F^2 + \lambda_1\|\Lambda_h\|_1, \tag{8}$$

where $I$ is an identity matrix. Equation (8) can then be solved by the least angle regression algorithm.28 Similar to the sparse coefficients update stage, with the sparse coefficients fixed, we update the dictionaries according to the following equations:

$$\min_{D_h} \|X_h - D_h\Lambda_h\|_F^2 \quad \text{s.t.}\ \|d_h^{(i)}\|_2 \le 1, \tag{9}$$

$$\min_{D_l} \|X_l - D_l\Lambda_l\|_F^2 \quad \text{s.t.}\ \|d_l^{(i)}\|_2 \le 1. \tag{10}$$

Equations (9) and (10) are quadratic programs, and the Lagrange dual technique29 can be used to solve them. At last, we update the mapping function with the dictionaries and sparse coefficients fixed:

$$\min_M \|\Lambda_h - M\Lambda_l\|_F^2 + \tilde{\lambda}\|M\|_F^2, \tag{11}$$

where $\tilde{\lambda} = \lambda_M/\gamma_1$. Equation (11) has the closed-form solution

$$M = \Lambda_h\Lambda_l^T\left(\Lambda_l\Lambda_l^T + \tilde{\lambda} I\right)^{-1}. \tag{12}$$

2.2. Learning Interpatch Regression Mapping Based on the Elastic-Net Model

The regression mapping model above expresses only the relationship among the sparse coefficients of image intrapatches. In this section, the relationship between the sparse coefficients of a patch and the sparse coefficients of all of the LR patches is taken into account. Let the columns of $\Lambda_l$ be the predictors and $\alpha_h$ be the response.30 Then, the model between response and predictors can be assumed as

$$\alpha_h = \Lambda_l w + \varepsilon, \tag{13}$$

as shown in Fig. 3. The aim of this relationship is to seek the weight vector $w$. The response is centered and the predictors are standardized, and based on the elastic-net regression model,30 we propose the following model:

$$\min_w \|\alpha_h - \Lambda_l w\|_2^2 + \gamma_2\|w\|_1 + \gamma_3\|w\|_2^2, \tag{14}$$

where $\gamma_2$ and $\gamma_3$ are the regularization parameters. Let us define $\tilde{\Lambda} = (1+\gamma_3)^{-1/2}[\Lambda_l;\ \sqrt{\gamma_3}\,I]$, $\tilde{\alpha} = [\alpha_h;\ 0]$, $\tilde{w} = \sqrt{1+\gamma_3}\,w$, and $\tilde{\gamma} = \gamma_2/\sqrt{1+\gamma_3}$. Then, the elastic-net model can be described as a lasso-type problem:

$$\min_{\tilde{w}} \|\tilde{\alpha} - \tilde{\Lambda}\tilde{w}\|_2^2 + \tilde{\gamma}\|\tilde{w}\|_1. \tag{15}$$

The forward–backward operator splitting algorithm31 is employed to solve this problem: a forward (gradient) step on the smooth term,

$$v^{(t)} = \tilde{w}^{(t)} + \frac{1}{L}\tilde{\Lambda}^T\left(\tilde{\alpha} - \tilde{\Lambda}\tilde{w}^{(t)}\right),$$

where $L$ is a Lipschitz constant of the gradient, is followed by a backward (proximal) step, which can be given by the classical iterative soft-threshold method32 with threshold $\tau = \tilde{\gamma}/(2L)$.
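For concreteness, the two coefficient-mapping updates used here — the closed-form ridge solution of Eq. (12) and the soft-thresholding step of the forward–backward (ISTA) iteration — can be sketched in a few lines of NumPy. This is a toy sketch under assumed shapes and names (`A_h`, `A_l` for coefficient matrices, `lam` and `gamma` for the penalties), not the authors' implementation.

```python
import numpy as np

def ridge_mapping(A_h, A_l, lam=0.1):
    """Closed-form ridge update: M = A_h A_l^T (A_l A_l^T + lam*I)^{-1}."""
    k = A_l.shape[0]
    return A_h @ A_l.T @ np.linalg.inv(A_l @ A_l.T + lam * np.eye(k))

def soft_threshold(v, tau):
    """Componentwise soft-thresholding, the proximal operator of tau*||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def ista_lasso(X, y, gamma, n_iter=500):
    """Iterative soft-thresholding for min_w ||y - Xw||^2 + gamma*||w||_1."""
    L = 2.0 * np.linalg.norm(X, 2) ** 2        # Lipschitz constant of the gradient
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        v = w + (2.0 / L) * X.T @ (y - X @ w)  # forward (gradient) step
        w = soft_threshold(v, gamma / L)       # backward (proximal) step
    return w
```

With `lam` near zero and noiseless data, `ridge_mapping` recovers the underlying linear map; for an orthonormal design matrix, `ista_lasso` converges in a single soft-thresholding step.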
This yields the componentwise update $\tilde{w}^{(t+1)}_i = \operatorname{sign}(v^{(t)}_i)\max(|v^{(t)}_i| - \tau,\ 0)$.

2.3. High-Resolution Multispectral Image Reconstruction

After learning the dictionary pair $(D_h, D_l)$, the intrapatch mapping function $M$, and the interpatch mapping function $w$ from the HR PAN image patches and their degraded counterparts, we can reconstruct the HR MS image under the following assumptions:
The sparse coefficients $\alpha_l^{(k)}$ of the $k$'th band of the LR MS image can be calculated by the following step (each band is processed independently):

$$\alpha_l^{(k)} = \arg\min_{\alpha}\ \|x_l^{(k)} - D_l\alpha\|_2^2 + \lambda_2\|\alpha\|_1.$$

Then, we generate the corresponding coding coefficients $\alpha_M^{(k)}$ and $\alpha_w^{(k)}$ associated with the regression functions $M$ and $w$, respectively. The sparse coefficients of the $k$'th band of the HR MS image are taken as the weighted combination $\alpha_h^{(k)} = \eta\,\alpha_M^{(k)} + (1-\eta)\,\alpha_w^{(k)}$, where $\eta$ is the weight parameter. Finally, each band can be reconstructed as $\hat{x}_h^{(k)} = D_h\alpha_h^{(k)}$.

3. Experimental Results and Analysis

To assess the performance of the proposed method, both simulated experiments and real-data experiments are carried out. The simulated experiments are based on the strategy proposed by Wald et al.33 First, the original PAN and MS images are blurred with a low-pass filter and downsampled by a decimation factor of 4 to obtain a degraded PAN image and degraded MS images. Then, these degraded PAN and MS images are fused to yield HR MS images with the same spatial resolution as the original MS images. Finally, the fused HR MS images are compared with the original MS images. QuickBird and WorldView-2 data are employed to test the performance of the proposed method. Five typical evaluation metrics are adopted to quantitatively evaluate the pansharpened results. The correlation coefficient (CC)34 and root-mean-square error (RMSE) are calculated for each band between the fused MS images and the reference original MS image. The erreur relative globale adimensionnelle de synthèse (ERGAS)33 and Q4,35 two comprehensive evaluation indexes, provide single measures of fusion performance over all the MS bands. Furthermore, the spectral angle mapper (SAM) index34 is also considered to measure the spectral distortion. Smaller RMSE, SAM, and ERGAS values and larger CC and Q4 values indicate a better fusion result.
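To make the evaluation protocol concrete, the sketch below degrades an image following Wald's protocol and computes the per-band CC and RMSE and the global SAM and ERGAS indexes. It is a simplified illustration, not the authors' code: a separable Gaussian low-pass filter and a `(bands, H, W)` array layout are assumptions.

```python
import numpy as np

def degrade(img, ratio=4, sigma=1.0):
    """Wald's protocol: low-pass filter each band, then decimate by `ratio`.
    `img` has shape (bands, H, W); a Gaussian blur is assumed here."""
    r = int(3 * sigma)
    x = np.arange(-r, r + 1)
    k = np.exp(-x**2 / (2 * sigma**2)); k /= k.sum()
    out = []
    for band in img:
        b = np.apply_along_axis(lambda v: np.convolve(v, k, mode="same"), 0, band)
        b = np.apply_along_axis(lambda v: np.convolve(v, k, mode="same"), 1, b)
        out.append(b[::ratio, ::ratio])
    return np.stack(out)

def cc(a, b):
    """Correlation coefficient between two bands."""
    return np.corrcoef(a.ravel(), b.ravel())[0, 1]

def rmse(a, b):
    return np.sqrt(np.mean((a - b) ** 2))

def sam(fused, ref, eps=1e-12):
    """Spectral angle mapper in degrees, averaged over pixels."""
    f = fused.reshape(fused.shape[0], -1)
    r = ref.reshape(ref.shape[0], -1)
    cos = (f * r).sum(0) / (np.linalg.norm(f, axis=0) * np.linalg.norm(r, axis=0) + eps)
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))).mean()

def ergas(fused, ref, ratio=4):
    """ERGAS = 100/ratio * sqrt(mean_k (RMSE_k / mean(ref_k))^2)."""
    terms = [(rmse(f, r) / r.mean()) ** 2 for f, r in zip(fused, ref)]
    return 100.0 / ratio * np.sqrt(np.mean(terms))
```

A perfect fusion gives CC = 1 and RMSE = SAM = ERGAS = 0, matching the statement above that smaller RMSE, SAM, and ERGAS and larger CC indicate better results.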
In the real-data experiments, since there is no reference HR MS image, the "quality with no reference" (QNR) measurement is used to evaluate the different pansharpened results objectively.35 The proposed method is compared with five popular fusion algorithms: AIHS,7 Wavelet,11 PN-TSSC,19 SparseFI,20 and J-SparseFI.21 The results of the AIHS method are obtained with the software developed by Rahmani et al.; the implementation is available online in Ref. 36. The default parameters given in the respective implementations are adopted. Two levels of decomposition are used for the Wavelet method. For the SparseFI method, a total of 10,000 patch pairs are selected to construct the dictionary pairs; for the J-SparseFI method, the dictionary size is 1000. For our proposed pansharpening method, there are several parameters to be selected. In our experiments, the weight parameter, the regularization parameters, and the patch size are fixed, and the dictionary size is set to 512. In Sec. 3.1, we provide a recipe for selecting these parameters to achieve a promising pansharpened result. Experimental results using parameters selected according to this recipe are presented through evolution curves and final fused images on different datasets.

3.1. Parameter Analysis

In this section, we investigate the effects of the weight parameter and the regularization parameters λ1, λ2, and γ1 of the proposed method.
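The parameter study in this section amounts to evaluating the method over a small grid of candidate values and keeping the best-scoring combination. A minimal sketch of that loop is below; the score function is a hypothetical stand-in for "fuse with these parameters, then measure ERGAS against the reference" and is not the authors' code.

```python
import itertools

def grid_search(grid, score_fn):
    """Exhaustive search over a dict of parameter lists; returns the
    combination with the smallest score (e.g., ERGAS on validation data)."""
    names = list(grid)
    best, best_score = None, float("inf")
    for values in itertools.product(*(grid[n] for n in names)):
        combo = dict(zip(names, values))
        s = score_fn(**combo)
        if s < best_score:
            best, best_score = combo, s
    return best, best_score

# Hypothetical stand-in for "fuse, then measure ERGAS against the reference":
toy_score = lambda lam1, lam2: (lam1 - 0.01) ** 2 + (lam2 - 0.1) ** 2
grid = {"lam1": [0.001, 0.01, 0.1], "lam2": [0.01, 0.1, 1.0]}
best, best_score = grid_search(grid, toy_score)
```

In practice, each score evaluation requires one full fusion run, so the grids are kept small, as in Tables 1–3.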
Table 1. The effects of the proposed method under different λ1 on WorldView-2 data.
The bold and italic values represent the best results.

Table 2. The effects of the proposed method under different λ2 on WorldView-2 data.
The bold and italic values represent the best results.

Table 3. The effects of the proposed method under different γ1 on QuickBird data.
The bold values represent the best results.

3.2. Effects of Patch Size

As described in Sec. 2, our proposed method is based on patch learning. Thus, the effect of patch size on pansharpening performance is evaluated in this section. To be fair, the dictionary size is fixed at 512. Four different patch sizes for the LR PAN image are studied. Figure 6 shows how the patch size affects the visual quality of the fused results on WorldView-2 data. We can observe that the difference in spectral distortion is very small under the different patch sizes. To better illustrate how performance changes with patch size, the quality indexes are calculated, where the average CC and RMSE over the eight bands are presented. In addition, all index values are normalized to the range [0, 1]. The normalized results with respect to the different patch sizes are plotted in Fig. 7, where the horizontal axis is the patch size and the vertical axis is the normalized result. Larger CC and Q4 indicate a better fused result, and smaller RMSE, SAM, and ERGAS mean a better result. Based on the curves in Fig. 7, the best performance of the proposed method is obtained at one of the patch sizes, while another patch size performs only slightly worse with lower space complexity. Taking into account the trade-off between future practical application and performance, we choose the latter patch size in the following experiments.

3.3. Effects of Dictionary Size

In the above experiments, we fixed the dictionary size at 512. In general, larger dictionaries possess more expressive power and thus may yield more accurate approximations while increasing the computation cost. In this section, we evaluate the effect of the dictionary size, i.e., the number of atoms in the dictionary, on pansharpening.
From the sampled image patch pairs, we train four dictionaries of sizes 256, 512, 1024, and 2048 and apply them to the same remote sensing image. The results are evaluated both visually and quantitatively in Figs. 8–10 and Table 4.

Table 4. The objective indexes of the pansharpened images with dictionaries of different sizes.
The bold values represent the best results.

Figure 8 shows the fused results for the QuickBird image using dictionaries of different sizes. The human visual system is not sensitive to weak spectral distortions; distortion is usually judged from color changes.10 Thus, in Fig. 9, we display the difference in pixel values measured between each pansharpened image and the reference HR MS image. Deep blue represents the smallest difference, whereas red means the largest difference. While there are few visual differences among the results for dictionary sizes from 256 to 2048, we do observe that the artifacts gradually diminish with larger dictionaries (see the subtle differences in the yellow circle in Fig. 9). In Table 4, we list the five indexes of the pansharpened image for dictionaries of different sizes. As shown in the table, using larger dictionaries yields better quantitative indexes. However, the computation is approximately linear in the size of the dictionary, i.e., larger dictionaries result in heavier computation. Figure 10 shows the computation time in seconds for the fused image. In practice, one chooses an appropriate dictionary size as a trade-off between pansharpening quality and computation cost. We find that a dictionary size of 512 can yield decent outputs while allowing fast computation.

3.4. Simulation Results and Analysis

The LR MS image in the simulated WorldView-2 data experiment has eight bands, with a corresponding PAN image of higher spatial resolution. In Fig. 11(i), there are many varieties of ground objects, such as vegetation, buildings, and roads. In terms of visual effects, the Wavelet method suffers from both spectral distortion and blocky artifacts in the building regions, as shown in Fig. 11(d). Meanwhile, the proposed method and the J-SparseFI method show especially significant improvement in spatial resolution.
The details in the final fused images are as clear as those in the PAN image. The AIHS and PN-TSSC methods are not good enough at improving the spatial resolution. In Figs. 11(c) and 11(e), many details are lost in the buildings, and noise is introduced into the fused images. Compared with the AIHS and PN-TSSC methods, the SparseFI method injects more details. However, spectral distortions occur: the color of the whole fused image is dark, and the visual effect is not close to the reference HR MS image. To evaluate the performance of the various methods objectively, Table 5 presents the objective evaluations of the different methods. On the whole, the proposed method demonstrates the best objective performance, ranking first for CC, RMSE, SAM, ERGAS, and Q4. The CC value of the J-SparseFI method is inferior to that of the proposed method, which means the J-SparseFI method cannot preserve the spectral information as well as the latter; its high Q4 value reflects its effective improvement of the spatial resolution. The relatively poor SAM and ERGAS values of the AIHS, PN-TSSC, and SparseFI methods reveal their shortcomings in spectral information preservation. We also note that the Q4 values of the SparseFI and J-SparseFI methods are very high, which reflects their advantage in improving the spatial resolution.

Table 5. Objective performance of different pansharpening methods on WorldView-2 data.
Figure 12 shows the difference in pixel values measured between each pansharpened image and the reference HR MS image. We can see from the corresponding difference images that the fused images of all methods contain large areas of blue and deep blue. The proposed method shows the best performance: the smallest red area, in Fig. 12(f), indicates that the proposed method has few outliers, which is consistent with the above objective and subjective analysis.

3.5. Experiments on Real Data

To verify the effectiveness of the proposed method in practical applications, we conduct experiments on real data. Figures 13 and 14 present the fused results of the six different methods. In Fig. 13, we can see that the AIHS method has the problem of texture overenhancement, as shown in the mountain area. Figures 13(e)–13(g) are the pansharpened images produced by PN-TSSC, SparseFI, and J-SparseFI, respectively, which obtain promising results without causing obvious spectral or spatial distortion. Since our work models the complex mapping relationship between the LR MS and HR MS images, experiments are also performed on complex urban scenes, as shown in Fig. 14. The final pansharpened results of the various methods are presented in Figs. 14(c)–14(h). Comparing all the results, the component substitution method (AIHS) and the model-based methods (PN-TSSC, SparseFI, J-SparseFI, and the proposed method) outperform the high-frequency information injection method (Wavelet) in preserving the spectral information. Specifically, the proposed method produces natural and satisfactory pansharpened images, with spatial structures similar to the PAN image and spectral information similar to the MS image, as shown in Fig. 14(h). Tables 6 and 7 show the quantitative assessment results.35 The ranking of QNR scores confirms that the proposed method is better than the other methods at sharpening the real data.
Table 6. Comparison of the proposed method with other methods on the real data shown in Fig. 13.
Table 7. Comparison of the proposed method with other methods on the real data shown in Fig. 14.
3.6. Time Consumption

All the methods are implemented in MATLAB® 2010b and run on an Intel Core i7 @ 3.6-GHz PC with 32-GB RAM. For fusing a PAN image and the corresponding MS images, the AIHS and Wavelet methods need less than 1 s. The PN-TSSC, SparseFI, and J-SparseFI methods take about 147 s, 115 s, and 5 min, respectively. The running time of our proposed method is 133 s. Compared with the first two methods, the running time of the patch-based methods still has room for improvement. However, it is reasonable to believe that with the rapid development of computer hardware and computation techniques, the time cost of the proposed method will soon no longer be an issue. We could also learn multiple projection matrices from the LR MS feature spaces to the corresponding HR MS feature spaces, which may decrease the computation time in the pansharpening phase. This will be investigated in our future work.

4. Conclusion

In this paper, combining ridge regression and an elastic net within sparse representation, a pansharpening method is proposed for merging a PAN image with an MS image. The method uses HR PAN image patches and their degraded versions to construct a training database. Then, the semicoupled dictionary learning method is used to train the LR–HR dictionary pair, which depicts the MS structural information. The relationship between the coding coefficients of LR MS and HR MS image patches is predicted by the within-patch ridge regression and the among-patch elastic-net model. Our method can enhance the spatial resolution of MS images while reducing spectral distortion. The experimental results demonstrate that the proposed method compares favorably with other state-of-the-art fusion methods.

Acknowledgments

This work was supported in part by the Fundamental Research Funds for the Central Universities (Grant No. LGZD201702), the National Natural Science Foundation of China (Grant Nos. 61302178, 61171165, 11431015, and 61571230), the Foundation of Guangxi Province (Grant No.
2014GXNSFAA118360), the Foundation of Jiangsu Province (Grant No. BK20161055), and the National Scientific Equipment Developing Project of China under Grant No. 2012YQ050250.

References

1. T. M. Tu et al., "An adjustable pan-sharpening approach for IKONOS/QuickBird/GeoEye-1/WorldView-2 imagery," IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 5(1), 125–134 (2012). http://dx.doi.org/10.1109/JSTARS.2011.2181827
2. H. T. Yin and S. T. Li, "Pansharpening with multiscale normalized nonlocal means filter: a two-step approach," IEEE Trans. Geosci. Remote Sens. 53(10), 5734–5745 (2015). http://dx.doi.org/10.1109/TGRS.2015.2429691
3. S. Z. Tang et al., "Pan-sharpening using 2D CCA," Remote Sens. Lett. 6(5), 341–350 (2015). http://dx.doi.org/10.1080/2150704X.2015.1034882
4. T. M. Tu et al., "A fast intensity hue-saturation fusion technique with spectral adjustment for IKONOS imagery," IEEE Geosci. Remote Sens. Lett. 1(4), 309–312 (2004). http://dx.doi.org/10.1109/LGRS.2004.834804
5. P. S. Chavez, S. C. Sides, and J. A. Anderson, "Comparison of three different methods to merge multiresolution and multispectral data: Landsat TM and SPOT panchromatic," Photogramm. Eng. Remote Sens. 57(3), 295–303 (1991).
6. Y. Zhang, "Understanding image fusion," Photogramm. Eng. Remote Sens. 70(6), 657–661 (2004).
7. S. Rahmani et al., "An adaptive IHS pan-sharpening method," IEEE Geosci. Remote Sens. Lett. 7(4), 746–750 (2010). http://dx.doi.org/10.1109/LGRS.2010.2046715
8. R. Masoudi and P. Kabiri, "New intensity-hue-saturation pan-sharpening method based on texture analysis and genetic algorithm-adaption," J. Appl. Remote Sens. 8(1), 083640 (2014). http://dx.doi.org/10.1117/1.JRS.8.083640
9. M. C. El-Mezouar et al., "A pan-sharpening based on the non-subsampled contourlet transform: application to WorldView-2 imagery," IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 7(5), 1806–1815 (2014). http://dx.doi.org/10.1109/JSTARS.2014.2306332
10. X. Kang, S. Li, and J. A. Benediktsson, "Pansharpening with matting model," IEEE Trans. Geosci. Remote Sens. 52(8), 5088–5099 (2014). http://dx.doi.org/10.1109/TGRS.2013.2286827
11. X. Otazu et al., "Introduction of sensor spectral response into image fusion methods: application to wavelet-based methods," IEEE Trans. Geosci. Remote Sens. 43(10), 2376–2385 (2005). http://dx.doi.org/10.1109/TGRS.2005.856106
12. A. G. Mahyari and M. Yazdi, "Panchromatic and multispectral image fusion based on maximization of both spectral and spatial similarities," IEEE Trans. Geosci. Remote Sens. 49(6), 1976–1985 (2011). http://dx.doi.org/10.1109/TGRS.2010.2103944
13. H. Shen, X. Meng, and L. Zhang, "An integrated framework for the spatio-temporal-spectral fusion of remote sensing images," IEEE Trans. Geosci. Remote Sens. 54(12), 7135–7148 (2016). http://dx.doi.org/10.1109/TGRS.2016.2596290
14. R. C. Hardie, M. T. Eismann, and G. L. Wilson, "MAP estimation for hyperspectral image resolution enhancement using an auxiliary sensor," IEEE Trans. Image Process. 13(9), 1174–1184 (2004). http://dx.doi.org/10.1109/TIP.2004.829779
15. R. Molina et al., "Variational posterior distribution approximation in Bayesian super resolution reconstruction of multispectral images," Appl. Comput. Harmon. Anal. 24(2), 251–267 (2008). http://dx.doi.org/10.1016/j.acha.2007.03.006
16. L. Zhang et al., "Adjustable model-based fusion method for multispectral and panchromatic images," IEEE Trans. Syst. Man Cybern. B 42(6), 1693–1704 (2012). http://dx.doi.org/10.1109/TSMCB.2012.2198810
17. P. Liu et al., "Spatial-Hessian-feature-guided variational model for pan-sharpening," IEEE Trans. Geosci. Remote Sens. 54(4), 2235–2253 (2016). http://dx.doi.org/10.1109/TGRS.2015.2497966
18. S. Li and B. Yang, "A new pan-sharpening method using a compressed sensing technique," IEEE Trans. Geosci. Remote Sens. 49(2), 738–746 (2011). http://dx.doi.org/10.1109/TGRS.2010.2067219
19. C. Jiang et al., "Two-step sparse coding for the pan-sharpening of remote sensing images," IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 7(5), 1792–1805 (2014). http://dx.doi.org/10.1109/JSTARS.2013.2283236
20. X. Zhu and R. Bamler, "A sparse image fusion algorithm with application to pan-sharpening," IEEE Trans. Geosci. Remote Sens. 51(5), 2827–2836 (2013). http://dx.doi.org/10.1109/TGRS.2012.2213604
21. X. Zhu, C. Grohnfeldt, and R. Bamler, "Exploiting joint sparsity for pansharpening: the J-SparseFI algorithm," IEEE Trans. Geosci. Remote Sens. 54(5), 2664–2681 (2016). http://dx.doi.org/10.1109/TGRS.2015.2504261
22. S. Wang et al., "Semi-coupled dictionary learning with applications to image super-resolution and photo-sketch synthesis," in IEEE Conf. on Computer Vision and Pattern Recognition (CVPR '12), 2216–2223 (2012). http://dx.doi.org/10.1109/CVPR.2012.6247930
23. S. Tang et al., "Joint dictionary learning with ridge regression for pansharpening," in IEEE Int. Geoscience and Remote Sensing Symp., 613–616 (2015). http://dx.doi.org/10.1109/IGARSS.2015.7325838
24. A. Hoerl and R. Kennard, "Ridge regression," in Encyclopedia of Statistical Sciences, 129–136, Wiley, New York (1988).
25. R. Tibshirani, "Regression shrinkage and selection via the lasso," J. R. Stat. Soc. B 58(1), 267–288 (1996).
26. H. Zhang et al., "Close the loop: joint blind image restoration and recognition with a sparse representation prior," in Proc. IEEE Int. Conf. on Computer Vision, 770–777 (2011). http://dx.doi.org/10.1109/ICCV.2011.6126315
27. S. Tang et al., "Edge and color preserving single image superresolution," J. Electron. Imaging 23(3), 033002 (2014). http://dx.doi.org/10.1117/1.JEI.23.3.033002
28. B. Efron et al., "Least angle regression," Ann. Stat. 32(2), 407–499 (2004). http://dx.doi.org/10.1214/009053604000000067
29. H. Lee et al., "Efficient sparse coding algorithms," in Advances in Neural Information Processing Systems (2006).
30. H. Zou and T. Hastie, "Regularization and variable selection via the elastic net," J. R. Stat. Soc. B 67(2), 301–320 (2005). http://dx.doi.org/10.1111/rssb.2005.67.issue-2
31. P. L. Combettes and V. R. Wajs, "Signal recovery by proximal forward-backward splitting," Multiscale Model. Simul. 4(4), 1168–1200 (2006). http://dx.doi.org/10.1137/050626090
32. A. Beck and M. Teboulle, "A fast iterative shrinkage-thresholding algorithm for linear inverse problems," SIAM J. Imaging Sci. 2(1), 183–202 (2009). http://dx.doi.org/10.1137/080716542
33. L. Wald, T. Ranchin, and M. Mangolini, "Fusion of satellite images of different spatial resolutions: assessing the quality of resulting images," Photogramm. Eng. Remote Sens. 63(6), 691–699 (1997).
34. L. Alparone et al., "Comparison of pansharpening algorithms: outcome of the 2006 GRSS data fusion contest," IEEE Trans. Geosci. Remote Sens. 45(10), 3012–3021 (2007). http://dx.doi.org/10.1109/TGRS.2007.904923
35. L. Alparone et al., "A global quality measurement of pan-sharpened multispectral imagery," IEEE Geosci. Remote Sens. Lett. 1(4), 313–317 (2004). http://dx.doi.org/10.1109/LGRS.2004.836784
36. S. Rahmani, M. Strait, and D. Merkurjev, "Pansharpening GUI," (2008). http://www.math.ucla.edu/~wittman/pansharpening/
Biography

Songze Tang is currently an assistant professor in the Department of Criminal Science and Technology, Nanjing Forest Police College. He received his BS degree in information and computation science from Anhui Agriculture University, Hefei, China, in 2009, and his PhD in computer science and technology from Nanjing University of Science and Technology, Nanjing, in 2015. His research interests include computer vision and biometrics. He serves as a reviewer for three international academic journals.

Liang Xiao is currently a professor at the School of Computer Science, Nanjing University of Science and Technology (NUST). He received his PhD in computer science from NUST, China, in 2004. He has authored two books and around 100 technical articles in refereed journals, including the IEEE Transactions on Image Processing, the IEEE Transactions on Geoscience and Remote Sensing, and Pattern Recognition. His main research areas include computer vision and pattern recognition.

Pengfei Liu is currently an assistant professor at the School of Computer Science, Nanjing University of Posts and Telecommunications. He received his BS degree in information and computation science from Hefei Normal University, Hefei, China, in 2011, and his PhD in computer science and technology from Nanjing University of Science and Technology, Nanjing, in 2016. His research interests include computer vision and remote sensing data processing.

Lili Huang is currently an associate professor at Guangxi University of Science and Technology, China. She received her PhD in pattern recognition and intelligent systems from Nanjing University of Science and Technology, Nanjing, Jiangsu, China, in 2012. Her research interests cover image processing, image modeling, and superresolution reconstruction.

Yang Xu received his PhD in computer science from the Nanjing University of Science and Technology (NUST), Nanjing, China, in 2016. Currently, he is working as an assistant professor at NUST. His research interests are in the areas of hyperspectral image classification, image processing, and machine learning.