Open Access
23 September 2019

Exploring error of linear mixed model for hyperspectral image reconstruction from spectral compressive sensing
Abstract

Linear mixed model (LMM) has been extensively applied to hyperspectral compressive sensing (CS) in recent years. However, the error introduced by LMM, which limits the reconstruction performance, has not been given full consideration. We propose an algorithm for hyperspectral CS based on LMM under the assumption of known endmembers. At the sampling stage, only spectral compressive sampling is carried out, so as to keep as much abundance information as possible. At the reconstruction stage, the proposed algorithm estimates abundance by linear unmixing from the spectral observed data. Moreover, the model error introduced by LMM is explored; a joint convex optimization scheme for the estimation of both abundance and model error is established and solved by an alternating iteration approach to achieve the optimal reconstruction. Experimental results on a real hyperspectral dataset demonstrate that the proposed algorithm significantly outperforms other state-of-the-art hyperspectral CS algorithms.

1.

Introduction

In recent years, hyperspectral images (HSIs) have been widely used in ground classification, environmental monitoring, military reconnaissance, and other fields, since they contain abundant spatial and spectral information about the imaged objects. However, with the continuous improvement of both the spatial and spectral resolutions of imaging spectrometers, the amount of acquired hyperspectral data increases exponentially, which brings great challenges to data storage and transmission. The compressive sensing (CS) technique integrates data acquisition and compression: it collects a sparse or compressible signal at a much lower sampling rate than that required by the Nyquist sampling theorem and reconstructs the original signal with high probability.1 At present, CS has been successfully applied to HSI compression.2–6 Hyperspectral CS captures only a small number of samples of the original HSI by using incoherent measurements for reconstruction. Compared with traditional compression methods, hyperspectral CS significantly reduces the amount of data that needs to be collected; moreover, it also reduces the resource consumption of imaging. Therefore, hyperspectral CS has attracted remarkable attention in the field of hyperspectral remote sensing.

Hyperspectral CS has two key problems: the design of the measurement matrix and the reconstruction algorithm. The core of the former is how to design the measurement matrix so that it retains as much of the effective information of the original HSI as possible for reconstruction; the latter concerns how to reconstruct the target HSI from the observed data. In general, the measurement matrix is required to be incoherent with the original HSI, and a Gaussian random matrix is a good choice.7 It should be pointed out that the design of the measurement matrix is beyond the scope of this paper; we mainly focus on the reconstruction stage of hyperspectral CS. At present, many popular reconstruction algorithms have been proposed. Noor and Jacobs8 proposed a video imaging measurement scheme for static scenes by exploiting sparsity along the time dimension: after acquiring all measurements required for the first frame, measurements are acquired only from the areas that change in subsequent frames. In Ref. 9, block compressed sensing-smooth projected Landweber (BCS-SPL) employs block-based random sampling and a projected Landweber reconstruction based on sparsity in a directional transform domain, yielding a smooth reconstructed image. Note that convex optimization combined with certain prior knowledge has been widely used in hyperspectral CS reconstruction. In Ref. 10, three-dimensional compressive sampling (3-D CS) performed the reconstruction by convex optimization, exploiting the sparse, low-rank, and three-dimensional (3-D) smoothing characteristics of HSI, and achieved good reconstruction results. In Ref. 11, the structure similarity property was explored to improve the reconstruction accuracy beyond the prior hypotheses of low rank, spatial correlation, and spatial smoothness.
Zhang et al.2 proposed a reweighted Laplace prior-based hyperspectral CS method that introduces structure information into the sparsity prior, which can further improve the reconstruction accuracy. Wang et al.3 adopted tensor analysis to model spatial correlation and local smoothness, where the tensor Tucker decomposition describes the global spatial and spectral correlation and a weighted 3-D total variation (TV) characterizes the local smoothness of the image. Another kind of reconstruction algorithm is based on matrix decomposition using statistical characteristics. Typical examples are compressive projection principal component analysis (CPPCA)12 and its derivative algorithms.13–15 CPPCA shifts the computational burden of principal component analysis from the resource-constrained encoder to the base-station decoder, which significantly improves the computational efficiency.

The linear mixed model (LMM) is explicitly physically meaningful and is usually a good approximation for HSI, in spite of the existence of nonlinear mixing effects.16 In recent years, regarded as effective prior information, LMM has been successfully applied to hyperspectral CS for reconstruction with high accuracy; it decomposes the HSI into a set of endmembers and an abundance matrix.17 Based on LMM, the original HSI can easily be reconstructed as the product of endmembers and abundances. With a known spectral library of endmembers, Ramirez et al.18 proposed a spectral unmixing algorithm that works directly on the observed data acquired by the coded aperture snapshot spectral imaging system, which shows that estimating abundance from the observed data is completely feasible. In Ref. 19, a theoretical lower bound on the number of observations required for CS reconstruction using LMM was given, and the abundance was estimated by using a sparsity prior under the assumption that the endmembers are known. Assuming that the endmembers are given, Martín et al.16 developed a framework for hyperspectral CS called hyperspectral coded aperture, where abundance is estimated by solving a convex optimization problem that includes a data term connected to the measurement matrix and a TV regularization term. Instead of using given endmembers, Zhang et al.4 proposed a locally similar sparsity-based hyperspectral CS algorithm that decomposes the HSI with an established redundant endmember library. Martín and Bioucas-Dias5 proposed a blind hyperspectral reconstruction technique termed spectral compressive acquisition, which supposes that the HSI belongs to a low-dimensional subspace that can be learned from the observed data.
In our previous work,20 we developed a special kind of measurement matrix that projects the original HSI into spatial and spectral observed data for estimation of endmembers and abundance, respectively, and then spectral unmixing was performed for the reconstruction based on LMM. Based on Ref. 20, a joint optimization model was established to estimate endmembers and abundance, which obtains better reconstruction performance.6

Although the above reconstruction algorithms based on linear unmixing have achieved good reconstructions, they all neglect the error introduced by the model itself. In fact, nonlinear mixing still occurs in the spectral mixture due to multiple reflections from surfaces. Therefore, the model error of LMM deserves more attention. In our study, we also work under the assumption that the endmembers are known or can be estimated from the original HSI. The main contribution of this paper is a hyperspectral CS reconstruction algorithm based on LMM, in which only spectral sampling is performed on the original HSI to obtain the observed data, and spectral unmixing is then employed to reconstruct the target HSI based on LMM. In particular, the model error is explored from the observed data by using convex optimization with a TV regularization term, and a joint optimization scheme for estimating abundance and model error is proposed and solved by alternate iteration to obtain the optimal reconstruction quality. Experimental results demonstrate the effectiveness of the proposed algorithm.

The rest of this paper is organized as follows. In Sec. 2, the framework of the proposed algorithm is described. In Sec. 3, the compressive sampling scheme for the proposed algorithm is given. In Sec. 4, the joint optimization scheme for estimating abundance and model error is presented, and in Sec. 5, experimental results on real hyperspectral data are shown. Finally, Sec. 6 gives some concluding remarks.

2.

Framework of the Proposed Algorithm

In general, an HSI can be denoted as $X = [X_1, X_2, \ldots, X_L]^T \in \mathbb{R}^{L \times N}$, where N is the total number of spatial pixels in each band, L is the number of spectral bands, and $X_l$ ($1 \le l \le L$) is the vector representation of the l'th band. As one of the simplest and most effective hypotheses, LMM has been widely used in the spectral unmixing of HSI. It assumes that each abundance represents the relative area associated with a certain endmember, so that each spectrum is modeled as a linear combination of several endmembers weighted by the corresponding abundances. Let $E \in \mathbb{R}^{L \times p}$ be the endmember matrix and $S \in \mathbb{R}^{p \times N}$ be the abundance matrix, where p is the number of endmembers in the HSI. According to LMM, the HSI can be written as

Eq. (1)

$$X = ES + W,$$
where W represents the model error. As seen from Eq. (1), the reconstruction of the target HSI can be converted into the reconstruction of endmembers, abundance, and model error. At present, a large number of advanced endmember extraction techniques have been proposed; on the other hand, many spectral libraries have been established, such as those of the United States Geological Survey and the Jet Propulsion Laboratory. Therefore, it is reasonable to suppose E to be known or estimated from the original HSI. On the premise that E is known, the main goal of our study is to estimate the abundance matrix S and the error matrix W from the observed data. As for the sampling scheme of the proposed algorithm, it should be pointed out that spatial sampling, which performs measurements only within each band, may significantly destroy the abundance information; spectral sampling is recommended because it preserves the abundance information more completely. Moreover, reconstruction from spectral samples can make full use of the strong spectral correlation, which is important for improving reconstruction performance. Accordingly, the framework of the proposed algorithm is given in Fig. 1. At the sampling stage, spectral compressive sampling is performed on X to obtain the observed data Y, where A is the measurement matrix. At the reconstruction stage, the initial values of both S and W are first determined, and then a joint convex optimization method is established to solve for S and W by alternating iteration; once their optimal values are obtained, the final reconstruction of the target HSI is achieved according to LMM. In this paper, the proposed algorithm is denoted spectral compressed reconstruction based on spectral unmixing and error compensation (SCR-SUEC), with more details given in Secs. 3 and 4.
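To make the matrix shapes in Eq. (1) concrete, the following sketch builds a small synthetic HSI from random endmembers, sum-to-one abundances, and a small error term; all sizes and values are hypothetical toy choices, not data from the paper:

```python
import numpy as np

# Toy sizes (hypothetical): L bands, N pixels, p endmembers
rng = np.random.default_rng(0)
L, N, p = 224, 100, 5

E = rng.random((L, p))                   # endmember matrix, L x p
S = rng.random((p, N)); S /= S.sum(0)    # sum-to-one abundances, p x N
W = 0.01 * rng.standard_normal((L, N))   # small model error

X = E @ S + W                            # Eq. (1): X = ES + W
print(X.shape)                           # (224, 100)
```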

Fig. 1

Framework of the proposed algorithm.

JARS_13_3_036514_f001.png

3.

Spectral Compressive Sampling

The proposed SCR-SUEC performs compressive sampling on the spectral vector at each pixel to obtain the information available for the subsequent reconstruction. Let A be the measurement matrix of size $J \times L$ ($J \ll L$) and $Y \in \mathbb{R}^{J \times N}$ be the observed data. Combined with Eq. (1), the spectral sampling process can be expressed as

Eq. (2)

$$Y = AX = AES + AW.$$
It is obvious that Y has the same size as X in the spatial dimension. However, the spectral dimension of Y is reduced by A compared with that of X, where the degree of reduction depends on the value of J. The sampling rate is J/L: a larger J gives a higher sampling rate and a smaller J a lower one. In this paper, the measurement matrix A is generated by rounding a Gaussian random matrix to obtain a binary matrix at the desired sampling rate.
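The construction of A can be sketched as follows. The exact rounding rule used to binarize the Gaussian matrix is not specified in the text, so the thresholding at zero below is an assumption:

```python
import numpy as np

def binary_measurement_matrix(J, L, seed=0):
    """Binary A obtained by rounding a Gaussian random matrix.
    Assumption: entries >= 0 map to 1 and entries < 0 map to 0;
    the paper does not state the rounding rule."""
    rng = np.random.default_rng(seed)
    return (rng.standard_normal((J, L)) >= 0).astype(float)

L = 224                     # spectral bands
rate = 0.5                  # desired sampling rate J / L
J = int(round(rate * L))
A = binary_measurement_matrix(J, L)
print(A.shape)              # (112, 224)
```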

Hardware designs for CS applications have received more and more attention. Note that the digital micromirror device (DMD) array has been successfully applied to compressive sampling. Based on the works in Refs. 5 and 21, the optical design of the proposed algorithm on a pushbroom platform is given in Fig. 2, which mainly contains four elements: a grating that splits the reflected light from the object into different wavelengths; a cylindrical lens that converts the divergent light at different wavelengths into parallel light; a DMD that generates the measurement matrix under the control of a specific program; and, finally, a cylindrical lens that sums the reflected light from the DMD to obtain the observed data on a linear charge-coupled device array.

Fig. 2

A simple scheme of optical design on a pushbroom platform.

JARS_13_3_036514_f002.png

4.

Proposed Reconstruction Algorithm

4.1.

Analysis of Model Error

As mentioned earlier, only spectral sampling is performed on the original HSI, and the abundance information is well preserved by this sampling manner. According to Eq. (1), estimating S requires an estimate of W first. To deal with this problem, W is temporarily neglected; thus, Eq. (1) becomes X = ES. In this case, the spectral sampling process can be rewritten as

Eq. (3)

$$Y = AX = AES.$$
Because the observed data Y, the measurement matrix A, and the endmember matrix E are all known, it is easy to estimate S from Eq. (3) by the least squares (LS) method as

Eq. (4)

$$\hat{S} = [(AE)^T AE]^{-1} (AE)^T Y.$$
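Eq. (4) can be sketched with NumPy as below; `np.linalg.lstsq` replaces the explicit normal-equation inverse for numerical stability, and all sizes are toy values:

```python
import numpy as np

def estimate_abundance_ls(A, E, Y):
    """LS abundance estimate of Eq. (4); lstsq solves the same
    problem as [(AE)^T AE]^{-1} (AE)^T Y without an explicit inverse."""
    S_hat, *_ = np.linalg.lstsq(A @ E, Y, rcond=None)
    return S_hat

# toy check: when W = 0, Y = AES and the estimate is exact
rng = np.random.default_rng(1)
L, N, p, J = 50, 30, 4, 20
E = rng.random((L, p)); S = rng.random((p, N))
A = rng.standard_normal((J, L))
Y = A @ (E @ S)                          # Eq. (3)
print(np.allclose(estimate_abundance_ls(A, E, Y), S))  # True
```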

It must be pointed out that LMM is only an approximate model of HSI, and using LMM to describe an HSI inevitably introduces error. Note that the S estimated by Eq. (4) is obtained by neglecting the error term of LMM; thus a reconstruction can be obtained directly by multiplying E by S. However, a certain degree of model error between this reconstruction and the original image is inevitable. Unfortunately, the model error of LMM has not been taken into account in existing algorithms. In this paper, the proposed SCR-SUEC further explores the model error of LMM from the spectral observed data, which is of great significance for improving the final reconstruction performance. Note that calculating S and W from Y simultaneously is a nondeterministic polynomial-hard problem. Nevertheless, we can utilize alternating iterations to tackle it. According to Eq. (2), once S is estimated, since A and E are known, we can easily obtain the observed data of W (i.e., AW). Then, W can be estimated from AW by an optimization method with a reasonable regularization. To improve the estimation precision, the estimate of W should in turn be used to update S. Therefore, a joint optimization scheme is needed to calculate the optimal estimates of both S and W.

4.2.

Joint Optimization for Abundance and Model Error

In this paper, a joint optimization scheme utilizing an alternating iterative manner to calculate S and W is proposed. That is, first assume that W is fixed and solve S; then fix S and solve W; alternate iteration continues until the stopping criterion is satisfied.

4.2.1.

Estimation of abundance

Given the measurement matrix A and the endmember matrix E, the abundance matrix S can be estimated from the spectral observed data Y, subject to LMM. If the sampling rate is high enough, the dimension of the subspace of the original HSI is smaller than the spectral dimension of the observed data Y, that is, $p < J$. In this case, the estimation of S is actually an overdetermined problem. We can simply estimate S by solving the following constrained optimization problem:

Eq. (5)

$$\min_S \|AX - Y\|_F^2 \quad \text{s.t.}\quad X = ES + W,$$
where $\|C\|_F \triangleq \sqrt{\operatorname{trace}\{CC^T\}}$ denotes the Frobenius norm of C. Note that Eq. (5) is actually a constrained LS problem, which can be converted into an unconstrained optimization problem as

Eq. (6)

$$S^k = \arg\min_S \|AES + AW^{k-1} - Y\|_F^2 + \lambda_1 \|ES + W^{k-1} - X^{k-1}\|_F^2 = \arg\min_S \|C^k S - D^k\|_F^2,$$
where $\lambda_1$ is a regularization parameter, and the superscript k ($k = 1, 2, 3, \ldots$) represents the k'th alternating iteration. The matrices $C^k$ and $D^k$ can be expressed as

Eq. (7)

$$C^k = \begin{bmatrix} AE \\ \sqrt{\lambda_1}\,E \end{bmatrix}, \qquad D^k = \begin{bmatrix} Y - AW^{k-1} \\ \sqrt{\lambda_1}\,(X^{k-1} - W^{k-1}) \end{bmatrix}.$$
Now, $S^k$ can be estimated by the LS method as

Eq. (8)

$$S^k = [(C^k)^T C^k]^{-1} (C^k)^T D^k,$$
where $C^T$ is the transpose of C and $C^{-1}$ is the inverse of C.

Before the joint optimization process begins, it is necessary to determine the initial values of both S and W. In general, the initial value of W is set to zero; thus, the initial value of S can be obtained as

Eq. (9)

$$S^0 = [(AE)^T AE]^{-1} (AE)^T Y.$$
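The abundance step of Eqs. (6)–(9) amounts to one stacked least-squares solve per outer iteration. The sketch below assumes the $\sqrt{\lambda_1}$ row scaling implied by the squared norms in Eq. (6); all sizes are toy values:

```python
import numpy as np

def update_abundance(A, E, Y, W_prev, X_prev, lam1):
    """Abundance step of Eqs. (6)-(8): one stacked LS solve.
    Assumption: sqrt(lam1) scaling reproduces the lam1-weighted
    squared norm of the second term in Eq. (6)."""
    C = np.vstack([A @ E, np.sqrt(lam1) * E])               # Eq. (7)
    D = np.vstack([Y - A @ W_prev,
                   np.sqrt(lam1) * (X_prev - W_prev)])
    S_k, *_ = np.linalg.lstsq(C, D, rcond=None)             # Eq. (8)
    return S_k

# toy run: with exact data and W = 0, the update returns the true S
rng = np.random.default_rng(2)
L, N, p, J = 40, 25, 3, 16
E = rng.random((L, p)); S_true = rng.random((p, N))
A = rng.standard_normal((J, L))
X = E @ S_true; Y = A @ X
S0, *_ = np.linalg.lstsq(A @ E, Y, rcond=None)              # Eq. (9)
S1 = update_abundance(A, E, Y, np.zeros((L, N)), E @ S0, lam1=0.1)
print(np.allclose(S1, S_true))  # True
```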

4.2.2.

Estimation of model error

Based on the analysis of the model error in Sec. 4.1, we now estimate W by solving the following optimization problem with TV regularization:

Eq. (10)

$$\min_W \frac{1}{2}\|AES + AW - Y\|_F^2 + \lambda_{\mathrm{TV}} \operatorname{TV}(W) \quad \text{s.t.}\quad X = ES + W,$$
where $\lambda_{\mathrm{TV}} > 0$ is the regularization parameter of the TV term. $\operatorname{TV}(W)$ is defined as the spatially anisotropic TV norm of W, which can be expressed as

Eq. (11)

$$\operatorname{TV}(W) \triangleq \|FW\|_{1,1} = \|F_v W^T\|_{1,1} + \|F_h W^T\|_{1,1},$$
where $\|C\|_{1,1} \triangleq \sum_{i=1}^{L} \|c_i\|_1$ ($c_i$ is the i'th column vector of C), and $F_v$ and $F_h$ are the discrete gradient operators in the vertical and horizontal directions, respectively. The constrained optimization problem in Eq. (10) can be translated into an unconstrained optimization problem as

Eq. (12)

$$W^k = \arg\min_W \frac{1}{2}\|AES^k + AW - Y\|_F^2 + \lambda_{\mathrm{TV}} \|FW\|_{1,1} + \frac{\lambda_2}{2}\|ES^k + W - X^{k-1}\|_F^2,$$
where $\lambda_2 > 0$ is a regularization parameter. Let $U = Y - AES^k$, $V = X^{k-1} - ES^k$, $H_1 = AW$, $H_2 = W$, $H_3 = W$, and $H_4 = FH_3$; then Eq. (12) can be rewritten as

Eq. (13)

$$\min_{H_1,H_2,H_3,H_4} \frac{1}{2}\|H_1 - U\|_F^2 + \frac{\lambda_2}{2}\|H_2 - V\|_F^2 + \lambda_{\mathrm{TV}}\|H_4\|_{1,1} \quad \text{s.t.}\quad AW = H_1,\; W = H_2,\; W = H_3,\; FH_3 = H_4.$$
In this paper, the alternating direction method of multipliers (ADMM) is employed to solve this optimization problem.22 The augmented Lagrangian function for W, $H_1$, $H_2$, $H_3$, and $H_4$ is

Eq. (14)

$$\begin{aligned} L(W, H_1, H_2, H_3, H_4, Q_1, Q_2, Q_3, Q_4) = {} & \frac{1}{2}\|H_1 - U\|_F^2 + \frac{\lambda_2}{2}\|H_2 - V\|_F^2 + \lambda_{\mathrm{TV}}\|H_4\|_{1,1} \\ & + \frac{\mu}{2}\|AW - H_1 - Q_1\|_F^2 + \frac{\mu}{2}\|W - H_2 - Q_2\|_F^2 \\ & + \frac{\mu}{2}\|W - H_3 - Q_3\|_F^2 + \frac{\mu}{2}\|FH_3 - H_4 - Q_4\|_F^2, \end{aligned}$$
where $\mu > 0$ is a positive penalty constant and $Q_1$, $Q_2$, $Q_3$, and $Q_4$ denote the Lagrange multipliers.

Note that each iteration of ADMM minimizes the augmented Lagrangian function with respect to one variable while fixing the others at their values from the last iteration. The dual variables (Lagrange multipliers) are also updated in each iterative loop. By minimizing the augmented Lagrangian function L with respect to W, we obtain

Eq. (15)

$$\begin{aligned} W^t &= \arg\min_W L(W, H_1, H_2, H_3, H_4, Q_1, Q_2, Q_3, Q_4) \\ &= \arg\min_W \frac{\mu}{2}\|AW - H_1^{t-1} - Q_1^{t-1}\|_F^2 + \frac{\mu}{2}\|W - H_2^{t-1} - Q_2^{t-1}\|_F^2 + \frac{\mu}{2}\|W - H_3^{t-1} - Q_3^{t-1}\|_F^2 \\ &= (A^T A + 2I_L)^{-1}\left[A^T(H_1^{t-1} + Q_1^{t-1}) + (H_2^{t-1} + Q_2^{t-1}) + (H_3^{t-1} + Q_3^{t-1})\right], \end{aligned}$$
where $I_L \in \mathbb{R}^{L \times L}$ is the identity matrix, and the superscript t represents the t'th iteration for calculating W. The optimization problem for $H_1$ at the t'th iteration can be written as

Eq. (16)

$$H_1^t = \arg\min_{H_1} \frac{1}{2}\|H_1 - U\|_F^2 + \frac{\mu}{2}\|AW^t - H_1 - Q_1^{t-1}\|_F^2 = \frac{1}{1+\mu}\left[U + \mu(AW^t - Q_1^{t-1})\right].$$
Similarly, H2 at the t’th iteration can be obtained by solving the following optimization problem

Eq. (17)

$$H_2^t = \arg\min_{H_2} \frac{\lambda_2}{2}\|H_2 - V\|_F^2 + \frac{\mu}{2}\|W^t - H_2 - Q_2^{t-1}\|_F^2 = \frac{1}{\lambda_2+\mu}\left[\lambda_2 V + \mu(W^t - Q_2^{t-1})\right].$$
H3 at the t’th iteration can be obtained by solving the following optimization problem

Eq. (18)

$$H_3^t = \arg\min_{H_3} \frac{\mu}{2}\|W^t - H_3 - Q_3^{t-1}\|_F^2 + \frac{\mu}{2}\|FH_3 - H_4^{t-1} - Q_4^{t-1}\|_F^2 = (F^T F + I)^{-1}\left[W^t - Q_3^{t-1} + F^T(H_4^{t-1} + Q_4^{t-1})\right],$$
and H4 at the t’th iteration can be obtained by solving the following optimization problem

Eq. (19)

$$H_4^t = \arg\min_{H_4} \lambda_{\mathrm{TV}}\|H_4\|_{1,1} + \frac{\mu}{2}\|FH_3^t - H_4 - Q_4^{t-1}\|_F^2 = \operatorname{soft}\!\left(FH_3^t - Q_4^{t-1}, \frac{\lambda_{\mathrm{TV}}}{\mu}\right),$$
where $\operatorname{soft}(\cdot)$ denotes the component-wise application of the soft-threshold function. At the end of each iteration for calculating W, the Lagrange multipliers $Q_1$, $Q_2$, $Q_3$, and $Q_4$ are updated by the gradient descent method as follows:

Eq. (20)

$$Q_1^t = Q_1^{t-1} - (AW^t - H_1^t), \quad Q_2^t = Q_2^{t-1} - (W^t - H_2^t), \quad Q_3^t = Q_3^{t-1} - (W^t - H_3^t), \quad Q_4^t = Q_4^{t-1} - (FH_3^t - H_4^t).$$
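The soft-threshold operator used in Eq. (19) is the standard proximal operator of the $\ell_1$ norm; a minimal sketch:

```python
import numpy as np

def soft(x, tau):
    """soft(x, tau) = sign(x) * max(|x| - tau, 0), applied
    component-wise; the proximal operator of tau * ||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

# entries with |x| <= tau are set to zero; others shrink toward zero
print(soft(np.array([-2.0, 0.0, 0.5, 3.0]), 1.0))  # [-1.  0.  0.  2.]
```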

Note that all functions forming the objective L in Eq. (14) are closed, proper, and convex. If Eq. (13) has an optimal solution, the following condition will be met:

Eq. (21)

$$\varepsilon^t = \|AW - H_1\|_F + \|W - H_2\|_F + \|W - H_3\|_F + \|FH_3 - H_4\|_F \to 0 \quad \text{as } t \to \infty.$$
If Eq. (13) does not have a solution, then $\varepsilon^t$ will diverge. As for the stopping criterion in Algorithm 1, we use $\varepsilon^t \le N\varepsilon_{\mathrm{ref}}$, where the reference value $\varepsilon_{\mathrm{ref}} > 0$. In the later experiments, we empirically set $\varepsilon_{\mathrm{ref}}$ equal to $10^{-5}$. In summary, the estimation of W by ADMM is described in Algorithm 1.

Algorithm 1

ADMM for estimating Wk.

Inputs: A, E, Y, $S^k$
Set parameters: $\lambda_2$, $\lambda_{\mathrm{TV}}$, and $\mu$
Initialize: $W^0$, $H_1^0, \ldots, H_4^0$, $Q_1^0, \ldots, Q_4^0$, and set t = 1
Repeat:
 1. compute $W^t$ by minimizing L with respect to W by Eq. (15);
 2. compute $H_1^t$ by minimizing L with respect to $H_1$ by Eq. (16);
 3. compute $H_2^t$ by minimizing L with respect to $H_2$ by Eq. (17);
 4. compute $H_3^t$ by minimizing L with respect to $H_3$ by Eq. (18);
 5. compute $H_4^t$ using the soft-threshold operation by Eq. (19);
 6. update the Lagrange multipliers $Q_1$, $Q_2$, $Q_3$, and $Q_4$ by Eq. (20);
 7. update iteration: t = t + 1;
until the stopping criterion is satisfied;
Output: model error matrix $W^k = W^t$.
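Algorithm 1 can be sketched as below. As a simplification, this sketch uses a 1-D first-difference matrix along the band axis as a stand-in for the spatial gradient operator F of Eq. (11) (the paper applies $F_v$ and $F_h$ to $W^T$ on the pixel grid), so it illustrates the structure of the five updates rather than reproducing the exact spatial TV:

```python
import numpy as np

def soft(x, tau):
    """Component-wise soft-threshold, used in Eq. (19)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def estimate_error_admm(A, E, Y, S_k, X_prev, lam2=0.1, lam_tv=0.003,
                        mu=0.05, eps_ref=1e-5, max_iter=200):
    """Sketch of Algorithm 1: ADMM estimation of the model error W.
    Assumption: F is a 1-D first-difference matrix along the band
    axis, which keeps every operator L x L for simplicity."""
    J, L = A.shape
    N = Y.shape[1]
    F = np.eye(L) - np.roll(np.eye(L), -1, axis=1)   # stand-in gradient
    U = Y - A @ E @ S_k                              # U = Y - AES^k
    V = X_prev - E @ S_k                             # V = X^{k-1} - ES^k
    H1 = np.zeros((J, N)); H2 = np.zeros((L, N))
    H3 = np.zeros((L, N)); H4 = np.zeros((L, N))
    Q1 = np.zeros((J, N)); Q2 = np.zeros((L, N))
    Q3 = np.zeros((L, N)); Q4 = np.zeros((L, N))
    M1 = np.linalg.inv(A.T @ A + 2.0 * np.eye(L))    # Eq. (15) system
    M3 = np.linalg.inv(F.T @ F + np.eye(L))          # Eq. (18) system
    W = np.zeros((L, N))
    for _ in range(max_iter):
        W = M1 @ (A.T @ (H1 + Q1) + (H2 + Q2) + (H3 + Q3))  # Eq. (15)
        H1 = (U + mu * (A @ W - Q1)) / (1.0 + mu)           # Eq. (16)
        H2 = (lam2 * V + mu * (W - Q2)) / (lam2 + mu)       # Eq. (17)
        H3 = M3 @ (W - Q3 + F.T @ (H4 + Q4))                # Eq. (18)
        H4 = soft(F @ H3 - Q4, lam_tv / mu)                 # Eq. (19)
        Q1 = Q1 - (A @ W - H1)                              # Eq. (20)
        Q2 = Q2 - (W - H2)
        Q3 = Q3 - (W - H3)
        Q4 = Q4 - (F @ H3 - H4)
        eps = (np.linalg.norm(A @ W - H1) + np.linalg.norm(W - H2)
               + np.linalg.norm(W - H3) + np.linalg.norm(F @ H3 - H4))
        if eps <= N * eps_ref:                              # Eq. (21)
            break
    return W
```

In the full algorithm only the construction of F changes (horizontal and vertical differences on the pixel grid applied to $W^T$); the five update formulas keep the same form.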

4.3.

Reconstruction of HSI

The reconstruction of the target HSI in the k’th alternate iteration can be expressed as

Eq. (22)

$$X^k = ES^k + W^k.$$
The termination parameter is defined as the relative change between adjacent alternate iteration results:

Eq. (23)

$$\zeta = \frac{\|X^k - X^{k-1}\|_F}{\|X^k\|_F}.$$
Note that the alternate iteration process will be stopped once ζ is smaller than a preset value. In summary, the proposed reconstruction is described in Algorithm 2.

Algorithm 2

The proposed SCR-SUEC algorithm.

Inputs: A, E, Y
Set parameters: $\lambda_1$, $\lambda_2$, $\lambda_{\mathrm{TV}}$, and $\mu$
Initialize: $W^0$, $S^0$, and set k = 1
Repeat:
 1. compute $S^k$ using the constrained LS method by Eq. (8);
 2. compute $W^k$ by Algorithm 1;
 3. compute $X^k$ by Eq. (22);
 4. update iteration: k = k + 1;
until the stopping criterion is satisfied;
Output: the reconstructed HSI $\hat{X} = X^k$.

5.

Experimental Results and Discussion

Experimental results are reported in terms of sampling rate, signal-to-noise ratio (SNR), structural similarity (SSIM), and computational complexity, where SNR effectively evaluates the reconstruction quality of each algorithm at a given sampling rate. The band-based SNR, measured in decibels, is defined as

Eq. (24)

$$\mathrm{SNR}(X_l, \hat{X}_l) = 10 \log_{10} \frac{\|X_l\|_2^2}{\|X_l - \hat{X}_l\|_2^2}.$$
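Eq. (24) translates directly to code; a quick sanity check with a constant per-sample error of 0.01 on a unit signal gives 40 dB:

```python
import numpy as np

def band_snr(X_l, X_hat_l):
    """Band-based SNR in decibels, Eq. (24)."""
    num = np.sum(X_l ** 2)
    den = np.sum((X_l - X_hat_l) ** 2)
    return 10.0 * np.log10(num / den)

x = np.ones(100)
print(round(band_snr(x, x + 0.01), 2))   # 40.0
```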
Note that several popular algorithms have been used for hyperspectral CS, such as BCS-SPL,9 3-D CS,10 and CPPCA,12 as well as several improved algorithms based on CPPCA. To illustrate the efficiency of the proposed algorithm, the reconstruction quality of SCR-SUEC was compared with that of the above algorithms and with SCR-SUEC without error compensation (SCR-SU). In this paper, we use the raw data of HSIs as test data. Raw data are the original data acquired by airborne or spaceborne platforms without any preprocessing, which helps in evaluating the performance of the algorithms objectively. The raw HSIs acquired by the airborne visible/infrared imaging spectrometer (AVIRIS) are employed for performance evaluation. The six test images are aviris_sc0.raw, aviris_sc3.raw, aviris_sc11.raw, aviris_sc18.raw, hawaii_sc01.raw, and maine_sc10.raw.23 Owing to a limitation of the 3-D CS algorithm, the spatial size of all test images was cropped to 180×180, with 224 spectral bands and 2 bytes per pixel; the 90th band of each image is shown in Fig. 3.

Fig. 3

The 90th band of original images of six HSIs: (a) aviris_sc0.raw, (b) aviris_sc3.raw, (c) aviris_sc11.raw, (d) aviris_sc18.raw, (e) hawaii_sc01.raw, and (f) maine_sc10.raw.

JARS_13_3_036514_f003.png

5.1.

Parameters Selection for the Proposed SCR-SUEC

Since the performance of the proposed SCR-SUEC is sensitive to the settings of the parameters $\lambda_1$, $\lambda_2$, $\lambda_{\mathrm{TV}}$, and $\mu$, we first investigate reasonable ranges for these parameters at a sampling rate of 0.5. Figure 4 shows the reconstruction quality obtained with different values of the four parameters. We can empirically observe that the parameter $\lambda_{\mathrm{TV}}$ plays a dominant role. Note that the three parameters ($\lambda_1$, $\lambda_2$, and $\lambda_{\mathrm{TV}}$) yield good reconstruction results within certain ranges of values. In general, the recommended ranges of $\lambda_1$, $\lambda_2$, and $\lambda_{\mathrm{TV}}$ are [0.001, 1], [0.001, 1], and [0.0001, 0.01], respectively. The parameter $\mu$ controls the convergence speed of the ADMM algorithm, and adaptive adjustment of $\mu$ performs well in most cases. Similar to the other parameters, a wide range of values of $\mu$ guarantees high reconstruction quality, with a recommended range of [0.005, 1]. According to the above recommended ranges, $\lambda_1$ and $\lambda_2$ are both set to 0.1, $\lambda_{\mathrm{TV}}$ is set to 0.003, and $\mu$ is set to 0.05 for the proposed SCR-SUEC.

Fig. 4

Value analysis of the three regularization parameters $\lambda_1$, $\lambda_2$, and $\lambda_{\mathrm{TV}}$ as well as the penalty constant $\mu$ for the proposed SCR-SUEC algorithm: (a) $\lambda_1$, (b) $\lambda_2$, (c) $\lambda_{\mathrm{TV}}$, and (d) $\mu$.

JARS_13_3_036514_f004.png

5.2.

Comparison of Reconstruction Performance

In this study, we assume that the endmembers are known and can be selected from a spectral library. Although several off-the-shelf spectral libraries provide a large number of standard spectra of different objects, those standard spectra are obviously different from the spectra in an actual HSI owing to weather influence, imaging conditions, and so on.4 Therefore, we employed the popular vertex component analysis24 algorithm on each considered HSI to extract a certain number of endmembers and establish a spectral library for spectral unmixing, which is common practice in such research.

The performance of the proposed SCR-SUEC was first compared with BCS-SPL, 3-D CS, CPPCA, and SCR-SU, with sampling rates ranging from 0.1 to 0.5 in steps of 0.1. The relationship between SNR and sampling rate is reported in Fig. 5. Note that the block size employed by BCS-SPL is set to 50×50. BCS-SPL uses only a spatial sampling process, whereas the others use only a spectral sampling process; for the sake of fairness, the total sampling rates of all the above algorithms are identical. It is easy to see that the reconstruction performance of all algorithms improves gradually as the sampling rate increases. However, the growth of both SCR-SU and SCR-SUEC is relatively slow compared with the other algorithms. The performance of these two algorithms has relatively low sensitivity to the sampling rate, which indicates that the proposed algorithm can also achieve good performance at a low sampling rate. Although BCS-SPL employs a block-based CS scheme, its average performance is the worst, since BCS-SPL handles each band separately without considering the spectral correlation. Another problem for BCS-SPL is the difficulty of selecting the optimal block size: a large block size provides better performance than a small one but also leads to higher computational complexity. By utilizing both spatial and spectral sparsity, 3-D CS improves the reconstruction performance compared with BCS-SPL at high sampling rates; however, its performance is worse than that of BCS-SPL at low sampling rates. In general, CPPCA performs better than 3-D CS with lower computational cost. In our study, the performance achieved by CPPCA is better than that of 3-D CS for all tested HSIs except at the low sampling rate of 0.1, because CPPCA cannot work normally at very low sampling rates, which is a significant drawback of CPPCA.

Fig. 5

Comparison of reconstruction performance using SNR: (a) aviris_sc0.raw, (b) aviris_sc3.raw, (c) aviris_sc11.raw, (d) aviris_sc18.raw, (e) hawaii_sc01.raw, and (f) maine_sc10.raw.

JARS_13_3_036514_f005.png

It is obvious that SCR-SU performs much better than BCS-SPL, CPPCA, and 3-D CS at all sampling rates, which demonstrates the superior reconstruction quality obtained by spectral unmixing. Compared with 3-D CS, which uses convex optimization to reconstruct the target HSI, the proposed SCR-SUEC uses convex optimization to reconstruct the error image instead. In general, SCR-SU can be regarded as a special case of SCR-SUEC without error compensation. Since there is model error between ES and X, SCR-SUEC can further improve the reconstruction performance by exploiting the model error, even though SCR-SU already yields satisfactory reconstructions. As can be seen, at the lowest sampling rate (0.1), SCR-SUEC may degenerate into SCR-SU because it is difficult to explore the error image with so little information; even so, it still provides much better reconstruction quality than the other algorithms. As the sampling rate increases, SCR-SUEC achieves a prominent performance gain over SCR-SU since the model error is well explored. Note that when the sampling rate reaches 0.5, the performance gap between SCR-SUEC and SCR-SU can exceed 2 dB. In fact, onboard hyperspectral data are usually in raw format without any correction or other processing; therefore, experimental results on raw datasets are more convincing. On the other hand, onboard compression usually employs a low compression ratio in order to preserve as much information as possible. Because low sampling rates can easily destroy significant amounts of information in an HSI, higher sampling rates are preferred; in this case, SCR-SUEC significantly outperforms all the other algorithms.

In addition to SNR, SSIM is also used to evaluate the reconstruction quality; the comparison using SSIM is reported in Fig. 6. As can be seen, 3-D CS outperforms BCS-SPL significantly. At a low sampling rate such as 0.1, the SSIM achieved by CPPCA is very small, since CPPCA fails at low sampling rates; at the other sampling rates, CPPCA performs better than the two algorithms mentioned above. Note that SCR-SU performs better than the other algorithms, and SCR-SUEC further improves SSIM significantly compared with SCR-SU. Therefore, the results in Fig. 6 further demonstrate the effectiveness of the proposed SCR-SUEC in CS reconstruction of HSIs.

Fig. 6

Comparison of reconstruction performance using SSIM: (a) aviris_sc0.raw, (b) aviris_sc3.raw, (c) aviris_sc11.raw, (d) aviris_sc18.raw, (e) hawaii_sc01.raw, and (f) maine_sc10.raw.

JARS_13_3_036514_f006.png

In addition to objective quality evaluation using SNR, subjective quality should also be considered. The reconstructed images of the 90th band of aviris_sc18.raw obtained by the various algorithms at a sampling rate of 0.5 are shown in Fig. 7. Because each pixel of the tested images occupies 2 bytes, the reconstructed images are displayed using their normalized values. As can be seen, the visual quality of the image reconstructed by BCS-SPL is unsatisfactory; it is too rough to distinguish the detailed features needed for subsequent processing. Compared with BCS-SPL, 3-D CS improves the performance only slightly despite considering spectral sparsity, while CPPCA improves it significantly, effectively retaining the detailed features with a higher SNR. Note that SCR-SU outperforms the above three algorithms both in visual effect and in SNR, with detailed textures closer to the original image. By exploring the model error, SCR-SUEC further improves the reconstruction quality compared with SCR-SU, with a maximum performance gain of more than 5 dB for some bands. Although this improvement is not evident visually, it may be very important for applications such as classification, segmentation, and target detection.

Fig. 7

Comparison of reconstructed images of the 90th band of aviris_sc18.raw with a sampling rate of 0.5: (a) BCS-SPL, SNR=19.86  dB, (b) 3-D CS, SNR=31.60  dB, (c) CPPCA, SNR=42.03  dB, (d) SCR-SU, SNR=49.97  dB, and (e) SCR-SUEC, SNR=55.07  dB.


The comparison of reconstructed images achieved by the various algorithms at a low sampling rate of 0.2 is given in Fig. 8. As can be seen, the quality of the images reconstructed by both BCS-SPL and 3-D CS is poor, and the former exhibits obvious blocking artifacts. CPPCA provides moderate reconstruction quality, better than both BCS-SPL and 3-D CS. Note that the reconstructed images from SCR-SU and SCR-SUEC are visually pleasing, with detailed features more distinct than in the other reconstructions. This indicates that at lower sampling rates SCR-SU and SCR-SUEC hold great advantages over the other algorithms; moreover, the proposed SCR-SUEC still achieves a performance gain over SCR-SU even at this low sampling rate. To compare the algorithms further, Fig. 9 shows the reconstructed images at an even lower sampling rate of 0.1. It is clear that the quality of the images reconstructed by BCS-SPL, 3-D CS, and CPPCA is very poor: there are severe blocking artifacts in the BCS-SPL reconstructions and regular horizontal noise strips in the CPPCA reconstructions. Although the error exploration contributes little at such a very low sampling rate, the proposed method still obtains reconstructions of much higher quality, showing a clear advantage over BCS-SPL, 3-D CS, and CPPCA.

Fig. 8

Comparison of reconstructed images of the 90th band of aviris_sc18.raw with a sampling rate of 0.2: (a) BCS-SPL, SNR=16.22  dB, (b) 3-D CS, SNR=21.74  dB, (c) CPPCA, SNR=30.44  dB, (d) SCR-SU, SNR=48.16  dB, and (e) SCR-SUEC, SNR=48.78  dB.


Fig. 9

Comparison of reconstructed images of the 90th band of aviris_sc18.raw with a sampling rate of 0.1: (a) BCS-SPL, SNR=14.42  dB, (b) 3-D CS, SNR=16.59  dB, (c) CPPCA, SNR=2.97  dB, (d) SCR-SU, SNR=39.31  dB, and (e) SCR-SUEC, SNR=39.33  dB.


To further illustrate the performance from the perspective of spectral feature preservation, the average spectral angle distance (SAD) is introduced to measure the quality of the reconstructed images, which can be calculated as

Eq. (25)

$$\mathrm{SAD}(\mathbf{r},\mathbf{z})=\arccos\left[\frac{\sum_{k=1}^{L} r_k z_k}{\sqrt{\sum_{k=1}^{L} r_k^2}\,\sqrt{\sum_{k=1}^{L} z_k^2}}\right],$$
where r and z are two spectral curves with L bands. SAD effectively evaluates the distortion of spectral curves: the larger the SAD value, the greater the distortion. Table 1 compares the average SAD achieved by the various algorithms at a sampling rate of 0.5. As can be seen, SCR-SU yields lower SAD than BCS-SPL, 3-D CS, and CPPCA for all test images, and the proposed SCR-SUEC further reduces the SAD values relative to SCR-SU, demonstrating its excellent preservation of spectral characteristics.
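Eq. (25) can be computed directly from two spectral vectors. The sketch below is a minimal NumPy implementation; the clipping guards against floating-point cosine values slightly outside [-1, 1], and the function name is ours.

```python
import numpy as np

def spectral_angle(r, z):
    """Spectral angle distance (radians) between two spectral vectors of length L."""
    r, z = np.asarray(r, float), np.asarray(z, float)
    cos_theta = np.dot(r, z) / (np.linalg.norm(r) * np.linalg.norm(z))
    return float(np.arccos(np.clip(cos_theta, -1.0, 1.0)))
```

Parallel spectra give an angle of 0, so smaller values indicate better spectral preservation.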

Table 1

Comparison of SAD achieved by various algorithms (sampling rate = 0.5). The smallest SAD for each dataset is shown in boldface.

Image            BCS-SPL  3-D CS   CPPCA    SCR-SU   SCR-SUEC
aviris_sc0.raw   0.0268   0.0242   0.0053   0.0026   0.0020
aviris_sc3.raw   0.0606   0.0372   0.0157   0.0038   0.0028
aviris_sc11.raw  0.0491   0.0350   0.0047   0.0036   0.0027
aviris_sc18.raw  0.0657   0.0388   0.0087   0.0031   0.0024
hawaii_sc01.raw  0.0230   0.0490   0.0049   0.0038   0.0029
maine_sc10.raw   0.0369   0.0697   0.0042   0.0029   0.0022

The comparison of spectral curves of aviris_sc0 is given in Fig. 10, where the spatial location is (50, 50) and the sampling rate is 0.2. The spectral curve obtained by the proposed SCR-SUEC clearly shows the highest similarity to the original one even at this low sampling rate, which is important for postprocessing applications of the reconstructed images to achieve better results.

Fig. 10

Comparison of spectral curves at the same spatial location obtained by various algorithms.


The performance of the proposed algorithm was also compared with several improved CPPCA algorithms: class-dependent compressive projection principal component analysis (C-CPPCA),25 MH(U)-CPPCA, MH(NU)-CPPCA, MH(U)-C-CPPCA, and MH(NU)-C-CPPCA,15 where MH denotes multiple hypotheses, U denotes uniform grouping, and NU denotes nonuniform grouping. Table 2 compares the reconstruction performance of the various algorithms on the University of Pavia dataset. As can be seen, SCR-SU performs much better than the other state-of-the-art algorithms; moreover, the proposed SCR-SUEC further improves the reconstruction performance over SCR-SU, which demonstrates the benefit of error estimation and compensation.

Table 2

Comparison of reconstruction performance achieved by various algorithms (unit: decibels). The maximum SNR at each sampling rate is specified in boldface.

Sampling rate    0.2    0.3    0.4    0.5
CPPCA            13.95  16.80  20.11  22.65
C-CPPCA          16.55  19.65  21.55  23.62
MH(NU)-CPPCA     17.59  22.03  25.11  27.55
MH(U)-CPPCA      17.26  21.52  24.53  26.92
MH(NU)-C-CPPCA   19.16  23.13  25.44  27.95
MH(U)-C-CPPCA    18.99  22.48  24.94  27.34
SCR-SU           31.17  32.29  32.76  33.16
SCR-SUEC         31.47  33.06  34.06  35.14

5.3.

Comparison of Complexity

As for the computational complexity of the SCR-SUEC algorithm, the most time-consuming step is the calculation of W and H3 in each iteration, whose complexity orders are O(NJ2) and O(JNlogN), respectively; the overall complexity per iteration is therefore O(NJ2+JNlogN). Furthermore, we use the average running time per band to evaluate the complexity of each algorithm, as reported in Table 3. The algorithms were implemented in Matlab 2014 on a desktop PC equipped with an Intel Core CPU (3.30 GHz) and 8 GB of RAM. As can be seen, BCS-SPL has the highest complexity while its reconstruction performance is the worst. The complexity of 3-D CS is much lower than that of BCS-SPL, and CPPCA also has relatively low complexity. SCR-SU delivers high reconstruction performance at the lowest complexity. Owing to its iterative operation, the complexity of the proposed SCR-SUEC is much higher than that of SCR-SU; however, SCR-SUEC outperforms all the other algorithms in reconstruction performance. Because reconstruction is carried out on the ground, where computing resources are usually sufficient, this moderate increase in complexity does not significantly affect practical application.

Table 3

Comparison of complexity achieved by different algorithms (unit: seconds).

Algorithm   Running time/band (s)
BCS-SPL     5.612
3-D CS      3.179
CPPCA       0.005
SCR-SU      0.001
SCR-SUEC    3.317
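The per-band running times reported in Table 3 can be measured with a simple wall-clock scheme such as the one sketched below. The authors implemented their algorithms in Matlab; this Python version and its names are illustrative only.

```python
import time

def average_time_per_band(reconstruct, observations, n_bands):
    """Wall-clock time of one reconstruction call, averaged over spectral bands."""
    start = time.perf_counter()
    reconstruct(observations)          # run the reconstruction under test
    return (time.perf_counter() - start) / n_bands
```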

6.

Conclusion

In this paper, a reconstruction algorithm for spectral compressive sampling that explores the model error of the LMM is proposed. Assuming that the endmembers are known, compressive sampling is performed only along the spectral direction of the original HSI to retain the abundance information. At the reconstruction stage, considering the limitation of the LMM, the error it introduces is further explored by solving an optimization problem with TV regularization. To further improve the reconstruction accuracy, a joint optimization scheme for estimating the abundance and the model error is proposed and solved by alternating iterative updates. Once the optimal abundance and model error are obtained, the final reconstruction of the target HSI is computed according to the LMM. Experimental results on several real hyperspectral datasets demonstrate that the proposed SCR-SUEC outperforms the other classical algorithms in terms of both SNR and visual quality. In future research, more powerful sparse representations should be explored for the estimation of the model error, which may further improve the reconstruction performance, especially at low sampling rates.
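The alternating scheme summarized above can be sketched in simplified form. This is only an illustrative gradient version of the joint estimation of abundance A and model error N from spectral observations Y of the form Phi @ (E @ A + N); the paper's TV regularization, constraints, and exact update rules are omitted, and all names are ours.

```python
import numpy as np

def reconstruct_lmm(Y, Phi, E, n_iter=100):
    """Illustrative alternating gradient updates of abundance A and model
    error N for spectral CS observations Y = Phi @ (E @ A + N)."""
    L, P = E.shape[0], Y.shape[1]
    A = np.zeros((E.shape[1], P))
    N = np.zeros((L, P))
    # conservative step size from a Lipschitz bound on the data-fit gradient
    step = 1.0 / (np.linalg.norm(Phi, 2) ** 2 * (np.linalg.norm(E, 2) ** 2 + 1.0))
    for _ in range(n_iter):
        R = Phi @ (E @ A + N) - Y       # residual of the sampling model
        A -= step * (Phi @ E).T @ R     # abundance update with N fixed
        R = Phi @ (E @ A + N) - Y
        N -= step * Phi.T @ R           # error update with A fixed
    return E @ A + N                    # final reconstruction via the LMM
```

Each half-step minimizes the data-fit term over one variable with the other held fixed, mirroring the alternating iteration approach described in the paper.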

Acknowledgments

This work was supported by the Overseas Visiting and Research Project for Excellent Young Key Talents in Higher Education Institutions in Anhui Province (No. gxgwfx2019056), the Quality Engineering Project of Universities of Anhui Province (No. 2016zy126), and Chongqing Research Program of Basic Research and Frontier Technology (No. cstc2016jcyjA0539). The authors would like to thank the anonymous reviewers for their outstanding comments and suggestions, which greatly improved the technical quality of this paper.

References

1. D. L. Donoho, "Compressed sensing," IEEE Trans. Inf. Theory, 52(4), 1289–1306 (2006). https://doi.org/10.1109/TIT.2006.871582

2. L. Zhang et al., "Exploring structured sparsity by a reweighted Laplace prior for hyperspectral compressive sensing," IEEE Trans. Image Process., 25(10), 4974–4988 (2016). https://doi.org/10.1109/TIP.2016.2598652

3. Y. Wang et al., "Compressive sensing of hyperspectral images via joint tensor Tucker decomposition and weighted total variation regularization," IEEE Geosci. Remote Sens. Lett., 14(12), 2457–2461 (2017). https://doi.org/10.1109/LGRS.2017.2771212

4. L. Zhang et al., "Locally similar sparsity-based hyperspectral compressive sensing using unmixing," IEEE Trans. Comput. Imaging, 2(2), 86–100 (2016). https://doi.org/10.1109/TCI.2016.2542002

5. G. Martín and J. M. Bioucas-Dias, "Hyperspectral blind reconstruction from random spectral projections," IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens., 9(6), 2390–2399 (2016). https://doi.org/10.1109/JSTARS.2016.2541541

6. L. Wang et al., "Compressed sensing reconstruction of hyperspectral images based on spectral unmixing," IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens., 11(4), 1266–1284 (2018). https://doi.org/10.1109/JSTARS.2017.2787483

7. J. Hahn et al., "Compressive sensing and adaptive direct sampling in hyperspectral imaging," Digital Signal Process., 26, 113–126 (2014). https://doi.org/10.1016/j.dsp.2013.12.001

8. I. Noor and E. L. Jacobs, "Adaptive compressive sensing algorithm for video acquisition using a single-pixel camera," J. Electron. Imaging, 22(2), 021013 (2013). https://doi.org/10.1117/1.JEI.22.2.021013

9. S. Mun and J. E. Fowler, "Block compressed sensing of images using directional transforms," in Proc. IEEE Int. Conf. Image Process., 3021–3024 (2009). https://doi.org/10.1109/ICIP.2009.5414429

10. X. B. Shu and N. Ahuja, "Imaging via three-dimensional compressive sampling (3DCS)," in Proc. IEEE Int. Conf. Comput. Vision, 439–446 (2011). https://doi.org/10.1109/ICCV.2011.6126273

11. Y. B. Jia, Y. Feng and Z. L. Wang, "Reconstructing hyperspectral images from compressive sensors via exploiting multiple priors," Spectrosc. Lett., 48(1), 22–26 (2014). https://doi.org/10.1080/00387010.2013.850727

12. J. E. Fowler, "Compressive-projection principal component analysis," IEEE Trans. Image Process., 18(10), 2230–2242 (2009). https://doi.org/10.1109/TIP.2009.2025089

13. W. Li, S. Prasad and J. E. Fowler, "Integration of spectral-spatial information for hyperspectral image reconstruction from compressive random projections," IEEE Geosci. Remote Sens. Lett., 10(6), 1379–1383 (2013). https://doi.org/10.1109/LGRS.2013.2242043

14. N. H. Ly, Q. Du and J. E. Fowler, "Reconstruction from random projections of hyperspectral imagery with spectral and spatial partitioning," IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens., 6(2), 466–472 (2013). https://doi.org/10.1109/JSTARS.2012.2217942

15. C. Chen et al., "Reconstruction of hyperspectral imagery from random projections using multihypothesis prediction," IEEE Trans. Geosci. Remote Sens., 52(1), 365–374 (2014). https://doi.org/10.1109/TGRS.2013.2240307

16. G. Martín, J. M. Bioucas-Dias and A. Plaza, "HYCA: a new technique for hyperspectral compressive sensing," IEEE Trans. Geosci. Remote Sens., 53(5), 2819–2831 (2015). https://doi.org/10.1109/TGRS.2014.2365534

17. J. M. Bioucas-Dias et al., "Hyperspectral unmixing overview: geometrical, statistical, and sparse regression-based approaches," IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens., 5(2), 354–379 (2012). https://doi.org/10.1109/JSTARS.2012.2194696

18. A. Ramirez, G. R. Arce and B. M. Sadler, "Spectral image unmixing from optimal coded-aperture compressive measurements," IEEE Trans. Geosci. Remote Sens., 53(1), 405–415 (2015). https://doi.org/10.1109/TGRS.2014.2322820

19. M. Golbabaee, S. Arberet and P. Vandergheynst, "Compressive source separation: theory and methods for hyperspectral imaging," IEEE Trans. Image Process., 22(12), 5096–5110 (2013). https://doi.org/10.1109/TIP.2013.2281405

20. Z. L. Wang, Y. Feng and Y. B. Jia, "Spatio-spectral hybrid compressive sensing of hyperspectral imagery," Remote Sens. Lett., 6(3), 199–208 (2015). https://doi.org/10.1080/2150704X.2015.1024892

21. J. E. Fowler, "Compressive pushbroom and whiskbroom sensing for hyperspectral remote-sensing imaging," in Proc. IEEE Int. Conf. Image Process., 684–688 (2014). https://doi.org/10.1109/ICIP.2014.7025137

22. S. Boyd et al., "Distributed optimization and statistical learning via the alternating direction method of multipliers," Found. Trends Mach. Learn., 3(1), 1–122 (2010). https://doi.org/10.1561/2200000016

23. Jet Propulsion Lab., "AVIRIS free data," (2008). http://aviris.jpl.nasa.gov/html/aviris.freedata.html

24. J. M. P. Nascimento and J. M. B. Dias, "Vertex component analysis: a fast algorithm to unmix hyperspectral data," IEEE Trans. Geosci. Remote Sens., 43(4), 898–910 (2005). https://doi.org/10.1109/TGRS.2005.844293

25. W. Li, S. Prasad and J. E. Fowler, "Classification and reconstruction from random projections for hyperspectral imagery," IEEE Trans. Geosci. Remote Sens., 51(2), 833–843 (2013). https://doi.org/10.1109/TGRS.2012.2204759

Biography

Zhongliang Wang received his PhD in signal and information processing in 2015 from the School of Electronics and Information, Northwestern Polytechnical University, Xi'an, China. Currently, he is an associate professor at the Department of Electrical Engineering, Tongling University, Tongling, China. His research interests include image enhancement and hyperspectral remote sensing.

Mi He received her PhD in information and communication engineering in 2012 from the National University of Defense Technology, Changsha, China. Since 2015, she has been an associate professor at the College of Biomedical Engineering and Imaging Medicine, Army Medical University. Her research involves remote sensing and biomedical signal processing.

Zhen Ye received her PhD in information and communication engineering from the Northwestern Polytechnical University, Xi’an, China, in 2015. She spent one year as an exchange student at Mississippi State University. Currently, she is an associate professor at the School of Electronics and Control Engineering, Chang’an University. Her research interests include hyperspectral image (HSI) analysis, pattern recognition, and machine learning.

Yongjian Nian received his PhD in information and communication engineering from the National University of Defense Technology, Changsha, China. He is an associate professor at the College of Biomedical Engineering and Imaging Medicine, Army Medical University. His research fields include HSI processing, pattern recognition, and machine learning.

Liang Qiao received his PhD in biomedical engineering in 2017 from Army Medical University, Chongqing, China. He is an associate professor at the College of Biomedical Engineering and Imaging Medicine, Army Medical University. His research interests include pattern recognition and machine learning.

Mingsheng Chen received his PhD from the College of Electronic Science and Engineering, National University of Defense Technology, Changsha, in 2012. From 2014 to 2018, he worked as a postdoctoral fellow at the College of Biomedical Engineering and Imaging Medicine, Army Medical University, Chongqing. His research interests include pattern recognition and machine learning.

CC BY: © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Wang Zhongliang, Mi He, Zhen Ye, Yongjian Nian, Liang Qiao, and Mingsheng Chen "Exploring error of linear mixed model for hyperspectral image reconstruction from spectral compressive sensing," Journal of Applied Remote Sensing 13(3), 036514 (23 September 2019). https://doi.org/10.1117/1.JRS.13.036514
Received: 15 May 2019; Accepted: 5 September 2019; Published: 23 September 2019
KEYWORDS: Reconstruction algorithms, Signal to noise ratio, Hyperspectral imaging, 3D image reconstruction, Error analysis, 3D image processing, Compressed sensing