Nonlocal weighted sparse unmixing based on global search and parallel optimization

Abstract. Sparse unmixing (SU) can represent an observed image using pure spectral signatures and their corresponding fractional abundances drawn from a large spectral library and is an important technique in hyperspectral unmixing. However, existing SU algorithms mainly exploit spatial information from a fixed neighborhood system, which is not sufficient. To solve this problem, we propose a nonlocal weighted SU algorithm based on global search (G-NLWSU). By exploring the nonlocal similarity of the hyperspectral image, the weights of pixels are calculated to form a matrix that weights the abundance matrix. Specifically, G-NLWSU first searches for a similar group for each pixel in the global scope, then uses singular value decomposition to denoise, and finally obtains the weight matrix by considering correlations between similar pixels. To reduce the execution burden of G-NLWSU, we propose a parallel version, PG-NLWSU, which integrates compute unified device architecture-based parallel computing into G-NLWSU to accelerate the search for groups of nonlocally similar pixels. Our proposed algorithms shed new light on SU by considering a new exploitation process for spatial information and a parallel computing scenario. Experimental results on simulated and real datasets show that PG-NLWSU is superior to the comparison algorithms.


Introduction
The rapid development of hyperspectral remote sensing technology has resulted in its wide use, such as in geological prospecting, target detection, and environmental surveillance. 1 However, owing to the low spatial resolution of sensors, spatial complexity leads to mixed pixels in a hyperspectral image (HSI), i.e., pixels formed by several different substances. 2 Linear spectral unmixing, based on the linear mixing model (LMM), is a standard method to solve the mixed-pixel problem; it assumes that the observed spectral signal can be expressed linearly through a set of pure spectral signatures and their corresponding abundance weights. 3 LMM-based methods can be categorized as either unsupervised or semisupervised unmixing algorithms. Unsupervised unmixing methods require endmember extraction and abundance estimation steps, 4,5 and can be classified as geometric-, [6-18] statistic-, [19-24] or spatial-information-based. [25-29] Geometric-based algorithms usually assume that the desired pure pixels exist in the image, but they tend to fail if the degree of mixing is high. Statistic-based algorithms can elegantly handle highly mixed scenes, but they consider only spectral information and ignore correlations between pixels. Spatial-information-based algorithms fuse spatial and spectral information for endmember extraction and have attracted considerable attention.
Compared with the unsupervised methods, semisupervised unmixing algorithms, also known as sparse unmixing (SU) algorithms, solve the spectral unmixing problem by introducing a large public spectral library. They assume that the observed spectral signal can be represented by a linear combination of a few spectral signatures in the library. Therefore, the SU problem is equivalent to searching for an optimal subset of the spectral library that simulates the mixed pixels in the scene. 30 However, the number of spectra in the library is much larger than the number of endmembers, implying that the abundance matrix usually has only a few nonzero entries, i.e., it is sparse. Bioucas-Dias and Figueiredo 31 proposed a sparse unmixing algorithm by variable splitting and augmented Lagrangian (SUnSAL), which uses the alternating direction method of multipliers (ADMM) to optimize this constrained sparse regression problem. To enhance the sparsity of the solution, Iordache et al. 32 proposed a SUnSAL extension that considers collaborative sparsity, i.e., the l_{2,1} norm, to simultaneously boost row sparseness of the abundance matrix. Sun et al. 33 proposed the l_p (0 < p < 1) norm as a new sparsity regularizer. Cheng et al. 34 introduced weighted l1 norm minimization to penalize the nonzero terms of sparse solutions. (*Address all correspondence to Wenxing Bao, baowenxing@nun.edu.cn)
An HSI contains not only a large amount of spectral information but also rich spatial information. Iordache et al. 35 proposed an SU algorithm by variable splitting, augmented Lagrangian, and total variation (SUnSAL-TV), where the total variation (TV) regularizer explores spatial neighborhood information; when only the TV regularizer is used, the method is called nonnegative constrained least squares TV (NCLS-TV). Zhang et al. 36 proposed a local collaborative SU that combines neighborhood weights with collaborative sparse regression to achieve better unmixing results than SUnSAL-TV. Based on the weighted l1 strategy, Wang et al. 37,38 proposed a double-weighted SU algorithm that enhances the sparsity of the abundances in both the spectral and spatial domains, as well as a variant with TV regularization. Wang et al. 39 introduced graph regularization to improve unmixing performance. Zhang et al. 40 proposed a spectral-spatial weighted SU model that uses the neighborhood information of the image in a weight matrix. Zhong et al. 41 proposed a nonlocal TV regularizer for sparse unmixing (NLSU). Feng et al. 42 improved NLSU and extracted fixed spatial information from the original image.
However, unmixing algorithms that combine spatial information still have two problems: (1) global spatial information is not considered, since local and nonlocal methods find similar pixels only within a neighborhood or within large local blocks, respectively; and (2) the spatial regularization term makes the model more complicated. In this paper, we propose a nonlocal weighted SU model, PG-NLWSU, based on global search and parallel acceleration. Inspired by the nonlocal means algorithm, PG-NLWSU first searches for similar blocks of every pixel across the whole image and uses singular value decomposition (SVD) to denoise them. The explored nonlocal information is used to weight the abundance matrix under the l1 norm, generating a solution with stronger sparsity. Because searching for similar blocks is generally time-consuming, PG-NLWSU uses parallel computing to enhance its performance. We make three primary contributions:

1. We propose a global search strategy to exploit nonlocal similar blocks from the whole image.

2. The exploited nonlocal information is used as a weight matrix to penalize the abundance matrix under the l1 norm.

3. Parallel computing is applied to spatial information extraction to improve the algorithm's efficiency.
Several experiments conducted on synthetic and real hyperspectral datasets indicate that PG-NLWSU significantly improves upon current SU algorithms.
The rest of this article is organized as follows. Section 2 describes SU theory. Section 3 discusses the PG-NLWSU algorithm. Section 4 details experiments on simulated and real datasets. Section 5 discusses the advantages and disadvantages of the algorithm and relates directions of future research.

LMM
The LMM assumes that a mixed pixel in an HSI can be expressed as a linear combination of the endmember spectra and their corresponding abundances:

y = Mα + n,   (1)

where y ∈ R^{L×1}, M ∈ R^{L×q}, α ∈ R^{q×1}, and n ∈ R^{L×1} denote the observed mixed-pixel spectrum, the endmember spectral matrix, the abundance vector, and the error term, respectively. α is generally subject to two constraints: the abundance nonnegativity constraint (ANC), i.e., α ≥ 0, and the abundance sum-to-one constraint (ASC), i.e., 1^T α = 1.
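As a concrete illustration, the mixing model above can be simulated in a few lines of Python; the sizes L and q and all values below are illustrative, not taken from the paper's experiments:

```python
import numpy as np

rng = np.random.default_rng(0)

L, q = 224, 5                       # illustrative: L bands, q endmembers
M = rng.random((L, q))              # endmember signature matrix (one column per endmember)
alpha = rng.random(q)
alpha /= alpha.sum()                # ASC: abundances sum to one
n = 1e-3 * rng.standard_normal(L)   # small additive error term

y = M @ alpha + n                   # observed mixed pixel: y = M*alpha + n
```

The ANC (α ≥ 0) holds here by construction; in the inverse (unmixing) problem it must be enforced explicitly, which is what the indicator function in the later sections does.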

Sparse Unmixing
SU introduces a large spectral library into the LMM:

Y = AX + N,   (2)

where Y ∈ R^{L×P} is an HSI with L spectral bands and P pixels, A ∈ R^{L×M} is a spectral library with M spectral signatures, X ∈ R^{M×P} is the fractional abundance matrix corresponding to all signatures in the library, and N ∈ R^{L×P} is the error affecting each pixel. Since an HSI is usually composed of only a small number of endmember spectra, the fractional abundance matrix X is sparse, so SU can be posed as a minimization that regularizes X with a sparsity term:

min_X (1/2)||Y − AX||_F^2 + λ||X||_0  s.t. X ≥ 0,   (3)

where ||·||_F is the Frobenius norm, which measures the error between the measured and reconstructed data, ||·||_0 is the l0 sparsity constraint, and λ is a regularization parameter. It is worth noting that, owing to spectral variability in hyperspectral data, only the ANC is normally imposed and the ASC is dropped. Because Eq. (3) is nonconvex and hard to solve, the l1 norm is used as a convex surrogate:

min_X (1/2)||Y − AX||_F^2 + λ||X||_1  s.t. X ≥ 0.   (4)

To further enhance sparsity, a weighted l1 norm can be introduced:

min_X (1/2)||Y − AX||_F^2 + λ||W ⊙ X||_1  s.t. X ≥ 0,   (5)

where ⊙ represents elementwise (Hadamard) multiplication and the weight matrix W ∈ R^{M×P} is obtained as

W_{ij}^{t+1} = 1 / (|X_{ij}^t| + ε),   (6)

where ε is an extremely small constant and X_{ij}^t is the value of the (i, j)'th element of the abundance matrix X at the t'th iteration.
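The reweighting rule above is simple to state in code; a minimal sketch (the constant eps and the toy matrix are illustrative):

```python
import numpy as np

def weight_update(X, eps=1e-6):
    """Reweighting rule: entries with small abundance receive large weights,
    so minimizing ||W ⊙ X||_1 drives them further toward zero."""
    return 1.0 / (np.abs(X) + eps)

X = np.array([[0.7, 0.0],
              [0.0, 0.3]])
W = weight_update(X)
```

Large abundances get small weights while zero entries get very large ones, which is what progressively enhances sparsity over the iterations.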

Nonlocal Weighted Sparse Unmixing Method Based on Global Search

Nonlocal Similarity Block Based on Global Search
Nonlocal information, as important spatial information, is widely used in HSI classification, 44,45 denoising, 46 and unmixing. 47 However, nonlocal-based methods tend to use a local search-box strategy to extract nonlocally similar pixels, 44-47 which requires a large number of local search boxes. Although this search strategy improves execution efficiency, it misses many similar pixels that lie far from the fixed search area. We instead use a global search strategy to extract similar pixels, which exploits spatial information more fully. In particular, abundance images and HSIs have similar spatial characteristics. 48 As shown in Fig. 1, the following operations are performed on the abundance image X:

1. For each pixel x, take the 8-neighborhood block S_x centered on x and search the whole image for the j neighborhood blocks S_1, …, S_j with the highest similarity to S_x, with similarity measured as

d(S_x, S_y) = ||g ⊗ (S_x − S_y)||_2^2,   (7)

where S_x and S_y represent the neighborhood blocks centered on x and y, respectively, g is a Gaussian kernel function, and ⊗ denotes the convolution operation.

2. Extract the center pixels of the j similar blocks, reorganize them into a similar-pixel group P_x = [s_1, …, s_j]^T, and then perform low-rank noise reduction on P_x:

[U, Σ, V] = svd(P_x),   (8)

where svd(·) is the singular value decomposition, followed by the reconstruction

P_x ← U_k Σ_k V_k^T,   (9)

where U denotes the left singular vector matrix, Σ is the diagonal matrix whose diagonal elements are the singular values in descending order, and V denotes the right singular vector matrix. The largest k singular values in Σ and their corresponding left and right singular vectors are selected to reconstruct P_x.
3. Perform a weighted summation on P_x to obtain the nonlocal similarity value NLS(x) corresponding to x. The weights are

W(i) = (1/Z(x)) exp(−||S_x − S_i||_2^2 / h^2),   (10)

where s_i is the center pixel of the i'th similar block of pixel x, h is a smoothing parameter, and Z(x) is a normalizing constant so that the weights sum to one. Then

NLS(x) = Σ_{i=1}^{j} W(i) P_x(i, :),   (11)

where P_x(i, :) is the i'th similar pixel of x and W(i) is the weight corresponding to the i'th similar pixel.

4. Perform the above three steps for all pixels to obtain the final nonlocal similarity matrix NLS(X).
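The four steps above can be sketched as a minimal, unoptimized Python version. It assumes an (H, W, B) abundance-like array and 3 × 3 blocks, and omits the Gaussian kernel from the block distance for brevity; all parameter values are illustrative:

```python
import numpy as np

def nonlocal_similarity(img, j=8, k=2, h=0.5):
    """Sketch of the global-search nonlocal similarity computation.
    For every pixel: search the WHOLE image for the j most similar
    3x3 blocks, denoise the group of center pixels with a rank-k SVD,
    then take a similarity-weighted sum."""
    H, W, B = img.shape
    pad = np.pad(img, ((1, 1), (1, 1), (0, 0)), mode="edge")
    # all flattened 3x3xB neighborhood blocks: shape (H*W, 9B)
    blocks = np.stack([pad[r:r + 3, c:c + 3, :].ravel()
                       for r in range(H) for c in range(W)])
    centers = img.reshape(-1, B)                       # center pixel of each block
    out = np.empty((H * W, B))
    for idx in range(H * W):
        d = np.sum((blocks - blocks[idx]) ** 2, axis=1)   # block distances (global)
        sim = np.argsort(d)[:j]                            # j most similar blocks
        P = centers[sim]                                   # similar-pixel group P_x, (j, B)
        U, S, Vt = np.linalg.svd(P, full_matrices=False)   # SVD of the group
        S[k:] = 0.0                                        # keep the k largest singular values
        P = (U * S) @ Vt                                   # low-rank reconstruction
        w = np.exp(-d[sim] / h ** 2)
        w /= w.sum()                                       # normalized similarity weights
        out[idx] = w @ P                                   # weighted summation -> NLS(x)
    return out.reshape(H, W, B)
```

The double loop over all pixel pairs is exactly the cost that Sec. on parallel computing addresses: every iteration of the outer loop is independent of the others.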

Nonlocal Weighted Sparse Unmixing Method
The proposed nonlocal weighted algorithm, G-NLWSU, is formulated as

min_X (1/2)||Y − AX||_F^2 + λ||W_NLS ⊙ X||_1  s.t. X ≥ 0,   (12)

with weighting matrix

W_NLS^{t+1} = 1 / (NLS(X^t) + ε),   (13)

where NLS(·) is the nonlocal similarity matrix extracted from the abundance matrix in each iteration, calculated by Eq. (11). Equation (12) can be expressed in unconstrained form as

min_X (1/2)||Y − AX||_F^2 + λ||W_NLS ⊙ X||_1 + ι_{R+}(X),   (14)

where ι_{R+}(X) = Σ_{i=1}^{P} ι_{R+}(X_i) is an indicator function, X_i is the i'th column of X, ι_{R+}(X_i) = 0 if X_i is nonnegative, and ι_{R+}(X_i) = +∞ otherwise.

The equivalent constrained formulation of Eq. (14) is

min_{U, V_1, V_2, V_3} (1/2)||Y − V_1||_F^2 + λ||W_NLS ⊙ V_2||_1 + ι_{R+}(V_3)  s.t. V_1 = AU, V_2 = U, V_3 = U,   (15)

which is solved by ADMM (see Appendix A). Algorithm 1 shows the algorithmic pseudocode.

Algorithm 1 Pseudocode of the G-NLWSU algorithm.
1: Initialization: set t = 0; choose λ, μ > 0, ε; initialize X^0, V_1^0, V_2^0, V_3^0, D_1^0, D_2^0, D_3^0.
2: repeat
3:   Extract NLS(X^t) and update the weight matrix W_NLS by Eq. (13).
4:   Update U, V_1, V_2, and V_3 (see Appendix A).
5:   Update the Lagrange multipliers D_1, D_2, D_3.
6: until some stopping criterion is satisfied.

Parallel Computing-Based G-NLWSU
HSI data have extremely high dimensions and complex topographic features, which means an unmixing task requires much computational time. With recent hardware developments, graphics processing units (GPUs) have been used with large-scale data in parallel computing strategies, for example, to improve computational efficiency in hyperspectral classification. 49,50 We introduce a parallel computing-based G-NLWSU method, PG-NLWSU. The computing strategy uses the Compute Unified Device Architecture (CUDA) platform, a general-purpose computing architecture developed by NVIDIA that exploits the parallel computing power of a GPU. CUDA controls the GPU through kernel functions. Since its layered structure (grid, block, and thread) is similar to that of three-dimensional images, it maps intuitively to image data. We use a hybrid programming strategy based on MATLAB and CUDA to accelerate the algorithm.
The data interaction between the host (multicore central processing units, CPUs) and device (GPUs) usually consumes much computational time and storage space. To alleviate these problems, we minimize the number of data transfers between the host and device, load some important calculation tasks to the GPU, return the results to the CPU, and use space preallocation or memory reduction to reduce memory consumption. After finishing calculations in the GPU, the occupied memory is released.
The complexity of W_NLS in each iteration is O(MPS), where S is the number of sub-blocks of the image. To improve efficiency, we optimize the algorithm as follows: 1. The matrix U is transferred to the GPU before the outer loop, and the nonlocal similar-block operation is performed by the NLS_kernel function. 2. A grid composed of blocks and threads is configured in stages in the kernel function, as shown in Fig. 2.
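The per-pixel search that the NLS_kernel parallelizes is embarrassingly parallel: each pixel's block-distance computation is independent of all others, which is why a one-thread-per-pixel CUDA mapping works. Without a GPU, the same structure can be expressed as one batched matrix computation; a sketch (the function name and shapes are illustrative, not the authors' kernel):

```python
import numpy as np

def pairwise_block_distances(blocks):
    """blocks: (P, d) array, one flattened neighborhood block per pixel.
    Returns the (P, P) matrix of squared Euclidean distances using the
    identity ||a - b||^2 = ||a||^2 + ||b||^2 - 2 a.b, so the whole global
    search reduces to a single matrix product (the GPU-friendly form)."""
    sq = np.sum(blocks ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * blocks @ blocks.T
    return np.maximum(d2, 0.0)  # clip tiny negatives caused by round-off
```

Each row of the result corresponds to the work of one CUDA thread; selecting the j smallest entries per row yields that pixel's similar group.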

Computational Complexity Analysis
The experiments in this article were performed using MATLAB R2018a and CUDA v10.2, with a GTX 1060 GPU and 16 GB of memory.
For G-NLWSU, the most expensive computational step is W_NLS, which has complexity O(MPS), where P is the total number of pixels of the image and S is the number of sub-blocks. For the U update, the terms A^T A, (A^T A + 2I)^{-1}, A^T (V_1 + D_1), and the final M × M by M × P product have respective complexities O(LM^2), O(M^3), O(MPL), and O(PM^2). It is worth noting that P and S are usually greater than L and M, so the complexity of the U update is O(MP · max{L, M}). The complexity of V_1^t and D_1^t is O(MPL), and the complexity of the other steps is O(MP). In PG-NLWSU, the computational complexity of W_NLS is O[MS · (P/C)], where C is the parallel computing power of the GPU.

Experiments and Analysis
In this section, we evaluate PG-NLWSU on simulated and real datasets.

Evaluation Metrics
We consider four evaluation metrics to clearly compare the performance of the algorithms.

[Algorithm 2 (PG-NLWSU) proceeds as follows: transfer the data U from host to device; run the NLS_kernel on the device; transfer P_x from device to host; perform the remaining updates on the CPU; repeat until some stopping criterion is satisfied.]

(1) Signal-to-reconstruction error (SRE), in dB:

SRE = 10 log_10( E[||x||_2^2] / E[||x − x̂||_2^2] ),   (16)

where x̂ is the estimated abundance, x is the true abundance, and E(·) denotes expectation. The larger the SRE, the smaller the error between the estimated and true abundance matrices.

(2) Probability of success (P_s), an estimate of the probability that the relative error power is below a certain threshold:

P_s ≡ P( ||x||_2^2 / ||x̂ − x||_2^2 ≥ threshold ),   (17)

where x̂ is the estimate and x is the true abundance. Following previous work on SUnSAL-TV, we set threshold = 3.16 (5 dB); when P_s = 1, the total relative error power of every abundance estimate is less than 1/3.16. We use P_s to supplement SRE as a stability index, since SRE alone cannot convey the stability of the estimate.
(3) Sparsity:

Sparsity = s / (mn),   (18)

where s is the number of elements of X greater than 0.005 and mn is the total number of elements of the matrix X. The smaller this value, the sparser the matrix.
(4) Root-mean-square error (RMSE):

RMSE = sqrt( (1/(mn)) Σ_{i=1}^{m} Σ_{j=1}^{n} (X_ij − X̂_ij)^2 ),   (19)

where X_ij and X̂_ij are the element values at position (i, j) of the estimated and true matrices, respectively, and m and n are the matrix dimensions. The smaller the RMSE, the smaller the error between the estimated and true values.
Based on the above descriptions, SRE and RMSE evaluate the error of the unmixing results, P_s evaluates the stability of the algorithm, and Sparsity compares the sparsity of the unmixing results.
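The four metrics above translate directly into code; a sketch, assuming abundance matrices stored with one pixel per column (the threshold values follow the text):

```python
import numpy as np

def sre_db(X_hat, X_true):
    """Signal-to-reconstruction error in dB (larger is better)."""
    return 10.0 * np.log10(np.sum(X_true ** 2) / np.sum((X_true - X_hat) ** 2))

def prob_success(X_hat, X_true, threshold=3.16):
    """Fraction of pixels whose signal-to-error power ratio is at least
    `threshold` (3.16, i.e., 5 dB, following SUnSAL-TV)."""
    sig = np.sum(X_true ** 2, axis=0)
    err = np.sum((X_true - X_hat) ** 2, axis=0)
    return np.mean(sig / err >= threshold)

def sparsity(X_hat, tol=0.005):
    """Fraction of entries larger than tol (smaller means sparser)."""
    return np.sum(np.abs(X_hat) > tol) / X_hat.size

def rmse(X_hat, X_true):
    """Root-mean-square error (smaller is better)."""
    return np.sqrt(np.mean((X_true - X_hat) ** 2))
```

Note that `sre_db` and `prob_success` assume a nonzero error; a perfect estimate would divide by zero and should be special-cased in practice.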

Experiment on Simulated Datasets
PG-NLWSU was tested on two simulated datasets to validate its unmixing performance. For comparison, SUnSAL, CLSUnSAL, NCLS-TV, and SUnSAL-TV were run on the same simulated datasets.

Simulated datasets
The spectral library A ∈ R 224×240 used in the simulated data experiment is a combination of 240 mineral types from the United States Geological Survey (USGS) library. The reflectance values of the spectral characteristics are given in 224 spectral bands and are evenly distributed in the range of 0.4 to 2.5 μm.
1. Simulated data cube 1 (DC1) is composed of nine endmember signatures randomly selected from spectral library A. 51 It contains 100 × 100 pixels and 224 bands, with added Gaussian noise (SNR = 30, 40, 50 dB). The ground-truth abundances of each endmember are shown in Fig. 3.

2. As shown in Fig. 4, simulated data cube 2 (DC2) is composed of five endmembers, and the abundance matrix is generated using the hyperspectral imagery synthesis (EIAS) toolbox. The abundance distribution of the endmembers in DC2 is relatively uniform and contains much spatial information. Independent and identically distributed Gaussian noise (SNR = 30, 40, 50 dB) is added.

Influence of regularization parameters
We tested the effect of the regularization parameters of the different algorithms on SRE over the range 0 to 0.1 on DC1. The results are shown in Fig. 5. Among the algorithms, PG-NLWSU had the highest accuracy. Comparing the curves of NCLS-TV and SUnSAL-TV shows that the spatial regularization parameter λ_TV has a greater influence on accuracy than λ. Comparing SUnSAL, CLSUnSAL, NCLS-TV, and PG-NLWSU shows that unmixing algorithms that use spatial information reach the optimal solution more easily.

Optimization effect analysis
We compare the speed of nonlocal similarity block extraction (the outer loop) between G-NLWSU and PG-NLWSU on the following data: (1) DC1: data size 100 × 100 × 224.
The results are shown in Table 1. The speedup is defined as the ratio of the running times of the two algorithms. According to the table, as the image size and the amount of information increase, the advantage of parallel acceleration becomes more obvious.

Results on simulated data sets
We show the experimental results of PG-NLWSU and the comparison algorithms on the two simulated datasets. Figure 6 shows the unmixing results of the different algorithms on endmembers 1, 3, 7, 8, and 9 of DC1 when SNR = 30 dB. Figure 7 shows the unmixing results for all endmembers of DC2 when SNR = 30 dB. Table 2 lists the optimal parameters of all algorithms when the SNR is fixed at 30, 40, and 50 dB on DC1 and DC2. Tables 3 and 4 show the metric results of all algorithms on DC1 and DC2 under the optimal parameters; bold represents the best experimental result. As can be seen from Figs. 6 and 7, the endmember abundances estimated by SUnSAL are relatively poor. The result of CLSUnSAL is similar to that of SUnSAL but contains more information. Comparing NCLS-TV and SUnSAL-TV, one finds that TV regularization leads to excessive smoothing of the abundances, which is obvious in the results for DC1 endmembers 7 and 8 and DC2 endmember 3. The unmixing results of PG-NLWSU are significantly better than those of the other algorithms, indicating that the nonlocal weighting matrix improves performance.
From the results in Table 3, PG-NLWSU provides higher SRE and P_s and lower Sparsity and RMSE than the comparison algorithms at all noise levels, especially in the worst noise scenario (30 dB), which means that PG-NLWSU is highly robust to noise.
According to the results in Table 4, the SRE, Sparsity, and RMSE values under all noise intensities show that PG-NLWSU generates sparser and more accurate unmixing results than the other algorithms, primarily because DC2 has a relatively uniform spatial distribution; this indicates that the nonlocal weighted model effectively promotes the sparseness of the solution. For P_s, PG-NLWSU again achieves the best result, especially at 30 dB, which implies that the proposed algorithm is very stable.
The computational efficiency of all algorithms on DC1 and DC2 is shown in Table 5; bold represents the best experimental result. SUnSAL takes the least time. NCLS-TV and SUnSAL-TV run slower because they must extract the neighborhood information of the image, and CLSUnSAL and PG-NLWSU take longer still, but as the SNR increases, PG-NLWSU shows higher computational efficiency.

Experiment on Real Dataset
We experimentally evaluate the algorithm on the AVIRIS Cuprite dataset, which was collected over a mining district in Nevada and is widely used to validate spectral unmixing algorithms. Figure 8 shows the mineral distribution map drawn by the USGS in 1995. The band range is 0.4 to 2.5 μm. A 250 × 191-pixel subregion with 188 bands is considered, where noisy bands 1 and 2 and water-absorption bands 104 to 113 and 148 to 167 are removed from the original 224 bands. The spectral library A ∈ R^{188×498} was processed in the same way to retain the corresponding 188 bands of the USGS spectral library. Figure 9 shows the Cuprite data cube and the spectral library A. Since no true abundance map is available, the results of the algorithm are compared with those of SUnSAL, CLSUnSAL, NCLS-TV, and SUnSAL-TV, as shown in Fig. 10. Figure 10 compares the abundances of three minerals (alunite, buddingtonite, and chalcedony) in the Cuprite area estimated by SUnSAL, NCLS-TV, CLSUnSAL, SUnSAL-TV, and PG-NLWSU. We set the parameters of SUnSAL, NCLS-TV, CLSUnSAL, and PG-NLWSU to λ = 1e−3, λ_TV = 3e−9, λ = 9e−2, and λ = 9e−5, respectively, and set the parameters of SUnSAL-TV to λ = 1e−3 and λ_TV = 3e−9. The abundance map of buddingtonite clearly shows how the different algorithms handle noise, whereas that of chalcedony shows the algorithms' ability to unmix fine details. It can be seen that the abundance maps of PG-NLWSU are better than those of the other algorithms.

Conclusions
SU is a classical approach to the spectral unmixing problem, and using a weighted l1 norm to enhance the sparsity of the abundance matrix is a current research hotspot. We proposed a nonlocal weighted sparse unmixing algorithm (PG-NLWSU) based on global search and parallel acceleration. The nonlocal mean is introduced into the sparse regression model through a weighting factor, providing a new way to utilize the nonlocal information of the image. To fully exploit this nonlocality, we use a search box of the same size as the image, and parallel computing keeps this search efficient. Experimental results show that the proposed algorithm performs better on images with abundant nonlocal information. In future work, we will consider the texture information of the image and use it to eliminate the smoothing effect caused by the use of neighborhood or nonlocal information.

Appendix A: Model Solution
Equation (15) can be written compactly as

min_{U,V} g(V)  s.t. GU + BV = 0,   (20)

where

V = (V_1; V_2; V_3),  g(V) = (1/2)||Y − V_1||_F^2 + λ||W_NLS ⊙ V_2||_1 + ι_{R+}(V_3),  G = [A; I; I],  B = −I.   (21)

Equation (20) is solved by the ADMM algorithm; its augmented Lagrangian is

L(U, V, D) ≡ g(V) + (μ/2)||GU + BV − D||_F^2,   (22)

where μ > 0 and D is the scaled Lagrange multiplier. Expanding Eq. (22) gives

L(U, V, D) = (1/2)||Y − V_1||_F^2 + λ||W_NLS ⊙ V_2||_1 + ι_{R+}(V_3) + (μ/2)||AU − V_1 − D_1||_F^2 + (μ/2)||U − V_2 − D_2||_F^2 + (μ/2)||U − V_3 − D_3||_F^2.   (23)

In Eq. (23), the variables are separated during the solution process: when solving for one variable (e.g., U), the other variables (e.g., V, D) are held fixed. Solving for the three variables in turn, the optimization with respect to U is

U^{t+1} ← arg min_U ||AU − V_1^t − D_1^t||_F^2 + ||U − V_2^t − D_2^t||_F^2 + ||U − V_3^t − D_3^t||_F^2,   (24)

whose matrix-derivative solution is

U^{t+1} = (A^T A + 2I)^{-1} [A^T (V_1^t + D_1^t) + (V_2^t + D_2^t) + (V_3^t + D_3^t)].   (25)

The optimization function of V_1 is

V_1^{t+1} ← arg min_{V_1} (1/2)||Y − V_1||_F^2 + (μ/2)||AU^{t+1} − V_1 − D_1^t||_F^2,   (26)

and its solution is

V_1^{t+1} = [Y + μ(AU^{t+1} − D_1^t)] / (1 + μ).   (27)

The optimization function of V_2 is

V_2^{t+1} ← arg min_{V_2} λ||W_NLS ⊙ V_2||_1 + (μ/2)||U^{t+1} − V_2 − D_2^t||_F^2,   (28)

with solution

V_2^{t+1} = soft(U^{t+1} − D_2^t, (λ/μ) W_NLS),   (29)

where soft(a, b) is the soft-threshold function

soft(a, b) ≡ sign(a) max{|a| − b, 0},   (30)

and sign(a) is the sign function, equal to 1 for a > 0 and −1 for a < 0.

The optimization function of V_3 is

V_3^{t+1} ← arg min_{V_3} ι_{R+}(V_3) + (μ/2)||U^{t+1} − V_3 − D_3^t||_F^2;   (31)

since the actual role of ι_{R+} is to remove the negative values of the solution, the solution of V_3 is

V_3^{t+1} = max(U^{t+1} − D_3^t, 0).   (32)

Finally, the Lagrange multipliers are updated as D_1^{t+1} = D_1^t − (AU^{t+1} − V_1^{t+1}), D_2^{t+1} = D_2^t − (U^{t+1} − V_2^{t+1}), and D_3^{t+1} = D_3^t − (U^{t+1} − V_3^{t+1}).
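For completeness, the update rules above translate directly into code. The following is a minimal ADMM sketch of the generic weighted-l1 solver structure (not the authors' MATLAB/CUDA implementation); the step size mu, the iteration count, and all names are illustrative:

```python
import numpy as np

def soft(a, b):
    """Soft-threshold function soft(a, b) = sign(a) * max(|a| - b, 0)."""
    return np.sign(a) * np.maximum(np.abs(a) - b, 0.0)

def weighted_su_admm(Y, A, W, lam=1e-3, mu=0.5, n_iter=300):
    """ADMM for min_X 0.5*||Y - AX||_F^2 + lam*||W ⊙ X||_1  s.t. X >= 0."""
    M, P = A.shape[1], Y.shape[1]
    U = np.zeros((M, P))
    V1, V2, V3 = A @ U, U.copy(), U.copy()
    D1, D2, D3 = np.zeros_like(V1), np.zeros_like(U), np.zeros_like(U)
    inv = np.linalg.inv(A.T @ A + 2.0 * np.eye(M))      # precomputed once
    for _ in range(n_iter):
        # U update: solution of the quadratic subproblem
        U = inv @ (A.T @ (V1 + D1) + (V2 + D2) + (V3 + D3))
        V1 = (Y + mu * (A @ U - D1)) / (1.0 + mu)       # data-fit block
        V2 = soft(U - D2, (lam / mu) * W)               # weighted-l1 block
        V3 = np.maximum(U - D3, 0.0)                    # nonnegativity block
        D1 -= A @ U - V1                                # multiplier updates
        D2 -= U - V2
        D3 -= U - V3
    return V3                                           # feasible (nonnegative) iterate
```

Returning V3 guarantees a nonnegative abundance estimate; in G-NLWSU, W would be recomputed from the nonlocal similarity matrix between outer iterations.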