No-reference remote sensing image quality assessment based on gradient-weighted natural scene statistics in spatial domain

Abstract. Considering the relatively poor real-time performance of transform-domain feature extraction and the insufficiency of existing spatial-domain feature extraction, a no-reference remote sensing image quality assessment method based on gradient-weighted spatial natural scene statistics is proposed. A 36-dimensional image feature vector is constructed by extracting the local normalized luminance features and the gradient-weighted local binary pattern features of the local normalized luminance map in three scales. First, a support vector machine classifier is obtained by learning the relationship between image features and distortion types. Then, based on this classifier, support vector regression scorers are obtained by learning the relationship between image features and image quality scores. A series of comparative experiments was carried out on the optics remote sensing image database, the LIVE database, the LIVEMD database, and the TID2013 database. Experimental results show the high accuracy of the method in distinguishing distortion types, its high consistency with subjective scores, and its high robustness for remote sensing images. In addition, experiments show the method's independence from the training database and its relatively high operation efficiency.


Introduction
Optical remote sensing imaging is widely applied in many fields, such as weather forecasting, environmental monitoring, resource detection, and military reconnaissance. The quality of remote sensing images can be affected by various factors in the imaging procedure: blur can be caused by the atmospheric environment and sensor defocus; noise, such as photon noise and shot noise, can be introduced during photoelectric sampling; and blocking artifacts tend to arise during compression and transmission. These factors degrade remote sensing images and negatively affect their practical applications. Given that perfect reference images are usually unavailable in practice, no-reference image quality assessment (NR-IQA) is of high value in research and practical applications.
In the image quality assessment field, natural scene statistics (NSS) is widely used in NR-IQA, and NSS-based algorithms can effectively evaluate image quality. Moorthy and Bovik 1 proposed the blind image quality index (BIQI), which extracts NSS features in a two-step framework consisting of support vector machine (SVM)-based distortion type classification and support vector regression (SVR)-based quality prediction; the final quality score is obtained by probabilistic weighting. BIQI extracts features only in the wavelet domain; spatial-domain features are not considered. Saad et al. 2 proposed the blind image integrity notator using DCT statistics (BLIINDS-II), which extracts NSS features in the discrete cosine transform (DCT) domain and calculates the quality score based on a Bayesian model. BLIINDS-II performs better than BIQI, but its real-time performance is relatively poor due to the DCT transformation. Liu et al. 3 proposed the spatial-spectral entropy-based quality (SSEQ) assessment method, which extracts NSS entropy features in the spatial and DCT domains. Compared with BLIINDS-II, SSEQ has better real-time performance; however, it still spends considerable time on feature extraction. Mittal et al. 4,5 proposed the blind/referenceless image spatial quality evaluator (BRISQUE), which extracts local and adjacent normalized luminance features and uses SVR to calculate the quality score. BRISQUE performs well and has high real-time performance; however, the orientation information it uses does not fully express the structural features of the image. Li et al. 6 proposed no-reference quality assessment using statistical structural and luminance features (NRSL), which extracts local normalized luminance features and local binary pattern (LBP) features of the local normalized luminance map to build the NR model. NRSL achieves high consistency between predicted and subjective scores; however, it does not extract the contrast features that are closely related to the human visual system (HVS). Liu et al. 7 proposed oriented-gradients image quality assessment (OGIQA), which extracts gradient features and uses AdaBoosting_BP to obtain the quality score. OGIQA performs well, yet its applicability to remote sensing images remains to be tested and verified.
Considering the relatively poor real-time performance of transform-domain feature extraction and the insufficiency of existing spatial-domain feature extraction, a no-reference remote sensing image quality assessment method based on gradient-weighted spatial natural scene statistics (GWNSS) is proposed in this paper. The feature vector of a remote sensing image is constructed by extracting the local normalized luminance features and the gradient-weighted LBP features of the local normalized luminance map in three scales. A two-step framework based on SVM is then used to learn the relationship between the features and the distortion types as well as the quality scores.
2 Space-Domain NSS Feature Extraction

High-quality natural images have regular statistical properties, and distortions alter the image structure as well as these statistical properties. Thus the type and degree of distortion can be characterized by the changes in statistical properties. Ruderman 8 found that the nonlinear operation of local normalization has a decorrelating effect on the image and established an NSS model based on the local normalized luminance map. The contents of remote sensing images are natural scenes, so they have regular statistical characteristics like natural images, and a similar NSS model and feature extraction method can be used for them. However, remote sensing images are richer in texture than ordinary natural images, 9 so a method suitable for ordinary natural images may not be suitable for remote sensing images, and the algorithm should be improved according to the characteristics of remote sensing images. In this paper, the proposed method extracts the local normalized luminance features F1 and the gradient-weighted LBP features F2 of the local normalized luminance map in three scales to construct a 36-dimensional (36-D) image feature vector for remote sensing images.
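The three-scale concatenation of F1 and F2 can be sketched as follows. This is a minimal illustration, not the paper's implementation: `extract_f1` and `extract_f2` are hypothetical placeholders for the per-scale extractors described below (assumed to return 2 and 10 values per scale, respectively), and simple decimation stands in for whatever downsampling the authors use.

```python
import numpy as np

def extract_features_multiscale(img, extract_f1, extract_f2, n_scales=3):
    """Concatenate F1 (6-D) and F2 (30-D) features over three scales.

    `extract_f1` and `extract_f2` are hypothetical placeholders for the
    per-scale extractors of Secs. 2.1 and 2.2; each is assumed to return
    a 1-D sequence (2 and 10 values per scale, giving 36-D in total).
    """
    f1_parts, f2_parts = [], []
    for _ in range(n_scales):
        f1_parts.append(np.asarray(extract_f1(img), dtype=float))
        f2_parts.append(np.asarray(extract_f2(img), dtype=float))
        img = img[::2, ::2]  # simple decimation to the next (coarser) scale
    return np.concatenate(f1_parts + f2_parts)
```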

Local Normalized Luminance Features
Local normalized luminance can be used as a preprocessing stage to emulate the nonlinear masking of visual perception in many image processing applications. Due to the rich texture and complex structural information of remote sensing images, local rather than global luminance normalization reduces the loss of image structure information. Therefore, in this paper, the local normalized luminance map is first determined, and then the local normalized luminance features are extracted.

Fig. 2 The histograms of local-normalized luminance maps for images in Fig. 1 in the first scale.

The histogram distribution of local normalized luminance can also be used to distinguish the different degrees of distortion of remote sensing images. As shown in Fig. 3, taking WN as an example, a reference image and five corresponding distorted images with different degrees of distortion are randomly taken from the ORSID. The first-scale local normalized luminance histograms of these remote sensing images are shown in Fig. 4, which shows that, as the degree of distortion increases (higher DMOS value), the peak value of the histogram becomes lower and the curve becomes flatter. Thus the histogram distribution of local normalized luminance can be used as an indicator of the degree of distortion of remote sensing images.

Extracting image local normalized luminance features
For an image I(x, y) of size M × N, after a local normalization operation over a window of size (2K + 1) × (2L + 1), the normalized luminance at pixel (i, j) is defined as 4,5

\hat{I}(i, j) = \frac{I(i, j) - \mu(i, j)}{\sigma(i, j) + C}, \quad (1)

where \mu(i, j) and \sigma(i, j) are the weighted local mean and standard deviation computed over the window, and C is a small constant that prevents instability when the denominator tends to zero. The normalized luminance histogram distribution of images can be fitted with a generalized Gaussian distribution (GGD) with zero mean. 8 The zero-mean GGD model is expressed as follows:

f(x; \alpha, \sigma^2) = \frac{\alpha}{2\beta\,\Gamma(1/\alpha)} \exp\left[-\left(\frac{|x|}{\beta}\right)^{\alpha}\right], \quad \beta = \sigma\sqrt{\frac{\Gamma(1/\alpha)}{\Gamma(3/\alpha)}}, \quad (2)

where \Gamma(\cdot) is the gamma function. The parameters \alpha and \sigma of the GGD determine the distribution, so the \alpha and \sigma of the normalized luminance histogram can characterize the normalized luminance. After extracting the normalized luminance map, the BRISQUE method extracts features using ordinary moments. Remote sensing images, however, contain various scenes with different terrain characteristics and image structures. L-moments can be defined for any random variable whose mean exists and, being linear functions of the data, suffer less from the effects of sampling variability, 12,13 so L-moments are used to enhance the robustness of image quality assessment. 14 For these reasons, L-moments estimation is used in this paper to enhance the robustness of the proposed method compared with that of BRISQUE. On the one hand, L-moments

Fig. 4 The histograms of local-normalized luminance map for images in Fig. 3 in the first scale.
estimation is insensitive to the different scenes of remote sensing images and is thus robust for parameter estimation across scenes. On the other hand, L-moments estimation is sensitive to the distortion in different scenes of distorted remote sensing images and can thus be used for parameter estimation of different distortion degrees. For the ordered samples X_{(1)} \le X_{(2)} \le \cdots \le X_{(n)} of an image normalized luminance histogram, the first four L-moments can be expressed as

L_1 = b_0, \quad L_2 = 2b_1 - b_0, \quad (4)

L_3 = 6b_2 - 6b_1 + b_0, \quad L_4 = 20b_3 - 30b_2 + 12b_1 - b_0, \quad (5)

where b_r denotes the r'th probability-weighted moment:

b_0 = \frac{1}{n}\sum_{i=1}^{n} X_{(i)}, \quad (7)

b_r = \frac{1}{n}\sum_{i=r+1}^{n} \frac{(i-1)(i-2)\cdots(i-r)}{(n-1)(n-2)\cdots(n-r)} X_{(i)}. \quad (8)

The parameters L_1 and L_3 are zero due to the symmetry of the GGD. Thus, in this paper, L_2 and L_4 are used to characterize the distribution of local normalized luminance, yielding the local normalized luminance features. For a distorted image, six local normalized luminance parameters can be extracted over three scales. These six parameters form a six-dimensional (6-D) vector, i.e., F1 = (f_1, f_2, \ldots, f_6). The meanings of the elements of this 6-D vector are shown in Table 1.
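The computations above can be sketched in Python. This is a hedged illustration, not the authors' code: the Gaussian window width and the stabilizing constant `c` are common BRISQUE-style defaults (assumptions), and the L-moment estimator is the standard Hosking probability-weighted-moment form.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def mscn_map(img, sigma=7 / 6, c=1.0):
    """Local normalized luminance map, in the spirit of Eq. (1).

    A Gaussian window plays the role of the (2K+1) x (2L+1) weights;
    `sigma` and the stabilizing constant `c` are common BRISQUE-style
    defaults, assumed here rather than taken from the paper.
    """
    img = np.asarray(img, dtype=np.float64)
    mu = gaussian_filter(img, sigma)
    var = gaussian_filter(img * img, sigma) - mu * mu
    return (img - mu) / (np.sqrt(np.maximum(var, 0.0)) + c)

def l_moments(x):
    """First four sample L-moments via probability-weighted moments.

    Standard Hosking estimator; the paper keeps L2 and L4 of the
    normalized luminance distribution as features at each scale.
    """
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    i = np.arange(1, n + 1)  # ranks 1..n of the ordered sample
    b = [x.mean()]           # b_0
    for r in (1, 2, 3):
        # b_r = (1/n) * sum_i x_(i) * [(i-1)...(i-r)] / [(n-1)...(n-r)]
        w = np.ones(n)
        for k in range(1, r + 1):
            w *= (i - k) / (n - k)
        b.append(np.mean(w * x))
    b0, b1, b2, b3 = b
    return (b0,
            2 * b1 - b0,
            6 * b2 - 6 * b1 + b0,
            20 * b3 - 30 * b2 + 12 * b1 - b0)
```

A constant image yields an all-zero normalized map, and a symmetric sample yields L1 = L3 = 0, consistent with the GGD symmetry argument above.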

Gradient-Weighted LBP Features of Local Normalized Luminance Map

The surface of the earth has obvious spatial characteristics, which are represented by texture in remote sensing images. Thus remote sensing images usually contain more structural information than ordinary natural images. LBP patterns can effectively express image structural features such as edges, lines, corners, and spots. The LBP map is obtained by processing the local normalized luminance map with the rotation-invariant LBP operator. On the LBP map, the value 0 stands for a bright spot in the distorted image, the value 8 stands for a flat area or a dark spot, and the values 1 to 7 stand for edges of different curvature. 15 Based on the assumption that the local normalized luminance features and the LBP features of the local normalized luminance map are independent, 15,16 combining the two kinds of features can improve the effectiveness of image quality assessment. However, LBP reflects structural information, whereas the histogram of local normalized luminance reflects the statistical distribution of image luminance; neither characterizes the contrast information of the image. Considering the high sensitivity of the HVS to contrast, contrast information is extracted by weighting the LBP features of the local normalized luminance map with the gradient. The gradient-weighted LBP features of the local normalized luminance map can express both the structural features and the local contrast features of images, so the method is better suited to remote sensing images with complex structural information.

Determination of image distortion type based on gradient-weighted LBP features of local normalized luminance map
Remote sensing images and natural images both exhibit regular natural scene statistics characteristics, and the changes of the histogram distribution of gradient-weighted LBP of the local normalized luminance map can be used to distinguish the distortion types of natural images. 17 According to these two points, our experiments verified that the distortion types of remote sensing images can be distinguished by the changes of this histogram distribution. Using the reference image and the three distorted images of different types in Fig. 1 as input, the gradient-weighted LBP histograms of the local normalized luminance map in the first scale are shown in Fig. 5.
Figure 5 shows that the LBP histogram distribution of the JP2K image is high in the middle and low on both sides. This is attributed to the blocking artifacts caused by JP2K compression, which turn flat areas into edges, i.e., the statistical probability of pixels with LBP values of 2 to 6 increases significantly. On the contrary, the LBP histogram distribution curve of WN is low in the middle and high on both sides, because WN increases the number of bright and dark spots in the image. BLUR distortion makes the distribution tend toward uniformity: although the number of bright and dark spots is reduced, the statistical probability of edge points does not change significantly. The above three types of distortion can be distinguished clearly using the gradient-weighted LBP histograms of the local normalized luminance map. Thus it can be concluded that this histogram distribution can be used as an indicator to distinguish the distortion types of remote sensing images.

Determination of image distortion degree based on image gradient-weighted LBP features of local normalized luminance map
Our experiments verified that the changes of the gradient-weighted LBP histogram distribution of the local normalized luminance map can be used to distinguish different degrees of remote sensing image distortion. As shown in Fig. 6, taking JP2K distortion as an example, a reference image and five corresponding JP2K distorted images of different degrees are randomly taken from the ORSID database. The first-scale gradient-weighted LBP histograms of the local normalized luminance maps of these images are shown in Fig. 7. As the degree of JP2K distortion increases (higher DMOS), the blocking artifacts become more severe, flat areas in the image become edges, the statistical probability of pixels with an LBP value of 8 decreases, and the statistical probability of pixels with LBP values of 2 to 6 increases. At the same time, with increasing severity of JP2K distortion, the blur introduced by the blocking effect exacerbates the decrease in the statistical probability of pixels with LBP values of 1 and 8. Thus it can be concluded that the gradient-weighted LBP histogram distribution of the local normalized luminance map can reflect the distortion degree of JP2K images.

Extracting gradient-weighted LBP feature elements of local normalized luminance map
The LBP operation is performed on the local normalized luminance map obtained according to Eq. (1). The local rotation-invariant uniform LBP value is defined as 17

\mathrm{LBP}^{riu2}_{J,R}(i, j) = \begin{cases} \sum_{p=0}^{J-1} s(g_p - g_c), & U(\mathrm{LBP}_{J,R}) \le 2 \\ J + 1, & \text{otherwise} \end{cases}, \quad (9)

where g_c is the value of the central pixel (i, j), g_p (p = 0, \ldots, J - 1) are the values of its J neighbors on a circle of radius R, s(x) = 1 for x \ge 0 and 0 otherwise, and U(\mathrm{LBP}_{J,R}) counts the number of 0/1 transitions in the circular neighbor pattern. After the rotation-invariant uniform LBP operation of Eq. (9), there are J + 2 different values in the LBP map, that is, 0, 1, \ldots, J + 1. The rotation-invariant LBP feature expresses detailed image structure information and thus better distinguishes the difference between the central pixel and the surrounding pixels, making it suitable for remote sensing images with complex structural information.
The eye is more sensitive to image features with higher contrast, and the gradient can characterize image contrast information, so the gradient is used to weight the LBP histogram of the local normalized luminance map. The gradient weighting distinguishes the degree of difference between the center pixel and the surrounding pixels. The gradient-weighted LBP histogram is calculated by accumulating the gradients of pixels with the same LBP value:

h(k) = \sum_{i=1}^{M} \sum_{j=1}^{N} I'(i, j)\, g\big[\mathrm{LBP}^{riu2}_{J,R}(i, j), k\big], \quad (10)

where I'(i, j) is the gradient magnitude at pixel (i, j), k \in \{0, 1, \ldots, J + 1\}, and g[a, b] = 1 if a = b and 0 otherwise. In this paper, the number of neighboring pixels J is 8 and the neighborhood radius R is 1, so there are 10 different values in the LBP map. Thus the gradient-weighted LBP features can be represented by the gradient-weighted statistical probabilities of these 10 values. The parameters are extracted in three scales, so the 30-dimensional (30-D) vector of each image can be denoted as F2 = (f_7, f_8, \ldots, f_{36}). The meanings of the elements of this 30-D vector are shown in Table 2.
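As a concrete sketch of Eqs. (9) and (10), the following numpy-only routine computes a gradient-weighted riu2 LBP histogram for J = 8, R = 1. It is an illustration under stated assumptions: `np.gradient` stands in for whatever gradient operator the paper uses, and the histogram is normalized to sum to one.

```python
import numpy as np

def gradient_weighted_lbp_hist(mscn):
    """Gradient-weighted riu2 LBP histogram (J = 8, R = 1), cf. Eqs. (9)-(10).

    numpy-only sketch: 8-neighbor comparisons give the LBP code, uniform
    patterns map to their bit count (0..8) and non-uniform ones to 9, and
    each pixel votes with its gradient magnitude (np.gradient here stands
    in for whatever gradient operator the paper uses).
    """
    x = np.asarray(mscn, dtype=float)
    c = x[1:-1, 1:-1]  # central pixels (image borders dropped)
    # 8 neighbors at radius 1, in circular order.
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    bits = np.stack([
        (x[1 + di:x.shape[0] - 1 + di, 1 + dj:x.shape[1] - 1 + dj] >= c)
        for di, dj in shifts
    ]).astype(int)
    ones = bits.sum(axis=0)  # number of neighbors >= center
    # Uniformity U: 0/1 transitions around the closed circle of bits.
    ring = np.concatenate([bits, bits[:1]], axis=0)
    u = np.abs(np.diff(ring, axis=0)).sum(axis=0)
    lbp = np.where(u <= 2, ones, 9)  # riu2 codes: 0..8 uniform, 9 otherwise
    gy, gx = np.gradient(x)
    grad = np.hypot(gx, gy)[1:-1, 1:-1]
    # Accumulate gradient magnitude per LBP code, then normalize.
    h = np.array([grad[lbp == k].sum() for k in range(10)])
    return h / (h.sum() + 1e-12)
```

The 10 bins of this histogram, computed at each of the three scales, would give the 30 elements of F2.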

Method of No-Reference Image Quality Assessment Based on SVM

The proposed method extracts F1 and F2 of the local normalized luminance map from known distorted images and constructs the corresponding feature matrix. The feature matrix and the distortion types of the known distorted images are used to train an SVM classifier that determines the image distortion type and the probability of each distortion type. On the basis of the SVM classifier, the feature matrix and the subjective scores are used to train SVR scorers that determine the image distortion degree. The local normalized luminance features F1 and the gradient-weighted LBP features F2 of the to-be-evaluated distorted images are then extracted in the same way, and the constructed feature matrix of the to-be-evaluated distorted image is entered into the trained SVM model to derive the distortion type and the objective score.

SVM Image Distortion Classification Algorithm
SVM is widely applied to learn the mapping function between the feature space and the quality measure. 4,6 For a training set {F_train, Z_train}, F_train is the image feature matrix of the training set and Z_train is the distortion type matrix of the training set; its k'th row vector Z^k_train represents the distortion type (JP2K, WN, or BLUR) of the k'th image in the training set.
Given parameters C > 0 and \epsilon > 0, the standard form of the support vector machine is represented as

\min_{\omega, b, \xi, \xi^*} \; \frac{1}{2}\omega^{T}\omega + C\sum_{k}\big(\xi_k + \xi_k^*\big), \quad (13)

with the corresponding constraint conditions

z_k - \omega^{T}\phi(F_k) - b \le \epsilon + \xi_k, \qquad \omega^{T}\phi(F_k) + b - z_k \le \epsilon + \xi_k^*, \quad (14)

\xi_k, \xi_k^* \ge 0, \quad k = 1, 2, \ldots, K_I \; (I \in \{1, 2, 3\}), \quad (15)

where \omega represents the weight matrix to be trained, b is a bias constant, and \phi(\cdot) is the mapping induced by the radial basis function kernel, which is adopted here. Taking the training set {F_train, Z_train} as the input, the SVM classifier is trained; the constructed image feature matrix F_test of the test set is then entered into the trained SVM classifier to obtain the distortion type matrix T_p of the test set images and the determination probability T = (T_1, T_2, T_3) of each type of distortion.

SVR Image Quality Score Algorithm
The SVR image quality scoring algorithm is basically the same as the SVM distortion classification algorithm described above, except for the form of the input and the output. Taking {F^1_train, Z^1_train}, {F^2_train, Z^2_train}, and {F^3_train, Z^3_train} as input, three SVR scorers are trained for JP2K, WN, and BLUR distortion, respectively. After the SVR scorers are trained, the constructed image feature matrix F_test of the test set is entered into them to obtain the objective quality scores S = (S_1, S_2, S_3) for each type of distortion; the final objective quality score S_p is then obtained by weighting with the distortion type probabilities, i.e., S_p = \sum_{i=1}^{3} T_i S_i.
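A minimal sketch of the two-step framework using scikit-learn (an assumption — the paper's implementation and hyperparameters are not specified here): an RBF-kernel SVC with probability outputs plays the classifier, one RBF SVR per distortion type plays the scorers, and the final score is the probability-weighted sum.

```python
import numpy as np
from sklearn.svm import SVC, SVR

def train_two_step(F_train, types_train, scores_train):
    """Two-step sketch: SVM distortion classifier, then one SVR per type.

    scikit-learn's RBF-kernel SVC/SVR stand in for the models in the
    paper; hyperparameters are library defaults, not the paper's choices.
    """
    clf = SVC(kernel="rbf", probability=True).fit(F_train, types_train)
    scorers = {t: SVR(kernel="rbf").fit(F_train[types_train == t],
                                        scores_train[types_train == t])
               for t in np.unique(types_train)}
    return clf, scorers

def predict_score(clf, scorers, F_test):
    """Probability-weighted final score: S_p = sum_i T_i * S_i."""
    T = clf.predict_proba(F_test)                    # (n, n_types)
    S = np.column_stack([scorers[t].predict(F_test)  # per-type scores
                         for t in clf.classes_])
    return (T * S).sum(axis=1)
```

Because `predict_proba` columns follow `clf.classes_`, stacking the per-type SVR predictions in the same order keeps the weights and scores aligned.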

Experimental Results and Analysis
To illustrate the subjective consistency of the proposed GWNSS method, experiments with GWNSS and other existing IQA methods are performed on the ORSID database, 10 the LIVE database, 18,19 and the LIVEMD database, 20 respectively. The subjective consistency of GWNSS is verified by four indices: root-mean-squared error (RMSE), Pearson linear correlation coefficient (PLCC), Spearman rank-order correlation coefficient (SROCC), and Kendall rank-order correlation coefficient (KROCC). To verify that the performance of GWNSS is not restricted to a specific database, database independence experiments are performed on the LIVE and TID2013 databases, 21 with SROCC as the evaluation index. All experiments were performed on a Lenovo desktop computer with an Intel Core i3-2130 processor (3.4 GHz) and 4 GB of memory, running Windows 7; the experimental platform is MATLAB R2015a.
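The four indices can be computed with scipy as follows. This is a bare-bones sketch; note that published IQA comparisons usually fit a nonlinear logistic mapping to the objective scores before computing PLCC and RMSE, which is omitted here.

```python
import numpy as np
from scipy import stats

def consistency_indices(objective, subjective):
    """PLCC, SROCC, KROCC, and RMSE between objective and subjective scores.

    Bare-bones sketch: published IQA comparisons usually fit a logistic
    mapping to the objective scores before PLCC/RMSE, omitted here.
    """
    o = np.asarray(objective, dtype=float)
    s = np.asarray(subjective, dtype=float)
    plcc = stats.pearsonr(o, s)[0]
    srocc = stats.spearmanr(o, s)[0]
    krocc = stats.kendalltau(o, s)[0]
    rmse = float(np.sqrt(np.mean((o - s) ** 2)))
    return plcc, srocc, krocc, rmse
```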

Comparison of GWNSS Performance in One-Step and Two-Step Frameworks

In this paper, a one-step framework similar to that proposed in Ref. 3 is also investigated. In this approach, the feature extraction is the same as in the two-step framework. Instead of using an SVM classifier followed by SVR scorers, the one-step framework directly constructs a single SVR scorer using the feature matrix and subjective score matrix of all distorted images in the training set. As shown in Table 3, the SROCC of one-step GWNSS is slightly lower than that of two-step GWNSS. The reason is that, under the two-step framework, different parameters can be selected for the SVR scorer of each distortion type, so each scorer can more accurately predict the quality of its corresponding distortion type. Under the one-step framework, however, the selected parameters are a good compromise for all types of distorted images in the training set rather than the optimum for a specific distortion type.
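For contrast with the two-step sketch above, the one-step variant collapses to a single regressor over all training images (again using scikit-learn as a stand-in, with illustrative default parameters):

```python
import numpy as np
from sklearn.svm import SVR

def train_one_step(F_train, scores_train):
    """One-step framework: a single RBF SVR maps the 36-D features
    directly to quality scores, with no distortion-type classification."""
    return SVR(kernel="rbf").fit(F_train, scores_train)
```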

Comparison of Subjective Consistency with Other Objective IQA Methods in the ORSID Database

The subjective consistency performance of four FR-IQA methods [peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), 22 feature similarity index (FSIM), 23 and visual information fidelity (VIF) 24] and six NR-IQA methods [BLIINDS-II, 2 BRISQUE, 4,5 SSEQ, 3 blind image quality assessment metric based on high-order derivatives (BHOD), 25 blind image quality assessment (BIQA), 26 and NRSL 6] for images of the three distortion types in the ORSID database is shown in Table 4. The performance of GWNSS is compared with those of the above 10 IQA methods. The subjective consistency is assessed by four indices: SROCC, PLCC, KROCC, and RMSE. The experiments are repeated 1000 times to obtain the median subjective consistency performance. In Table 4, the top three correlation indices within each distortion category are marked in bold and the best indices are highlighted in red.
Table 4 shows that the proposed GWNSS and the state-of-the-art methods NRSL and BIQA have high subjective consistency. The performance of the 11 methods for the 3 types of distorted images is evaluated by 4 correlation coefficient indices, yielding 12 indices per method. The proposed GWNSS method has all 12 indices in the top 3 and 8 indices in first place; BIQA and NRSL have 8 and 7 of their 12 indices in the top 3, respectively. Taking all distorted images in the ORSID database together, all four correlation coefficient indices of the proposed GWNSS method are the best among all IQA methods. The proposed GWNSS method achieves good assessment results for all types of distortion and thus exhibits high robustness to different distortions. Even when compared with the FR-IQA methods, GWNSS still shows relatively high subjective consistency; its performance is superior to that of the PSNR, SSIM, FSIM, and VIF methods.
The scatter plots of the subjective and objective consistency scores of four well-performing methods (GWNSS, BRISQUE, NRSL, and BIQA) are shown in Fig. 8. The x axis denotes the objective score obtained by the image quality assessment method, and the y axis denotes the subjective score given by human observers. Figure 8 shows that the scatter points of GWNSS, BRISQUE, NRSL, and BIQA are concentrated close to the fitting curves, indicating high objective-subjective consistency.

Comparison of Subjective Consistency with Other Objective IQA Methods in the LIVE Database and the LIVEMD Database

There are 29 reference images and 779 distorted images in the LIVE database. The distortion types include JP2K, JPEG, WN, BLUR, and fast fading (FF), and the subjective DMOS values of the distorted images are also given. There are 15 reference images and 450 multiply distorted images in the LIVEMD database. The distortion types include BLUR followed by JPEG (BJ) and BLUR followed by noise (BN), and the subjective DMOS values of the multiply distorted images are given as well.
The subjective consistency performance of the four FR-IQA methods (PSNR, SSIM, 22 FSIM, 23 and VIF 24), the six NR-IQA methods (BLIINDS-II, 2 BRISQUE, 4,5 SSEQ, 3 BHOD, 25 BIQA, 26 and NRSL 6), and one deep-learning-based method, deep learning for blind IQA (DeepBIQ), 27 in the LIVE database is shown in Table 5. The performance indices of these methods in the LIVEMD database are shown in Table 6. In each run, 80% of all distorted images are randomly selected as the training set and 20% as the test set; the experiments are repeated 1000 times to obtain the median subjective consistency performance.
Tables 5 and 6 show that the proposed GWNSS method has high subjective consistency. The performance of the 11 methods for the 5 types of distorted images in the LIVE database is evaluated by 4 correlation coefficient indices, yielding 20 indices per method; the proposed GWNSS has 16 of the 20 indices in the top 3 of the respective distortion categories. Taking all distorted images in the LIVEMD database together, all four correlation coefficient indices of the proposed GWNSS method are the best among all IQA methods. Even when compared with the FR-IQA methods, GWNSS still shows relatively high subjective consistency: it is superior to PSNR and close to SSIM, FSIM, and VIF in the LIVE database, and superior to PSNR, SSIM, FSIM, and VIF in the LIVEMD database. Taking all distorted images in the LIVE database together, the KROCC and RMSE of the proposed GWNSS method are the best among all IQA methods, and its SROCC and PLCC are merely 0.01 lower than those of the deep-learning-based DeepBIQ. The reason is that the features extracted by the CNN-based method are sufficient, leading to good performance. However, GWNSS extracts features and conducts training more efficiently than DeepBIQ; in addition, GWNSS has low hardware requirements and can be used in a wider range of applications.
The scatter plots of the subjective and objective consistency scores of the GWNSS, BRISQUE, NRSL, and BIQA methods are shown in Fig. 9. The x axis denotes the objective score obtained by the image quality assessment method, and the y axis denotes the subjective score given by human observers. Figure 9 shows that the scatter points of the above four NR-IQA methods are concentrated close to the fitting curves, indicating high objective-subjective consistency.

Database Independence Experiments
To verify that the performance of GWNSS is not restricted to the particular database used, database independence experiments are performed on the LIVE database and the TID2013 database. 21 From the TID2013 database, 24 reference images and 480 distorted images with the same 4 common distortion categories (JP2K, JPEG, WN, and BLUR) are selected for the independence experiments. Distorted images in the LIVE database are used to train an SVM model, and the selected distorted images in the TID2013 database are tested on the trained model, with SROCC as the testing index. The subjective consistency performance of the four FR-IQA methods (PSNR, SSIM, 22 FSIM, 23 and VIF 24) and the six NR-IQA methods (BLIINDS-II, 2 BRISQUE, 4,5 SSEQ, 3 BHOD, 25 BIQA, 26 and NRSL 6) for images of the four distortion types in the TID2013 database is shown in Table 7. Conversely, distorted images in the TID2013 database are used to train the model, and distorted images in the LIVE database are tested; the results are shown in Table 8.

The mean time spent on feature extraction for all images in the ORSID database by the five well-performing NR-IQA methods (BLIINDS-II, 2 BRISQUE, 4,5 SSEQ, 3 BIQA, 26 and NRSL 6) and GWNSS is shown in Table 10. Table 10 shows that the mean time spent by the proposed GWNSS method is far less than that of SSEQ and BLIINDS-II. On average, the proposed GWNSS method spent only 0.1790 s more than the BRISQUE method and 0.2114 s more than the BIQA method. Thus the proposed GWNSS method has both high evaluation accuracy and high operation efficiency.

Conclusion
In this paper, a 36-D image feature vector consisting of the local normalized luminance features and the gradient-weighted LBP features of the local normalized luminance map in three scales is constructed. First, the feature matrix and the corresponding distortion types are used to train the SVM classifier.
Then, on the basis of the SVM classifier, the feature matrix and the corresponding DMOS values are used to train the SVR scorers. A series of comparative experiments was carried out on the ORSID database, the MDORSID database, the LIVE database, the LIVEMD database, and the TID2013 database. Experimental results show that the proposed method has high accuracy in distortion type classification of remote sensing images, high consistency with subjective scores, and high robustness to different types of distortion. In addition, the efficacy of the proposed method is not restricted to a particular database, and its operation efficiency is high. The research in this paper mainly focuses on singly distorted images; the assessment of multiply distorted images, which is of greater practical significance, will be addressed in future research.

2.1.1 Determination of image distortion type based on image local normalized luminance features

Remote sensing images and natural images both exhibit regular natural scene statistics characteristics. According to the literature, 4,5 the distortion types of natural images can be distinguished by changes of the histogram distribution of local normalized luminance. Starting from these two points, our experiments verified that the distortion type of remote sensing images can be distinguished by the change of the histogram distribution of local normalized luminance. As shown in Fig. 1, a reference image and three corresponding distorted images of different types with similar difference mean opinion scores (DMOS) are randomly selected from the optics remote sensing image database (ORSID). 10 The distortion types include JP2K compression, Gaussian white noise (WN), and Gaussian blur (BLUR). Local normalized luminance features are extracted from the four images in three scales, and the histograms of the local normalized luminance maps in the first scale are shown in Fig. 2. Figure 2 shows that the histogram distributions of local normalized luminance differ between different types of distorted images; the peak values of the histograms differ with the kind of distortion. For JP2K and BLUR, although the distribution curves are similar in overall shape, their peak values increase to different degrees. In contrast to the higher peak values of JP2K and BLUR, WN leads to a lower peak value and a flatter curve than the reference image. The above three types of distortion can thus be distinguished by the histogram of local normalized luminance, and the differences in this histogram distribution can reflect differences in the distortion types of remote sensing images.

Fig. 5 Gradient-weighted LBP histograms of local-normalized luminance map for images in Fig. 1 in the first scale.

Fig. 7 Gradient-weighted LBP histograms of local-normalized luminance map for images in Fig. 6 in the first scale.

Fig. 8 Scatter plots of the subjective and objective consistency scores of GWNSS, BRISQUE, NRSL, and BIQA methods in the ORSID database.

Fig. 10 Accuracy of the distortion type judgment of the GWNSS method in the ORSID database.

Table 1 Meanings of image local-normalized luminance feature vector elements.

Table 2 Meanings of gradient-weighted LBP feature vector elements of local normalized luminance map.

Table 3 Subjective consistency comparison of the GWNSS method under the one-step and two-step frameworks for all distorted images in the ORSID database.

Table 4 Comparison of the subjective consistency of different IQA methods in the ORSID database.

Table 5 Comparison of the subjective consistency of different NR-IQA methods in the LIVE database.
Yan et al.: No-reference remote sensing image quality assessment. . .

Table 5 (Continued). The values in the rows of DeepBIQ are the experimental results reported in the original paper, which gives only the overall SROCC and PLCC for all distorted images in the LIVE database, not scores for specific distortions.

Table 6 Comparison of the subjective consistency of different NR-IQA methods in the LIVEMD database.

Table 7 Comparison of the subjective consistency of different NR-IQA methods with the LIVE database as training set and the TID2013 database as test set.

Table 8 Comparison of the subjective consistency of different NR-IQA methods with the TID2013 database as training set and the LIVE database as test set.

Table 9 Accuracy of the distortion type judgment of the GWNSS method in the ORSID database.

Table 10 Mean time spent extracting features from all images in the ORSID database by different NR-IQA methods.