No-reference remote sensing image quality assessment based on gradient-weighted natural scene statistics in spatial domain
Junhua Yan, Xuehan Bai, Yongqi Xiao, Yin Zhang, Xiangyang Lv
Open Access | Published: 12 February 2019
Abstract
Considering the relatively poor real-time performance of extracting transform-domain image features and the insufficiency of spatial-domain feature extraction, a no-reference remote sensing image quality assessment method based on gradient-weighted spatial natural scene statistics is proposed. A 36-dimensional image feature vector is constructed by extracting the local normalized luminance features and the gradient-weighted local binary pattern features of the local normalized luminance map in three scales. First, a support vector machine classifier is obtained by learning the relationship between image features and distortion types. Then, based on the support vector machine classifier, the support vector regression scorer is obtained by learning the relationship between image features and image quality scores. A series of comparative experiments were carried out on the optical remote sensing image database (ORSID), the LIVE database, the LIVEMD database, and the TID2013 database, respectively. Experimental results show the high accuracy of the method in distinguishing distortion types, its high consistency with subjective scores, and its high robustness for remote sensing images. In addition, experiments also show that the method is independent of the training database and has relatively high operation efficiency.

1. Introduction

Optical remote sensing imaging is widely applied in many fields such as weather forecasting, environmental monitoring, resource detection, and military reconnaissance. The quality of remote sensing images can be affected by various factors in the imaging procedure. Blur can be caused by the atmospheric environment and defocus of the sensor. Noise such as photon noise and shot noise can be introduced in the photoelectric sampling process. Blocking artifacts tend to arise during compression and transmission. These factors degrade remote sensing images and negatively affect their practical applications. In view of the fact that perfect reference images are usually unavailable in practice, no-reference image quality assessment (NR-IQA) is of high value in research and practical applications.

In the image quality assessment field, natural scene statistics (NSS) is widely used in NR-IQA, and NSS-based algorithms can effectively evaluate image quality. Moorthy and Bovik1 proposed the blind image quality index (BIQI), which extracts NSS features within a two-step framework consisting of support vector machine (SVM)-based distortion type classification and support vector regression (SVR)-based quality prediction; the final quality score is obtained by probabilistic weighting. BIQI only extracts features in the wavelet domain, and spatial domain features are not considered. Saad et al.2 proposed the blind image integrity notator using DCT statistics (BLIINDS-II), which extracts NSS features in the discrete cosine transform (DCT) domain and calculates the quality score based on a Bayesian model. BLIINDS-II performs better than BIQI, but its real-time performance is relatively poor due to the DCT transformation. Liu et al.3 proposed the spatial–spectral entropy-based quality (SSEQ) assessment method, which extracts NSS entropy features in the spatial and DCT domains. Compared with BLIINDS-II, SSEQ has higher real-time performance, but it still spends considerable time extracting features. Mittal et al.4,5 proposed the blind/referenceless image spatial quality evaluator (BRISQUE), which extracts local and adjacent normalized luminance features and uses SVR to calculate the quality score. BRISQUE performs well and has high real-time performance; however, the orientation information used in BRISQUE does not fully express the structural features of the image. Li et al.6 proposed a no-reference quality assessment method using statistical structural and luminance features (NRSL), which extracts local normalized luminance features and local binary pattern (LBP) features of the local normalized luminance map to build the NR model. NRSL achieves high consistency between predicted scores and subjective scores; however, the contrast features that are closely related to the human visual system (HVS) are not extracted. Liu et al.7 proposed oriented gradients image quality assessment (OGIQA), which extracts gradient features and uses an AdaBoosting back-propagation neural network to obtain the quality score. OGIQA performs well, yet its applicability to remote sensing images remains to be tested and verified.

Considering the relatively poor real-time performance when extracting transform-domain image features and the insufficiency of spatial domain features extraction, a no-reference remote sensing image quality assessment method based on gradient-weighted spatial natural scene statistics (GWNSS) is proposed in this paper. The feature vector of remote sensing image is constructed by extracting local normalized luminance features and gradient-weighted LBP features of local normalized luminance map in three scales. A two-step framework based on SVM is then used to obtain the relationship between features and distortion types as well as quality scores.

2. Space-Domain NSS Feature Extraction

High-quality natural images have regular statistical properties, and distortions alter the image structure as well as these statistical properties. Thus the type and degree of distortion can be characterized by the changes in statistical properties. Ruderman8 found that the nonlinear operation of local normalization has a decorrelating effect on the image and established an NSS model based on the local normalized luminance map. The contents of remote sensing images are natural scenes, so they have regular statistical characteristics just as natural images do, and a similar NSS model and feature extraction method can be used for remote sensing images. However, remote sensing images are richer in texture than ordinary natural images,9 so a method suitable for ordinary natural images may not be suitable for remote sensing images, and the algorithm should be adapted to the characteristics of remote sensing images. In this paper, for remote sensing images, the proposed method extracts local normalized luminance features F1 and gradient-weighted LBP features F2 of the local normalized luminance map in three scales to construct a 36-dimensional (36-D) image feature vector F = (F1, F2)^T.

2.1. Local Normalized Luminance Features

Local luminance normalization is used as a preprocessing stage to emulate the nonlinear masking of visual perception in many image processing applications. Because remote sensing images have rich texture and complex structural information, local rather than global luminance normalization reduces the loss of image structure information. Therefore, in this paper, the local normalized luminance map is first computed, and then the local normalized luminance features are extracted.

2.1.1. Determination of image distortion type based on image local normalized luminance features

Remote sensing images and natural images both exhibit regular natural scene statistics characteristics. According to the literature,4,5 distortion types of natural images can be distinguished by changes in the histogram distribution of local normalized luminance. Starting from these two points, our experiments verified that the distortion type of remote sensing images can also be distinguished by changes in the histogram distribution of local normalized luminance. As shown in Fig. 1, a reference image and the corresponding three distorted images of different types with similar difference mean opinion scores (DMOS) are randomly selected from the optical remote sensing image database (ORSID).10 The distortion types include JP2K compression, Gaussian white noise (WN), and Gaussian blur (BLUR). Local normalized luminance features are extracted from the four images in three scales. The histograms of the local normalized luminance maps in the first scale are shown in Fig. 2.

Fig. 1

Reference image and the corresponding three different types of distorted images: (a) reference image; (b) JP2K, DMOS=72.01; (c) WN, DMOS=75.11; and (d) blur, DMOS=66.67.


Fig. 2

The histograms of local-normalized luminance maps for images in Fig. 1 in the first scale.


Figure 2 shows that the histogram distributions of local normalized luminance differ for different types of distorted images. The peak value of the histogram differs with the kind of distortion. For JP2K and BLUR, although their distribution curves are similar in overall shape, the peak value increases by different amounts. In contrast to the higher peak values of JP2K and BLUR, WN leads to a lower peak value and a flatter curve than the reference image. The above three types of distortion can therefore be distinguished by the histogram of local normalized luminance. Thus the differences in the histogram distribution of local normalized luminance can reflect differences in the distortion types of remote sensing images.

2.1.2. Determination of image distortion degree based on image local normalized luminance features

Our experiments verified that changes in the histogram distribution of local normalized luminance can also be used to distinguish different degrees of distortion in remote sensing images. As shown in Fig. 3, taking WN as an example, a reference image and the five corresponding distorted images with different degrees of distortion are randomly taken from the ORSID. The first-scale local normalized luminance histograms of the remote sensing images are shown in Fig. 4. With increasing degree of distortion (higher DMOS value), the peak value of the histogram becomes lower and the curve becomes flatter. Thus the histogram distribution of local normalized luminance can be used as an indicator of the degree of distortion for remote sensing images.

Fig. 3

Reference image and the corresponding five different degrees of WN distorted images: (a) reference image, (b) DMOS=33.55, (c) DMOS=37.59, (d) DMOS=44.60, (e) DMOS=49.93, and (f) DMOS=61.94.


Fig. 4

The histograms of local-normalized luminance map for images in Fig. 3 in the first scale.


2.1.3. Extracting image local normalized luminance features

For an image I(x,y) of size M×N, after a local normalization operation over a (2K+1)×(2L+1) window, the normalized luminance at pixel (i,j) is defined as4,5

Eq. (1)

\hat{I}(i,j) = \frac{I(i,j) - \mu(i,j)}{\sigma(i,j) + C},

where \mu(i,j) and \sigma(i,j) denote the local mean and standard deviation of I over the (2K+1)\times(2L+1) neighborhood centered at (i,j), and C is a small constant that keeps the denominator away from zero.
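
As an illustration of Eq. (1) (a minimal sketch, not the authors' implementation), the local normalized luminance map can be computed as follows; a uniform (2K+1)×(2L+1) window for the local mean and standard deviation and the choice C = 1 are assumptions.

```python
# Minimal sketch of Eq. (1): local normalized luminance map.
# Assumptions: a uniform (2K+1)x(2L+1) window for mu and sigma, and C = 1;
# the paper's exact window weights are not restated here.
import numpy as np
from scipy.ndimage import uniform_filter

def local_normalized_luminance(img, K=3, L=3, C=1.0):
    img = np.asarray(img, dtype=np.float64)
    size = (2 * K + 1, 2 * L + 1)
    mu = uniform_filter(img, size=size)                  # local mean mu(i, j)
    mu_sq = uniform_filter(img * img, size=size)
    sigma = np.sqrt(np.maximum(mu_sq - mu * mu, 0.0))    # local std sigma(i, j)
    return (img - mu) / (sigma + C)                      # Eq. (1)
```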

The normalized luminance histogram distribution of images can be fitted with a generalized Gaussian distribution (GGD) with mean of zero.8 The zero-mean GGD model is expressed as follows:

Eq. (2)

f(x;\alpha,\sigma^{2}) = \frac{\alpha}{2\beta\,\Gamma(1/\alpha)} \exp\left[-\left(\frac{|x|}{\beta}\right)^{\alpha}\right], \qquad \beta = \sigma\sqrt{\frac{\Gamma(1/\alpha)}{\Gamma(3/\alpha)}},

where \Gamma(\cdot) is the gamma function.
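
For reference, a minimal sketch of the zero-mean GGD density of Eq. (2), under the standard parametrization in which β is derived from α and σ as stated above:

```python
# Sketch of the zero-mean GGD density of Eq. (2).
import numpy as np
from scipy.special import gamma

def ggd_pdf(x, alpha, sigma):
    beta = sigma * np.sqrt(gamma(1.0 / alpha) / gamma(3.0 / alpha))
    return alpha / (2.0 * beta * gamma(1.0 / alpha)) * np.exp(-(np.abs(x) / beta) ** alpha)
```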

The parameters α and σ of the GGD characterize the distribution; therefore, α and σ of the normalized luminance histogram distribution can represent the character of the normalized luminance. After extracting the normalized luminance map, the BRISQUE method extracts features using ordinary moments. In remote sensing images, there are various scenes with different terrain characteristics and image structures. L-moments can be defined for any random variable whose mean exists, and being linear functions of the data, they suffer less from the effects of sampling variability. Thus L-moments are more robust than conventional moments to outliers in the data.11–13 L-moments have accordingly been used to enhance the robustness of image quality assessment.14 For these reasons, L-moments estimation is used in this paper to enhance the robustness of the proposed method compared with that of BRISQUE. On the one hand, L-moments estimation is insensitive to the different scenes of remote sensing images and is thus robust for parameter estimation across scenes. On the other hand, it is sensitive to the distortion of different scenes in distorted remote sensing images and can thus be used for parameter estimation at different distortion degrees. For the normalized luminance samples X_i, i = 1, 2, \ldots, n, of an image, sorted in ascending order, the first four L-moments can be expressed as

Eq. (3)

L_1 = b_0,

Eq. (4)

L_2 = 2b_1 - b_0,

Eq. (5)

L_3 = 6b_2 - 6b_1 + b_0,

Eq. (6)

L_4 = 20b_3 - 30b_2 + 12b_1 - b_0,

where b_r denotes the r'th probability-weighted moment and can be expressed as

Eq. (7)

b_0 = \frac{1}{n}\sum_{i=1}^{n} X_i,

Eq. (8)

b_r = \frac{1}{n}\sum_{i=r+1}^{n} \frac{(i-1)(i-2)\cdots(i-r)}{(n-1)(n-2)\cdots(n-r)}\, X_i.

The parameters L_1 and L_3 are zero due to the symmetry of the GGD. Thus in this paper, L_2 and L_4 are used to characterize the distribution of local normalized luminance, yielding the local normalized luminance features. For a distorted image, six local normalized luminance parameters are extracted in three scales (two per scale). These six parameters form a six-dimensional (6-D) vector F1 = (f_1, f_2, \ldots, f_6). The meanings of the elements in this 6-D vector are shown in Table 1.

Table 1

Meanings of image local-normalized luminance feature vector elements.

Vector elements    Meaning
f1–f3              The L2 linear moments in the three scales
f4–f6              The L4 linear moments in the three scales
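
To make the feature computation concrete, the following minimal Python sketch (an illustration, not the authors' code) evaluates Eqs. (3)–(8) on a set of samples; in the proposed method, only L2 and L4 of the local normalized luminance values are kept at each scale.

```python
# Sketch of Eqs. (3)-(8): the first four L-moments via probability-weighted moments.
# The sample is sorted in ascending order before the weights are applied.
import numpy as np

def l_moments(x):
    x = np.sort(np.asarray(x, dtype=np.float64).ravel())
    n = x.size
    i = np.arange(1, n + 1)

    def b(r):
        # b_r = (1/n) * sum_i [(i-1)...(i-r)] / [(n-1)...(n-r)] * X_i
        w = np.ones(n)
        for m in range(1, r + 1):
            w *= (i - m) / (n - m)
        return np.mean(w * x)

    b0, b1, b2, b3 = b(0), b(1), b(2), b(3)
    return (b0,                                  # L1
            2 * b1 - b0,                         # L2
            6 * b2 - 6 * b1 + b0,                # L3
            20 * b3 - 30 * b2 + 12 * b1 - b0)    # L4
```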

2.2. Gradient-Weighted LBP Features of Local Normalized Luminance Map

The surface of the earth has obvious spatial characteristics, which are represented as texture in remote sensing images. Thus remote sensing images usually carry more structural information than ordinary natural images. LBP patterns can effectively express image structural features, such as edges, lines, corners, and spots. The LBP map is obtained by processing the local normalized luminance map with the rotation-invariant LBP operator. On the LBP map, the value 0 stands for a bright spot in the distorted image, the value 8 stands for a flat area or dark spot, and the values 1 to 7 stand for edges of different curvature.15 Based on the assumption that the local normalized luminance features and the LBP features of the local normalized luminance map are independent,15,16 the combination of the two kinds of features can improve the effectiveness of image quality assessment. However, LBP reflects structural information, whereas the histogram of local normalized luminance reflects the statistical distribution of image luminance; neither can characterize the contrast information of the image. Considering the high sensitivity of the HVS to contrast, contrast information is extracted by weighting the LBP features of the local normalized luminance map with the gradient. The gradient-weighted LBP features of the local normalized luminance map can express both structural features and local contrast features of images, so the method can be better applied to remote sensing images with complex structural information.

2.2.1. Determination of image distortion type based on gradient-weighted LBP features of local normalized luminance map

Both remote sensing images and natural images exhibit regular natural scene statistics characteristics, and changes in the histogram distribution of gradient-weighted LBP of the local normalized luminance map can be used to distinguish the distortion types of natural images.17 Based on these two points, our experiments verified that the distortion types of remote sensing images can be distinguished by changes in the histogram distribution of gradient-weighted LBP of the local normalized luminance map. Using the reference image and the three distorted images of different types in Fig. 1 as input, the gradient-weighted LBP histograms of the local normalized luminance maps in the first scale are shown in Fig. 5.

Fig. 5

Gradient-weighted LBP histograms of local-normalized luminance map for images in Fig. 1 in the first scale.


Figure 5 shows that the LBP histogram distribution of the JP2K image is high in the middle and low on both sides. This is attributed to the blocking effect caused by JP2K, which turns flat areas into edges, i.e., the statistical probability of pixels with LBP values of 2 to 6 increases significantly. On the contrary, the LBP histogram distribution curve of WN is low in the middle and high on both sides because WN increases the number of bright and dark spots in the image. BLUR distortion makes the distribution tend toward uniformity: although the number of bright and dark spots is reduced, the statistical probability of edge points does not change significantly. The above three types of distortion can be distinguished clearly using gradient-weighted LBP histograms of the local normalized luminance map. Thus it can be concluded that the histogram distribution of gradient-weighted LBP of the local normalized luminance map can be used as an indicator to distinguish the distortion types of remote sensing images.

2.2.2. Determination of image distortion degree based on image gradient-weighted LBP features of local normalized luminance map

Our experiments verified that changes in the gradient-weighted LBP histogram distribution of the local normalized luminance map can be used to distinguish different degrees of remote sensing image distortion. As shown in Fig. 6, taking JP2K distortion as an example, a reference image and the corresponding five JP2K distorted images with different degrees of distortion are randomly taken from the ORSID database. The first-scale gradient-weighted LBP histograms of the local normalized luminance maps of these images are shown in Fig. 7. With increasing degree of JP2K distortion (higher DMOS), the blocking artifacts become more severe, flat areas in the image become edges, the statistical probability of pixels with an LBP value of 8 decreases, and the statistical probability of pixels with LBP values of 2 to 6 increases. At the same time, with increasing severity of JP2K distortion, the blur introduced by the blocking effect further reduces the statistical probability of pixels with LBP values of 1 and 8. Thus it can be concluded that the gradient-weighted LBP histogram distribution of the local normalized luminance map can reflect the distortion degree of JP2K images.

Fig. 6

Reference image and the corresponding five different degrees JP2K distorted images: (a) reference image, (b) DMOS=36.58, (c) DMOS=41.81, (d) DMOS=48.63, (e) DMOS=62.73, and (f) DMOS=69.53.


Fig. 7

Gradient-weighted LBP histograms of local-normalized luminance map for images in Fig. 6 in the first scale.


2.2.3. Extracting image gradient-weighted LBP features of local normalized luminance map

LBP operation is performed on the local normalized luminance map, which is obtained according to Eq. (1). The local rotation invariant uniform LBP value is defined as17

Eq. (9)

LBP_{J,R}^{riu2}(i,j) = \begin{cases} \sum_{t=0}^{J-1} s(g_t - g_c), & \text{if } u[LBP_{J,R}(i,j)] \le 2 \\ J+1, & \text{otherwise,} \end{cases}

where g_c is the gray value of the center pixel (i,j), g_t (t = 0, 1, \ldots, J-1) are the gray values of its J neighbors on a circle of radius R, s(x) = 1 if x \ge 0 and 0 otherwise, and u(\cdot) counts the number of 0/1 transitions in the circular binary pattern.

After the rotation-invariant uniform LBP operation of Eq. (9), there are J+2 different values in the LBP map, that is, 0, 1, \ldots, J+1. The rotation-invariant LBP feature can express detailed image structure information and thus better distinguish the difference between the central pixel and its surrounding pixels, so it is suitable for remote sensing images with complex structural information.

The eye is more sensitive to image features with higher contrast, and the gradient can characterize image contrast information, so the gradient is used to weight the LBP histogram of the local normalized luminance map. The gradient-weighting operation can distinguish the degree of difference between the center pixel and its surrounding pixels. The gradient-weighted LBP histogram is calculated by accumulating the gradient of pixels with the same LBP value:

Eq. (10)

h(k) = \sum_{i=1}^{M}\sum_{j=1}^{N} \nabla I(i,j)\, g\!\left[LBP_{J,R}^{riu2}(i,j),\, k\right],

where \nabla I(i,j) is the gradient magnitude at pixel (i,j), k \in \{0, 1, \ldots, J+1\} denotes the possible LBP values, and the indicator function g is

Eq. (11)

g(x_1, x_2) = \begin{cases} 1, & x_1 = x_2 \\ 0, & \text{otherwise.} \end{cases}

In this paper, the number of neighboring pixels J is 8 and the radius of the neighborhood R is 1, so there are 10 different values in the LBP map. Thus the gradient-weighted LBP features can be represented by the gradient-weighted statistical probabilities of these 10 values. The parameters are extracted in three scales, so the 30-dimensional (30-D) vector of each image can be denoted as F2 = (f_7, f_8, \ldots, f_{36}). The meanings of the elements in this 30-D vector are shown in Table 2.

Table 2

Meanings of gradient-weighted LBP feature vector elements of local normalized luminance map.

Vector elements    Meaning
f7–f16             The statistical probabilities of gradient-weighted LBP values 0, 1, ..., 9 in the first scale
f17–f26            The statistical probabilities of gradient-weighted LBP values 0, 1, ..., 9 in the second scale
f27–f36            The statistical probabilities of gradient-weighted LBP values 0, 1, ..., 9 in the third scale
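
The construction of one scale of F2, as described in Sec. 2.2.3, can be sketched as follows (an illustration under stated assumptions, not the authors' code): scikit-image's rotation-invariant uniform LBP provides the J+2 pattern values, and a Sobel gradient magnitude of the normalized luminance map is used as the weight, since the paper does not restate its gradient operator.

```python
# Sketch of Eqs. (9)-(11): gradient-weighted rotation-invariant uniform LBP
# histogram of the local normalized luminance map (J = 8, R = 1 -> 10 bins).
# The Sobel gradient magnitude and the final normalization to probabilities
# are assumptions made for illustration.
import numpy as np
from scipy import ndimage
from skimage.feature import local_binary_pattern

def gradient_weighted_lbp_hist(norm_lum, J=8, R=1):
    lbp = local_binary_pattern(norm_lum, P=J, R=R, method="uniform")  # values 0..J+1
    grad = np.hypot(ndimage.sobel(norm_lum, axis=1),
                    ndimage.sobel(norm_lum, axis=0))                  # gradient magnitude
    hist = np.array([grad[lbp == k].sum() for k in range(J + 2)])     # Eq. (10)
    return hist / max(hist.sum(), 1e-12)                              # probabilities for one scale
```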

3. Method of No-Reference Image Quality Assessment Based on SVM

The proposed method extracts F1 and F2 of the local normalized luminance map from known distorted images and constructs the corresponding feature matrix. The feature matrix and the distortion types of the known distorted images are used to train an SVM classifier that determines the image distortion type and the probability of each distortion type. On the basis of the SVM classifier, the feature matrix and the subjective scores are used to train SVR scorers that determine the image distortion degree. The local normalized luminance features F1 and the gradient-weighted LBP features F2 of a to-be-evaluated distorted image are then extracted in the same way, and the constructed feature matrix of the to-be-evaluated image is entered into the trained model to derive its distortion type and objective score.

3.1. SVM Image Distortion Classification Algorithm

SVM is widely applied to learn the mapping function between the feature space and the quality measure.4,6 For a training set {F_train, Z_train}, F_train is the image feature matrix of the training set, Z_train is the distortion type label vector, and Z_train^k is its k'th element, representing the distortion type of the k'th image in the training set:

Eq. (12)

Z_{train}^{k} = \begin{cases} 1, & \text{JP2K} \\ 2, & \text{WN} \\ 3, & \text{BLUR.} \end{cases}

Given parameters C>0 and ϵ>0, the standard form of SVM is represented as

Eq. (13)

\min_{\omega, b, \xi, \xi^{*}} \ \frac{1}{2}\omega^{T}\omega + C\left(\sum_{k=1}^{K}\xi_k + \sum_{k=1}^{K}\xi_k^{*}\right).

The corresponding constraint conditions are as follows:

Eq. (14)

-(\epsilon + \xi_k^{*}) \le \omega^{T}\phi(F_{train}^{k}) + b - Z_{train}^{k} \le \epsilon + \xi_k,

Eq. (15)

\xi_k,\ \xi_k^{*} \ge 0, \qquad k = 1, 2, \ldots, K,
where ω is the weight matrix to be trained and b is the bias constant. The radial basis function kernel K_F(F_{train}^{i}, F_{train}^{j}) = \exp\left(-\gamma \left\| F_{train}^{i} - F_{train}^{j} \right\|^{2}\right) is used as the kernel function K_F(F_{train}^{i}, F_{train}^{j}) = \phi(F_{train}^{i})^{T}\phi(F_{train}^{j}).

The training set {F_train, Z_train} is taken as the input of the SVM classifier. The constructed image feature matrix F_test of the test set is then entered into the trained SVM classifier to obtain the distortion type T_p of each test image and the probabilities T = (T_1, T_2, T_3) of the three types of distortion.

3.2. SVR Image Quality Score Algorithm

The SVR image quality score algorithm is basically the same as the SVM image distortion classification algorithm above, except for the form of the input and the output. Taking {F_train1, Z_train1}, {F_train2, Z_train2}, and {F_train3, Z_train3} as input, three SVR scorers for JP2K, WN, and BLUR distortion are trained, respectively. After the SVR scorers are trained, the constructed image feature matrix F_test of the test set is entered into them to obtain the objective quality scores S = (S_1, S_2, S_3) for each type of distortion, and the final objective quality score S_p is obtained by weighting these scores with the distortion type probabilities, i.e., S_p = \sum_{i=1}^{3} T_i S_i.
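
A minimal sketch of the two-step framework in this section is given below, using scikit-learn purely as an illustration (the authors' implementation is in MATLAB); the RBF kernels match the formulation above, while the hyperparameters are left at library defaults as an assumption.

```python
# Sketch of the two-step framework: an SVM classifier gives distortion-type
# probabilities T, per-distortion SVR scorers give S, and the final score is
# the probability-weighted sum S_p = sum_i T_i * S_i.
import numpy as np
from sklearn.svm import SVC, SVR

def train_two_step(F_train, z_train, dmos_train):
    clf = SVC(kernel="rbf", probability=True).fit(F_train, z_train)
    scorers = {c: SVR(kernel="rbf").fit(F_train[z_train == c], dmos_train[z_train == c])
               for c in np.unique(z_train)}
    return clf, scorers

def predict_quality(clf, scorers, F_test):
    T = clf.predict_proba(F_test)                                             # (T1, T2, T3)
    S = np.column_stack([scorers[c].predict(F_test) for c in clf.classes_])   # (S1, S2, S3)
    return (T * S).sum(axis=1)                                                # S_p
```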

4. Experimental Results and Analysis

To illustrate the subjective consistency of the proposed GWNSS method, experiments comparing GWNSS with other existing IQA methods are performed on the ORSID database,10 the LIVE database,18,19 and the LIVEMD database,20 respectively. The subjective consistency performance of GWNSS is verified by four indices: root-mean-squared error (RMSE), Pearson linear correlation coefficient (PLCC), Spearman rank order correlation coefficient (SROCC), and Kendall rank order correlation coefficient (KROCC). To verify that the performance of GWNSS is not restricted to a specific database, database independence experiments are performed on the LIVE and TID2013 databases,21 with SROCC used as the evaluation index. All experiments were performed on a Lenovo desktop computer with an Intel Core i3-2130 processor, 4 GB of memory, and a 3.4-GHz clock frequency. The operating system is Windows 7, and the experimental platform is MATLAB R2015a.
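
The four indices can be computed as in the following SciPy-based sketch (an illustration only); note that many IQA studies apply a nonlinear logistic mapping to the objective scores before computing PLCC and RMSE, a step omitted here for brevity.

```python
# Sketch of the four subjective-consistency indices: SROCC, KROCC, PLCC, RMSE.
import numpy as np
from scipy import stats

def consistency_indices(objective, dmos):
    objective = np.asarray(objective, dtype=np.float64)
    dmos = np.asarray(dmos, dtype=np.float64)
    srocc = stats.spearmanr(objective, dmos).correlation
    krocc = stats.kendalltau(objective, dmos).correlation
    plcc = stats.pearsonr(objective, dmos)[0]
    rmse = np.sqrt(np.mean((objective - dmos) ** 2))
    return srocc, krocc, plcc, rmse
```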

4.1. Comparison of GWNSS Performance in One-Step and Two-Step Frameworks

In this paper, a one-step framework similar to that proposed in Ref. 3 is also investigated. In this approach, the feature extraction is the same as in the two-step framework, but instead of using an SVM classifier and SVR scorers, the one-step framework directly constructs a single SVR scorer using the feature matrices and subjective score matrix of all distorted images in the training set. As shown in Table 3, the SROCC of one-step GWNSS is slightly lower than that of two-step GWNSS. The reason is that under the two-step framework, different parameters can be selected for each SVR scorer for a different distortion type, so each SVR scorer can more accurately predict quality for the corresponding distortion type. Under the one-step framework, however, the parameters selected by the single SVR scorer are a compromise over all types of distorted images in the training set rather than the optimum for any specific distortion type.

Table 3

The subjective consistency (SROCC) comparison of the GWNSS method under the one-step and two-step frameworks for all distorted images in the ORSID database.

Method              JP2K    WN      BLUR    ALL
GWNSS (one-step)    0.9336  0.9278  0.9444  0.9385
GWNSS (two-step)    0.9594  0.9338  0.9669  0.9429

4.2. Comparison of Subjective Consistency with Other Objective IQA Methods in the ORSID Database

The subjective consistency performance of the four FR-IQA methods [peak signal-to-noise ratio (PSNR), structural similarity index (SSIM),22 feature similarity index (FSIM),23 and visual information fidelity (VIF)24] and the six NR-IQA methods [BLIINDS-II,2 BRISQUE,4,5 SSEQ,3 blind image quality assessment metric based on high order derivatives (BHOD),25 blind image quality assessment (BIQA),26 and NRSL6] for images of the three distortion types in the ORSID database is shown in Table 4. The performance of GWNSS is compared with those of the abovementioned 10 IQA methods. The subjective consistency performance is assessed by four indices: SROCC, PLCC, KROCC, and RMSE. The experiments are repeated 1000 times to obtain the median of the subjective consistency performance. In Table 4, the top three correlation indices within each distortion category are marked in bold and the best correlation indices are highlighted with the standard red color.

Table 4

Comparison of the subjective consistency of different IQA methods in the ORSID database.

Method       JP2K    WN      BLUR    ALL

SROCC
PSNR         0.8192  0.9541  0.6807  0.8012
SSIM         0.9032  0.9244  0.8435  0.8765
FSIM         0.9485  0.9367  0.9037  0.8819
VIF          0.9587  0.9579  0.9587  0.9232
BLIINDS-II   0.9338  0.9008  0.9383  0.9225
BRISQUE      0.8617  0.9567  0.9173  0.9173
SSEQ         0.9083  0.9218  0.8992  0.8641
BHOD         0.8767  0.8045  0.9248  0.8331
BIQA         0.9353  0.9338  0.9504  0.9334
NRSL         0.9128  0.9353  0.9459  0.9280
GWNSS        0.9594  0.9347  0.9669  0.9425

PLCC
PSNR         0.8427  0.9594  0.7003  0.8018
SSIM         0.9060  0.9275  0.8649  0.8710
FSIM         0.9616  0.9373  0.9274  0.8850
VIF          0.9747  0.9706  0.9720  0.9253
BLIINDS-II   0.9533  0.9124  0.9497  0.9275
BRISQUE      0.8938  0.9747  0.9356  0.9217
SSEQ         0.9083  0.9218  0.8992  0.8641
BHOD         0.9150  0.7908  0.9376  0.8451
BIQA         0.9664  0.9366  0.9615  0.9372
NRSL         0.9345  0.9517  0.9513  0.9300
GWNSS        0.9799  0.9573  0.9750  0.9489

KROCC
PSNR         0.6108  0.8158  0.4880  0.5967
SSIM         0.7367  0.7519  0.6513  0.6788
FSIM         0.8108  0.7797  0.7108  0.6815
VIF          0.8316  0.8215  0.8184  0.7421
BLIINDS-II   0.8000  0.7368  0.8000  0.7548
BRISQUE      0.6947  0.8438  0.7579  0.7503
SSEQ         0.7579  0.7684  0.7263  0.6734
BHOD         0.7158  0.6316  0.7757  0.6407
BIQA         0.8000  0.7924  0.8316  0.7763
NRSL         0.7597  0.8307  0.8105  0.7627
GWNSS        0.8526  0.8000  0.8632  0.7944

RMSE
PSNR         8.4029  5.2879  9.2354  7.8421
SSIM         5.4992  5.0237  6.4935  6.4477
FSIM         3.5638  4.6823  4.8391  6.1108
VIF          2.9012  3.2343  3.0408  4.9755
BLIINDS-II   3.9108  5.3494  4.0273  4.8948
BRISQUE      5.7471  2.9943  4.5507  5.1060
SSEQ         5.3702  4.6813  5.0498  6.3781
BHOD         5.0897  8.5256  4.4150  7.3328
BIQA         3.2794  4.5785  3.5494  4.5805
NRSL         4.6141  4.0530  4.0827  4.9333
GWNSS        2.5585  4.2127  2.8728  4.1035

Table 4 shows that the proposed GWNSS and the state-of-the-art methods NRSL and BIQA have high subjective consistency. The performance of the 11 methods on the 3 types of distorted images is evaluated by 4 correlation coefficient indices, yielding 12 indices per method. The proposed GWNSS method ranks in the top 3 for all 12 indices and in first place for 8 of them, whereas BIQA and NRSL have 8 and 7 of their 12 indices in the top 3, respectively. Taking all distorted images in the ORSID database together, all four correlation coefficient indices of the proposed GWNSS method are the best among all IQA methods. The proposed GWNSS method achieves good assessment results for all types of distortion and thus exhibits high robustness against different distortions. Even compared with the FR-IQA methods, GWNSS still shows relatively high subjective consistency; its performance is superior to the PSNR, SSIM, FSIM, and VIF methods.

The scatter plots of the subjective and objective consistency scores of four well-performing methods, which are GWNSS, BRISQUE, NRSL, and BIQA, are shown in Fig. 8. The x axis denotes the objective score obtained by the image quality assessment method and the y axis denotes the subjective score obtained by human eyes. Figure 8 shows that the scatter points of GWNSS, BRISQUE, NRSL, and BIQA are concentrated close to the fitting curves, indicating high objective–subjective consistency.

Fig. 8

Scatter plots of the subjective and objective consistency scores of GWNSS, BRISQUE, NRSL, and BIQA methods in the ORSID database.


4.3. Comparison of Subjective Consistency with Other Objective IQA Methods in the LIVE Database and the LIVEMD Database

There are 29 different reference images and 779 distorted images in the LIVE database. The distortion types include JP2K, JPEG, WN, BLUR, and fast fading (FF), and the subjective DMOS of distorted images are given as well. There are 15 different reference images and 450 multiply distorted images in the LIVEMD database. The distortion types include BLUR followed by JPEG (BJ) and BLUR followed by noise (BN). The subjective DMOS of multiply distorted images are given as well.

The subjective consistency performance of the four FR-IQA methods (PSNR, SSIM,22 FSIM,23 and VIF24), the six NR-IQA methods (BLIINDS-II,2 BRISQUE,4,5 SSEQ,3 BHOD,25 BIQA,26 and NRSL6), and one deep learning-based method, DeepBIQ,27 in the LIVE database is shown in Table 5. The performance indices of these methods in the LIVEMD database are shown in Table 6. In each case, 80% of all distorted images are randomly selected as the training set and 20% as the test set, and the experiments are repeated 1000 times to obtain the median of the subjective consistency performance.

Table 5

Comparison of the subjective consistency of different NR-IQA methods in the LIVE database.

Method       JP2K    JPEG    WN      BLUR    FF      ALL

SROCC
PSNR         0.8545  0.8749  0.9397  0.7266  0.8599  0.8526
SSIM         0.9838  0.9836  0.9822  0.9719  0.9742  0.9729
FSIM         0.9844  0.9872  0.9835  0.9830  0.9720  0.9813
VIF          0.9790  0.9799  0.9894  0.9819  0.9774  0.9765
BLIINDS-II   0.9534  0.9614  0.9701  0.9330  0.9180  0.9428
BRISQUE      0.9372  0.9295  0.9879  0.9518  0.9185  0.9464
SSEQ         0.9417  0.9689  0.9747  0.9476  0.8946  0.9284
BHOD         0.9462  0.9461  0.9728  0.9649  0.9198  0.9316
BIQA         0.9297  0.9566  0.9888  0.9556  0.9420  0.9578
NRSL         0.9514  0.9508  0.9801  0.9420  0.9029  0.9493
DeepBIQ (a)  —       —       —       —       —       0.97
GWNSS        0.9578  0.9522  0.9874  0.9802  0.9227  0.9609

PLCC
PSNR         0.8603  0.8819  0.9262  0.7536  0.8571  0.8470
SSIM         0.9810  0.9855  0.9896  0.9630  0.9681  0.9648
FSIM         0.9875  0.9894  0.9852  0.9770  0.9638  0.9743
VIF          0.9864  0.9929  0.9934  0.9845  0.9719  0.9758
BLIINDS-II   0.9688  0.9815  0.9806  0.9316  0.9376  0.9388
BRISQUE      0.9543  0.9674  0.9931  0.9600  0.9408  0.9562
SSEQ         0.9574  0.9846  0.9804  0.9577  0.9225  0.9295
BHOD         0.9662  0.9726  0.9801  0.9660  0.9445  0.9432
BIQA         0.9469  0.9850  0.9936  0.9655  0.9592  0.9644
NRSL         0.9671  0.9764  0.9868  0.9447  0.9238  0.9574
DeepBIQ (a)  —       —       —       —       —       0.98
GWNSS        0.9745  0.9787  0.9925  0.9841  0.9470  0.9673

KROCC
PSNR         0.6607  0.6840  0.8019  0.5372  0.6701  0.6578
SSIM         0.8958  0.9053  0.894   0.8581  0.8722  0.8635
FSIM         0.9129  0.9218  0.8976  0.8959  0.8734  0.8898
VIF          0.8814  0.8957  0.9152  0.8880  0.8740  0.8666
BLIINDS-II   0.8318  0.8551  0.8659  0.7920  0.7718  0.7967
BRISQUE      0.7975  0.7966  0.9270  0.8413  0.7721  0.8124
SSEQ         0.8075  0.8657  0.8852  0.8209  0.7535  0.7848
BHOD         0.8180  0.8206  0.8756  0.8547  0.7760  0.7926
BIQA         0.7821  0.8505  0.9273  0.8370  0.8113  0.8330
NRSL         0.8243  0.8294  0.8916  0.8081  0.7535  0.8184
GWNSS        0.8402  0.8358  0.9205  0.8948  0.7792  0.8404

RMSE
PSNR         12.8608  15.0175  10.5510  12.1417  14.6769  14.5263
SSIM         5.6132   5.9841   4.7635   6.4592   7.8525   8.2263
FSIM         4.5557   5.1087   5.6853   5.1128   8.3547   7.0428
VIF          4.7595   4.4494   3.7947   4.2027   6.3682   6.8427
BLIINDS-II   7.1143   6.7336   6.5052   8.6120   10.8826  10.7628
BRISQUE      8.5236   8.8775   3.9044   6.6777   10.4482  9.1711
SSEQ         8.3212   6.1472   6.5911   6.8417   11.7803  11.5335
BHOD         7.4821   8.1735   6.5544   6.1669   10.1813  10.3761
BIQA         9.2625   6.6053   3.6843   6.2217   8.7842   8.2580
NRSL         7.3111   7.5684   5.3846   7.8504   11.9348  9.7637
GWNSS        6.4379   7.2052   4.0390   4.4284   10.0280  7.9198

(a) The values in the DeepBIQ rows are the experimental results reported in the original paper, which gives only the overall SROCC and PLCC for all distorted images in the LIVE database and does not report scores for specific distortion types.

Table 6

Comparison of the subjective consistency of different NR-IQA methods in the LIVEMD database.

Method       BJ      BN      ALL

SROCC
PSNR         0.6395  0.6150  0.5784
SSIM         0.8488  0.8760  0.8604
FSIM         0.8556  0.8691  0.8666
VIF          0.8788  0.8807  0.8823
BLIINDS-II   0.9070  0.8706  0.8866
BRISQUE      0.9071  0.9034  0.8952
SSEQ         0.8743  0.8582  0.8560
BHOD         0.8931  0.9310  0.9065
BIQA         0.8862  0.8020  0.8133
NRSL         0.8813  0.9079  0.8901
GWNSS        0.9193  0.9232  0.9222

PLCC
PSNR         0.7026  0.7164  0.6729
SSIM         0.7971  0.8333  0.8152
FSIM         0.8190  0.8233  0.8211
VIF          0.9052  0.8492  0.9013
BLIINDS-II   0.9332  0.8843  0.9045
BRISQUE      0.9346  0.9199  0.9209
SSEQ         0.9119  0.8704  0.8737
BHOD         0.9251  0.9362  0.9179
BIQA         0.9199  0.8234  0.8520
NRSL         0.9253  0.9190  0.9098
GWNSS        0.9404  0.9356  0.9338

KROCC
PSNR         0.4550  0.4445  0.4116
SSIM         0.6520  0.6867  0.6695
FSIM         0.6625  0.6750  0.6768
VIF          0.6922  0.6930  0.6970
BLIINDS-II   0.7434  0.6889  0.7095
BRISQUE      0.7374  0.7418  0.7238
SSEQ         0.7007  0.6673  0.6626
BHOD         0.7287  0.7818  0.7364
BIQA         0.7152  0.6222  0.6292
NRSL         0.7091  0.7442  0.7138
GWNSS        0.7636  0.7674  0.7643

RMSE
PSNR         13.6341  13.0151  13.9892
SSIM         11.5705  10.3120  12.9355
FSIM         10.9930  10.5890  10.7942
VIF          8.1427   9.8500   8.1945
BLIINDS-II   6.7771   8.5013   7.8832
BRISQUE      6.7655   7.0593   7.4149
SSEQ         7.6746   9.0659   9.2097
BHOD         7.0880   6.4840   7.4597
BIQA         7.4521   10.2781  9.7848
NRSL         7.2367   7.3409   7.8356
GWNSS        6.3160   6.5225   6.7329

Tables 5 and 6 show that the proposed GWNSS method has high subjective consistency. The performance of the 11 methods on the 5 types of distorted images in the LIVE database is evaluated by 4 correlation coefficient indices, yielding 20 indices per method; the proposed GWNSS has 16 of these 20 indices in the top 3 of the respective distortion categories. Taking all distorted images in the LIVEMD database together, all four correlation coefficient indices of the proposed GWNSS method are the best among all IQA methods. Even compared with the FR-IQA methods, the proposed GWNSS method still shows relatively high subjective consistency: its performance is superior to PSNR and close to SSIM, FSIM, and VIF in the LIVE database, and superior to PSNR, SSIM, FSIM, and VIF in the LIVEMD database.

Taking all distorted images in the LIVE database together, the KROCC and RMSE of the proposed GWNSS method are the best among all IQA methods. Compared with the deep learning-based method DeepBIQ, the SROCC and PLCC of GWNSS are merely about 0.01 lower. The reason is that the features extracted by the CNN-based method are rich, leading to good performance. However, GWNSS is more efficient than DeepBIQ in feature extraction and training; in addition, GWNSS has lower hardware requirements and can therefore be used in a wider range of applications.

The scatter plots of the subjective and objective consistency scores of GWNSS, BRISQUE, NRSL, and BIQA methods are shown in Fig. 9. The x axis denotes the objective score obtained by the image quality assessment method and the y axis denotes the subjective score obtained by human eyes. Figure 9 shows that the scatter points of the above four NR-IQA methods are concentrated close to the fitting curves, indicating high objective–subjective consistency.

Fig. 9

Scatter plots of the subjective and objective consistency scores of GWNSS, BRISQUE, NRSL, and BIQA methods in the LIVE database and the LIVEMD database.


4.4. Database Independence Experiments

To verify that the performance of GWNSS is not restricted to the particular database used, database independence experiments are performed on the LIVE database and the TID2013 database.21 From the TID2013 database, 24 different reference images and 480 distorted images with the same 4 common distortion categories (JP2K, JPEG, WN, and BLUR) are selected for the independence experiments. Distorted images in the LIVE database are used to train an SVM model, and the distorted images selected from the TID2013 database are then tested with the trained model, with SROCC used as the testing index. The subjective consistency performance of the four FR-IQA methods (PSNR, SSIM,22 FSIM,23 and VIF24) and the six NR-IQA methods (BLIINDS-II,2 BRISQUE,4,5 SSEQ,3 BHOD,25 BIQA,26 and NRSL6) for images of the four distortion types in the TID2013 database is shown in Table 7. Conversely, distorted images in the TID2013 database are used to train an SVM model, and distorted images in the LIVE database are then tested with the trained model; the corresponding subjective consistency performance of the same FR-IQA and NR-IQA methods in the LIVE database is shown in Table 8. In Tables 7 and 8, the top 3 SROCC indices within each distortion category are marked in bold and the best SROCC indices are highlighted with italics.

Table 7

Comparison of the subjective consistency of different NR-IQA methods in the LIVE database (training set) and the TID2013 database (test set).

Method       JP2K    JPEG    WN      BLUR    ALL
PSNR         0.8904  0.9150  0.9420  0.9661  0.9216
SSIM         0.9489  0.9316  0.8742  0.9704  0.9212
FSIM         0.9579  0.9329  0.9003  0.9590  0.9547
VIF          0.9538  0.9289  0.9302  0.9659  0.9336
BLIINDS-II   0.9458  0.9001  0.7789  0.9077  0.8742
BRISQUE      0.8785  0.9016  0.9008  0.8966  0.8907
SSEQ         0.9108  0.9247  0.8952  0.8935  0.8692
BHOD         0.9155  0.8815  0.7489  0.9148  0.8943
BIQA         0.9446  0.9013  0.9157  0.9029  0.9164
NRSL         0.7779  0.9092  0.8422  0.9094  0.8797
GWNSS        0.9282  0.9028  0.9042  0.9153  0.9284

Table 8

Comparison of the subjective consistency of different NR-IQA methods in the LIVE database (test set) and the TID2013 database (training set).

Method       JP2K    JPEG    WN      BLUR    ALL
PSNR         0.9041  0.8946  0.9829  0.8073  0.8834
SSIM         0.9838  0.9836  0.9822  0.9719  0.9729
FSIM         0.9844  0.9872  0.9835  0.9830  0.9813
VIF          0.9790  0.9799  0.9894  0.9819  0.9765
BLIINDS-II   0.9404  0.9277  0.9641  0.8959  0.9348
BRISQUE      0.9178  0.9354  0.9306  0.9182  0.9297
SSEQ         0.9252  0.9343  0.8632  0.8053  0.8087
BHOD         0.9273  0.9236  0.9444  0.9036  0.9050
BIQA         0.9291  0.9185  0.9872  0.8093  0.9260
NRSL         0.9300  0.9355  0.9701  0.8408  0.9130
GWNSS        0.9401  0.9426  0.9746  0.9259  0.9282

Tables 7 and 8 show that all 10 SROCC indices of the proposed GWNSS method are in the top 3 for the four types of distorted images, indicating that the proposed GWNSS method achieves high database independence for all four types of distortion. Even compared with the FR-IQA methods, GWNSS still shows relatively high database independence: it is superior to the PSNR method and close to the SSIM, FSIM, and VIF methods.

4.5. Accuracy of the Distortion Type Judgment of the GWNSS Method

Table 9 shows the accuracy of the GWNSS method in determining the type of image distortion. In each run, 80% of all distorted images are randomly selected as the training set and 20% as the test set, and the training and test sets are entered into the SVM model for training and testing. The experiment is repeated 1000 times to obtain the median classification accuracy for the ORSID database. The experimental results show that the GWNSS method is up to 95% accurate in determining the type of image distortion over the whole ORSID database, demonstrating that the GWNSS method performs well in classifying the type of image distortion.

Table 9

Accuracy of the distortion type judgment of the GWNSS method in the ORSID database.

Distortion type   JP2K  WN   BLUR  ALL
Accuracy (%)      95    95   95    95

The classification performance for the different distortion types, in the form of an average confusion matrix, is shown in Fig. 10. The numerical values are the means of the confusion probabilities obtained over 1000 experiments. Figure 10 shows that WN is the most accurately predicted distortion type. BLUR and JP2K, in contrast, are confused with each other, with 0.0479 of the BLUR images mistaken as JP2K and 0.0317 of the JP2K images mistaken as BLUR. This is because JP2K compression can introduce blur into the image, resulting in confusion with BLUR.

Fig. 10

Accuracy of the distortion type judgment of the GWNSS method in the ORSID database.

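An average confusion matrix of this kind can be produced as sketched below (an illustration, not the authors' code); the number of repetitions and the 80/20 split follow the description above, while the stratified splitting and the default SVM settings are assumptions.

```python
# Sketch: mean row-normalized confusion matrix of distortion-type classification
# over repeated random 80/20 train/test splits (labels 1 = JP2K, 2 = WN, 3 = BLUR).
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

def mean_confusion(F, z, runs=1000):
    acc = np.zeros((3, 3))
    for seed in range(runs):
        F_tr, F_te, z_tr, z_te = train_test_split(
            F, z, test_size=0.2, random_state=seed, stratify=z)
        pred = SVC(kernel="rbf").fit(F_tr, z_tr).predict(F_te)
        cm = confusion_matrix(z_te, pred, labels=[1, 2, 3]).astype(float)
        acc += cm / cm.sum(axis=1, keepdims=True)   # per-class (row) probabilities
    return acc / runs
```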

4.6. Time Consumption of the GWNSS

Since the runtime of NR-IQA methods is mainly spent on extracting image features, Table 10 compares the mean time spent on feature extraction for images in the ORSID database by five well-performing NR-IQA methods (BLIINDS-II,2 BRISQUE,4,5 SSEQ,3 BIQA,26 and NRSL6) and GWNSS. Table 10 shows that the mean time spent by the proposed GWNSS method is far less than that of SSEQ and BLIINDS-II. On average, the proposed GWNSS method spends only 0.1790 s more than the BRISQUE method and 0.2114 s more than the BIQA method. Thus the proposed GWNSS method combines high evaluation accuracy with high operation efficiency.

Table 10

Mean time spent extracting all images features by different NR-IQA methods in the ORSID database.

Method          SSEQ    BLIINDS-II  BRISQUE  BIQA    NRSL    GWNSS
Mean time (s)   2.9752  82.3711     0.1498   0.1174  0.3297  0.3288

5. Conclusion

In this paper, a 36-D image feature vector is constructed from the local normalized luminance features and the gradient-weighted LBP features of the local normalized luminance map in three scales. First, the feature matrix and the corresponding distortion types are used to train the SVM classifier. Then, on the basis of the SVM classifier, the feature matrix and the corresponding DMOS are used to train the SVR scorers. A series of comparative experiments were carried out on the ORSID database, the LIVE database, the LIVEMD database, and the TID2013 database, respectively. Experimental results show that the proposed method has high accuracy in distortion type classification of remote sensing images, high consistency with subjective scores, and high robustness against different types of distortion. In addition, the efficacy of the proposed method is not restricted to a particular database, and its operation efficiency is high. The research in this paper mainly focuses on singly distorted images; assessment of multiply distorted images, which is of more practical significance, will be addressed in future research.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (Nos. 61471194 and 61705104), Science and Technology on Avionics Integration Laboratory and Aeronautical Science Foundation of China (No. 20155552050), and the Natural Science Foundation of Jiangsu Province (No. BK20170804).

References

1. A. K. Moorthy and A. C. Bovik, "A two-step framework for constructing blind image quality indices," IEEE Signal Process. Lett. 17(5), 513–516 (2010). https://doi.org/10.1109/LSP.2010.2043888

2. M. A. Saad, A. C. Bovik, and C. Charrier, "Blind image quality assessment: a natural scene statistics approach in the DCT domain," IEEE Trans. Image Process. 21(8), 3339–3352 (2012). https://doi.org/10.1109/TIP.2012.2191563

3. L. Liu et al., "No-reference image quality assessment based on spatial and spectral entropies," Signal Process.: Image Commun. 29(8), 856–863 (2014). https://doi.org/10.1016/j.image.2014.06.006

4. A. Mittal, A. K. Moorthy, and A. C. Bovik, "No-reference image quality assessment in the spatial domain," IEEE Trans. Image Process. 21(12), 4695–4708 (2012). https://doi.org/10.1109/TIP.2012.2214050

5. A. Mittal, A. K. Moorthy, and A. C. Bovik, "Blind/referenceless image spatial quality evaluator," in Conf. Record of the Forty Fifth Asilomar Conf. on Signals, Systems and Computers, 723–727 (2011). https://doi.org/10.1109/ACSSC.2011.6190099

6. Q. Li et al., "Blind image quality assessment using statistical structural and luminance features," IEEE Trans. Multimedia 18(12), 2457–2469 (2016). https://doi.org/10.1109/TMM.2016.2601028

7. L. Liu et al., "Blind image quality assessment by relative gradient statistics and adaboosting neural network," Signal Process.: Image Commun. 40, 1–15 (2016). https://doi.org/10.1016/j.image.2015.10.005

8. D. L. Ruderman, "The statistics of natural images," Network Comput. Neural Syst. 5(4), 517–548 (1994). https://doi.org/10.1088/0954-898X_5_4_006

9. B. Li, R. Yang, and H. Jiang, "Remote-sensing image compression using two-dimensional oriented wavelet transform," IEEE Trans. Geosci. Remote Sens. 49(1), 236–250 (2011). https://doi.org/10.1109/TGRS.2010.2056691

10. J. Yan et al., "Remote sensing image quality assessment based on the ratio of spatial feature weighted mutual information," J. Imaging Sci. Technol. 62(2), 020505 (2018). https://doi.org/10.2352/J.ImagingSci.Technol.2018.62.2.020505

11. J. R. Hosking, "L-moments: analysis and estimation of distributions using linear combinations of order statistics," J. R. Stat. Soc. 52(1), 105–124 (1990). https://doi.org/10.1111/j.2517-6161.1990.tb01775.x

12. J. R. Hosking, "Moments or L moments? An example comparing two measures of distributional shape," Am. Stat. 46(3), 186–189 (1992). https://doi.org/10.1080/00031305.1992.10475880

13. J. R. Hosking, "On the characterization of distributions by their L-moments," J. Stat. Plann. Inference 136(1), 193–198 (2006). https://doi.org/10.1016/j.jspi.2004.06.004

14. A. Mittal, A. K. Moorthy, and A. C. Bovik, "Making image quality assessment robust," in Conf. Record of the Forty Sixth Asilomar Conf. on Signals, Systems and Computers, 1718–1722 (2012). https://doi.org/10.1109/ACSSC.2012.6489326

15. T. Ojala, M. Pietikäinen, and T. Mäenpää, "Multiresolution gray-scale and rotation invariant texture classification with local binary patterns," IEEE Trans. Pattern Anal. Mach. Intell. 24(7), 971–987 (2002). https://doi.org/10.1109/TPAMI.2002.1017623

16. T. Ojala et al., "Texture discrimination with multidimensional distributions of signed gray level differences," Pattern Recognit. 34(3), 727–739 (2001). https://doi.org/10.1016/S0031-3203(00)00010-8

17. Q. Li, W. Lin, and Y. Fang, "No-reference quality assessment for multiply-distorted images in gradient domain," IEEE Signal Process. Lett. 23(4), 541–545 (2016). https://doi.org/10.1109/LSP.2016.2537321

18. H. R. Sheikh et al., "A statistical evaluation of recent full reference quality assessment algorithms," IEEE Trans. Image Process. 15(11), 3440–3451 (2006).

19. H. R. Sheikh, M. F. Sabir, and A. C. Bovik, "A statistical evaluation of recent full reference image quality assessment algorithms," IEEE Trans. Image Process. 15(11), 3440–3451 (2006). https://doi.org/10.1109/TIP.2006.881959

20. D. Jayaraman et al., "Objective quality assessment of multiply distorted images," in Conf. Record of the Forty Sixth Asilomar Conf. on Signals, Systems and Computers, 1693–1697 (2012). https://doi.org/10.1109/ACSSC.2012.6489321

21. N. Ponomarenko et al., "Color image database TID2013: peculiarities and preliminary results," 106–111 (2013).

22. Z. Wang et al., "Image quality assessment: from error visibility to structural similarity," IEEE Trans. Image Process. 13(4), 600–612 (2004). https://doi.org/10.1109/TIP.2003.819861

23. L. Zhang et al., "FSIM: a feature similarity index for image quality assessment," IEEE Trans. Image Process. 20(8), 2378–2386 (2011). https://doi.org/10.1109/TIP.2011.2109730

24. H. R. Sheikh and A. C. Bovik, "Image information and visual quality," in Proc. IEEE Int. Conf. Acoust., Speech, Signal Process. (2004).

25. Q. Li, W. Lin, and F. Fang, "No-reference image quality assessment based on high order derivatives," in IEEE Int. Conf. Multimedia and Expo, 1–6 (2016). https://doi.org/10.1109/ICME.2016.7552997

26. W. Xue et al., "Blind image quality assessment using joint statistics of gradient magnitude and Laplacian features," IEEE Trans. Image Process. 23(11), 4850–4862 (2014). https://doi.org/10.1109/TIP.2014.2355716

27. S. Bianco et al., "On the use of deep learning for blind image quality assessment," Signal Image Video Process. 12, 355–362 (2018). https://doi.org/10.1007/s11760-017-1166-8

Biography

Junhua Yan received her BSc, MSc, and PhD degrees from Nanjing University of Aeronautics and Astronautics in 1993, 2001, and 2004, respectively. She is a professor at Nanjing University of Aeronautics and Astronautics. She has been an academic visitor at the University of Sussex (October 31, 2016 to October 30, 2017). She is the author of more than 40 journal papers and has 5 patents. Her current research interests include image quality assessment, multisource information fusion, target detection, tracking, and recognition.

Xuehan Bai is a graduate student at Nanjing University of Aeronautics and Astronautics. Her research interest is image quality assessment. She received her BSc degree from Harbin Institute of Technology in 2016.

Yongqi Xiao received his BSc and MSc degrees from Nanjing University of Aeronautics and Astronautics in 2015 and 2018, respectively. His research interest is image quality assessment.

Yin Zhang is a lecturer at Nanjing University of Aeronautics and Astronautics. He received his BSc degree in optical information sciences and technology from Jilin University in 2009, his MSc and PhD degrees from Harbin Institute of Technology in 2011 and 2016, respectively. He is the author of more than 10 journal papers and has 7 patents. His current research interests include remote-sensing information processing, image quality assessment, and radiation transfer calculation.

Xiangyang Lv is a graduate student at Nanjing University of Aeronautics and Astronautics. His research interest is image quality assessment. He received his BSc degree from Nanjing University of Aeronautics and Astronautics in 2018.

CC BY: © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Junhua Yan, Xuehan Bai, Yongqi Xiao, Yin Zhang, and Xiangyang Lv "No-reference remote sensing image quality assessment based on gradient-weighted natural scene statistics in spatial domain," Journal of Electronic Imaging 28(1), 013033 (12 February 2019). https://doi.org/10.1117/1.JEI.28.1.013033
Received: 17 May 2018; Accepted: 24 January 2019; Published: 12 February 2019
KEYWORDS: Distortion, Databases, Image quality, Remote sensing, Feature extraction, Data modeling, Binary data
