With the popularization of digital cameras, the demand for objective image quality assessment algorithms has risen. As a way to choose the best image for final applications, objective image quality assessment algorithms play an important role in image engineering systems. Since ideal reference images usually cannot be found in practice, the assessment problem becomes no-reference (NR) image quality assessment, which assumes that the true scene of a distorted image is unknown.
Blur is the most common type of quality degradation in imaging systems; it is mainly caused by focus variation or by camera motion. Blur is usually modeled as a suppression of the high-frequency Fourier coefficients in spectrum space. Several no-reference blur metrics have been proposed. In Ref. 1, the authors exploited the principle that the high-frequency coefficients of blurred images tend to zero, and proposed a quality evaluation algorithm that accumulates the coefficient distribution of images after the discrete cosine transform (DCT). Since the central diagonal of the DCT coefficient matrix efficiently characterizes global blur, the quality measure was obtained by counting coefficients through a weighting matrix that gives more importance to the diagonal. We mark this method as the DCT metric (DCTM). In Ref. 2, a perceptual no-reference blur metric based on edge length was introduced. This work first proposed the concept of edge width, computed as the distance from the start to the end position of a Sobel edge. The global blur measure was obtained by averaging all edge widths. We denote this method as the edge width metric (EWM). In Ref. 3, the authors proposed an algorithm that utilizes human visual system (HVS) features to improve metric performance. In this method, the image is first divided into blocks, which are classified based on their edge count. Then, the average edge length for each block is computed and weighted based on the contrast of the block. The final blur measure is the weighted average edge length. We mark this metric as the HVS edge width metric (HVSEWM). In Ref. 4, the authors proposed an algorithm based on local phase coherence. The metric utilizes local phase coherence characteristics and constructs an iterative algorithm that separates the bands into coherent and incoherent wavelet coefficients.
By calculating the mean of the standard deviations of the incoherent coefficients in each band, the metric is obtained. We denote this local phase coherence metric as LPCM.
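To illustrate the edge-width idea behind EWM, the following sketch measures, for each strong Sobel edge pixel, the distance between the local luminance extrema on either side of the edge along its row, and averages these widths. This is a simplified interpretation for illustration, not the exact implementation of Ref. 2; the 50%-of-maximum gradient threshold and the traversal rule are our own assumptions.

```python
import numpy as np
from scipy import ndimage

def edge_width_metric(img):
    """EWM-style blur measure (simplified illustration).

    For each strong vertical Sobel edge pixel, the edge width is taken as
    the distance between the local luminance extrema on either side of the
    edge along the same row; the metric is the mean edge width.
    """
    img = np.asarray(img, dtype=np.float64)
    gx = ndimage.sobel(img, axis=1)     # horizontal gradient -> vertical edges
    thresh = 0.5 * np.abs(gx).max()     # assumed threshold for "strong" edges
    widths = []
    for r, c in zip(*np.where(np.abs(gx) > thresh)):
        row, s = img[r], np.sign(gx[r, c])
        left = c
        while left > 0 and (row[left - 1] - row[left]) * s < 0:
            left -= 1                   # walk to the local extremum on the left
        right = c
        while right < row.size - 1 and (row[right + 1] - row[right]) * s > 0:
            right += 1                  # walk to the local extremum on the right
        widths.append(right - left)
    return float(np.mean(widths)) if widths else 0.0
```

On a sharp step edge the measured widths are small; Gaussian blurring the same image spreads the edges and increases the mean width, so the score grows with blur.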
In this work, based on blur theory and the block-based DCT statistics of Refs. 5 and 6, we propose a novel no-reference objective metric for blurred image assessment and evaluate its performance against four quality evaluation metrics on three public databases.
Blur Metric Based on Block-Based Discrete Cosine Transform Statistics
According to Ref. 6, the DCT coefficient distribution of natural images within blocks is well modeled by a Laplace distribution. Using $8 \times 8$ blocks, for each frequency pair $(i,j)$, $i,j = 0, \ldots, 7$, the coefficient distribution is modeled by

$$ P(c) = \frac{\lambda(i,j)}{2} \exp\left[-\lambda(i,j)\,|c|\right], \qquad (1) $$

where $\lambda(i,j)$ is the feature parameter of the distribution for frequency pair $(i,j)$, and $c$ is the coefficient value. The estimate for $\lambda(i,j)$ is generally computed by applying the maximum likelihood (ML) method to the original coefficient data. For a given frequency, the ML estimate of $\lambda(i,j)$ is

$$ \hat{\lambda}(i,j) = \frac{N}{\sum_{k=1}^{N} |c_k(i,j)|} = \frac{1}{E\!\left[\,|c(i,j)|\,\right]}, \qquad (2) $$

where $N$ represents the number of DCT blocks, $c_k(i,j)$ stands for the DCT coefficient at that frequency in the $k$'th block, and $E[\cdot]$ represents the expected value.
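The ML estimation above can be sketched as follows, assuming $8 \times 8$ block DCTs (the block size is our assumption) and the Laplacian ML estimate $\hat{\lambda}(i,j) = 1/E[\,|c(i,j)|\,]$:

```python
import numpy as np
from scipy.fft import dctn

def estimate_lambda(img, block=8):
    """Estimate the Laplacian parameter lambda(i, j) for every DCT
    frequency pair from block DCT coefficients:
        lambda_hat(i, j) = N / sum_k |c_k(i, j)| = 1 / E[|c(i, j)|].
    Returns a block x block array of lambda estimates."""
    h, w = (np.array(img.shape) // block) * block
    img = img[:h, :w].astype(np.float64)
    # split the image into non-overlapping block x block tiles
    tiles = img.reshape(h // block, block, w // block, block).transpose(0, 2, 1, 3)
    coeffs = np.stack([dctn(b, norm='ortho') for b in tiles.reshape(-1, block, block)])
    mean_abs = np.abs(coeffs).mean(axis=0)      # E[|c(i, j)|] over all blocks
    return 1.0 / np.maximum(mean_abs, 1e-12)    # lambda_hat(i, j)
```

Blurring an image shrinks its high-frequency DCT coefficients, so the corresponding $\hat{\lambda}(i,j)$ values grow; this is the behavior the metric exploits.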
According to image degradation theory, blurred images can be created by directly multiplying clear images by certain blur point spread functions (PSFs) in spectrum space. The classic blur PSFs were thoroughly analyzed in Ref. 5, including the motion, out-of-focus, and Gaussian PSFs. The curve shapes of these PSFs are similar in spectrum space: they attain their maximum value at the center frequency $(0,0)$, decrease dramatically near the center frequency, and maintain lower expected values with small fluctuations as the frequency increases. The blur extent is mainly determined by how steeply the blur PSF decreases near the center frequency. When a blur PSF acts on an image, the Fourier coefficients of the blurred image near the center frequency drop sharply, following the behavior of the blur PSF discussed above. Since the spectrum is symmetric, the expected coefficient value $E[\,|c(i,j)|\,]$ varies like a step function, jumping from large to small as the frequency radius $r = \sqrt{i^2 + j^2}$ increases. Then $\lambda(i,j)$, the inverse of $E[\,|c(i,j)|\,]$, also varies like a step function, jumping from small to large as $r$ increases. The jump position and gradient of this step function determine the blur extent. This phenomenon can also be verified by viewing the $\lambda(i,j)$ distribution map of one image at different blur radii.
To model this step-function behavior well, we use a logistic function in 2-D polar coordinates to simulate the $\lambda$ distribution in the frequency domain:

$$ \lambda(r) = \frac{a}{1 + \exp\left[-b(r - c)\right]}. \qquad (3) $$
In Eq. (3), $r = \sqrt{i^2 + j^2}$, and $a$, $b$, and $c$ are parameters that need to be estimated. Image quality can then be determined by $a$, $b$, and $c$:

$$ Q = F(a, b, c), \qquad (4) $$

where $Q$ stands for image quality and $F$ is a function determined only by $a$, $b$, and $c$. Since the nonlinear estimation of $a$, $b$, and $c$ causes an overwhelming computational burden and usually generates large errors, we introduce a fast algorithm here. Consider that Eq. (3) can be reformed, using the first-order Taylor expansion $e^{x} \approx 1 + x$, as

$$ \lambda(r) \approx \frac{a}{2 - b(r - c)}. \qquad (5) $$

Thus, we believe $a$, $b$, and $c$ are linearly or polynomial-linearly correlated with $\lambda(i,j)$. As a result, the function $F$ can be approximated by a weighted sum of the $\lambda(i,j)$:

$$ Q \approx \sum_{i=0}^{7} \sum_{j=0}^{7} g(i,j)\,\lambda(i,j), \qquad (6) $$

where the $g(i,j)$ are scale coefficients. In fact, $g(i,j)$ can be determined by the least mean square (LMS) method on certain images with known quality. With all of the $g(i,j)$ known, our blurred image quality assessment algorithm is fully specified. For a given blurred image, its quality can be calculated by the following algorithm.
1. Cut the image into $8 \times 8$ blocks and apply the DCT to each block.
2. Collect the coefficients $c_k(i,j)$ for each frequency pair $(i,j)$ and estimate $\hat{\lambda}(i,j)$ by the ML criterion in Eq. (2).
3. Let $Q = \sum_{i=0}^{7} \sum_{j=0}^{7} g(i,j)\,\hat{\lambda}(i,j)$ over all pairs $(i,j)$, $i,j = 0, \ldots, 7$.
The final $Q$ is the image's quality score. We call this method the DCT statistics prediction method (DCTSP).
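Combining the three steps, a minimal DCTSP sketch might look as follows. This assumes $8 \times 8$ blocks and takes the weight matrix `g` as an input, since the weights must be fitted offline on images with known quality:

```python
import numpy as np
from scipy.fft import dctn

def dctsp_quality(img, g, block=8):
    """DCTSP sketch: block DCT, ML estimate of lambda(i, j), then
    quality Q = sum_{i,j} g(i, j) * lambda(i, j).
    `g` is a block x block weight matrix fitted offline by LMS."""
    h, w = (np.array(img.shape) // block) * block
    img = img[:h, :w].astype(np.float64)
    # step 1: cut into blocks and apply the DCT to each
    tiles = img.reshape(h // block, block, w // block, block).transpose(0, 2, 1, 3)
    coeffs = np.stack([dctn(b, norm='ortho') for b in tiles.reshape(-1, block, block)])
    # step 2: ML estimate lambda_hat(i, j) = 1 / E[|c(i, j)|]
    lam = 1.0 / np.maximum(np.abs(coeffs).mean(axis=0), 1e-12)
    # step 3: weighted sum of the lambda estimates
    return float(np.sum(g * lam))
```

With non-negative weights, a blurred version of an image yields a larger score than the sharp original, since blur inflates the high-frequency $\hat{\lambda}(i,j)$.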
Experiment and Results
To ascertain the coefficients $g(i,j)$ in the proposed DCTSP, we calculate them on the LIVE database from the University of Texas (Ref. 7). The values of $g(i,j)$ calculated by the least mean square (LMS) criterion are shown in Table 1.
Table 1. The values of $g(i,j)$ calculated by the LMS method on the LIVE database.
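A least-squares fit that produces such a weight table can be sketched as below, given per-image $\lambda$ estimates and subjective quality scores. The array shapes and the use of `numpy.linalg.lstsq` are our own assumptions about how the LMS step could be realized:

```python
import numpy as np

def fit_g(lambdas, scores):
    """Fit the weights g(i, j) by least squares so that
    Q = sum_{i,j} g(i, j) * lambda(i, j) approximates the scores.

    lambdas: (n_images, 8, 8) array of per-image lambda estimates
    scores:  (n_images,) subjective quality scores (e.g., DMOS)
    Returns an 8 x 8 weight matrix g."""
    X = lambdas.reshape(len(lambdas), -1)           # one row per image
    g, *_ = np.linalg.lstsq(X, scores, rcond=None)  # minimize ||X g - scores||
    return g.reshape(lambdas.shape[1:])
```

Given more training images than weight entries (64 here), the fit is overdetermined and the least-squares solution is unique in the generic case.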
Since the coefficients of DCTSP were determined by optimizing on the LIVE database, to assess its performance fairly, DCTSP was also applied to other databases: the CSIQ database (Ref. 8) from Oklahoma State University and the TID2008 database (Ref. 9). There are 145, 150, and 100 blurred images in the LIVE, CSIQ, and TID2008 databases, respectively. Here, we use a five-parameter logistic function to map objective scores to subjective evaluations. To objectively evaluate the predictive performance of our metric, four indicators are computed: correlation coefficient (CC), root mean squared error (RMSE), Spearman rank-order correlation coefficient (SROCC), and outlier ratio (OR); the definitions of these indicators can be found in Ref. 10. The larger CC and SROCC and the smaller RMSE and OR, the better the metric's performance.
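The four indicators can be computed as sketched below. This omits the five-parameter logistic mapping applied before CC and RMSE, and it assumes the common definition of an outlier as a prediction error exceeding twice the per-image standard deviation of the subjective scores:

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

def evaluate_metric(predicted, subjective, subj_std=None):
    """Compute CC, SROCC, RMSE, and (optionally) OR for a quality metric.

    OR needs the per-image standard deviation of the subjective scores,
    which TID2008 does not provide, hence it is optional here."""
    predicted = np.asarray(predicted, dtype=np.float64)
    subjective = np.asarray(subjective, dtype=np.float64)
    out = {
        'CC': pearsonr(predicted, subjective)[0],    # linear correlation
        'SROCC': spearmanr(predicted, subjective)[0],  # rank correlation
        'RMSE': float(np.sqrt(np.mean((predicted - subjective) ** 2))),
    }
    if subj_std is not None:
        # outlier: prediction error beyond 2 subjective standard deviations
        errs = np.abs(predicted - subjective)
        out['OR'] = float(np.mean(errs > 2 * np.asarray(subj_std, dtype=np.float64)))
    return out
```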
We compare the proposed method with the metrics discussed in Sec. 1. Table 2 shows the performance of these no-reference blur measures, including DCTSP, on the LIVE, CSIQ, and TID2008 databases. From Table 2, DCTSP (Ref. 11) shows the best predictive performance among the blur measures, especially in correlation coefficient (CC). Although the coefficients of DCTSP were determined from the LIVE database, it shows good generalization to the other databases.
Table 2. Performance comparison of different image quality assessment methods on the LIVE, CSIQ, and TID2008 databases. Note that OR cannot be calculated for TID2008, since standard deviations of the subjective scores are not provided.