No training blind image quality assessment
7 March 2014
State-of-the-art blind image quality assessment (IQA) methods generally extract perceptual features from training images and feed them into a support vector machine (SVM) to learn a regression model, which is then used to predict the quality scores of test images. However, these methods require complicated training and learning, and their evaluation results are sensitive to image content and learning strategy. In this paper, two novel blind IQA metrics that require neither training nor learning are proposed.
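For context, the following is a minimal sketch of the conventional learning-based pipeline the paper contrasts against, using scikit-learn's SVR. The feature extractor and the random training data are hypothetical placeholders, not the features used by any particular published method:

```python
import numpy as np
from sklearn.svm import SVR

def extract_features(image):
    """Hypothetical placeholder feature: a coarse luminance histogram."""
    hist, _ = np.histogram(image, bins=32, range=(0, 255), density=True)
    return hist

# Dummy training data standing in for a database of subjectively rated images.
rng = np.random.default_rng(0)
train_images = [rng.integers(0, 256, (64, 64)) for _ in range(20)]
train_scores = rng.uniform(0, 100, 20)   # subjective scores (e.g., DMOS)

X_train = np.array([extract_features(img) for img in train_images])
model = SVR(kernel='rbf')                # learn the regression model
model.fit(X_train, train_scores)

# Predict the quality score of a test image with the learned model.
test_image = rng.integers(0, 256, (64, 64))
quality = model.predict(extract_features(test_image).reshape(1, -1))[0]
```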
The new methods extract perceptual features, i.e., the shape consistency of conditional histograms, from the joint histograms of neighboring divisive normalization transform (DNT) coefficients of distorted images, and then compare the length attribute of the extracted features with those of the reference and degraded images in the LIVE database. In the first method, a cluster center is found in the feature attribute space of the natural reference images, and the distance between the feature attribute of the distorted image and the cluster center is adopted as the quality label. The second method uses the feature attributes and subjective scores of all the images in the LIVE database to construct a dictionary, and the final quality score is calculated by interpolating the subjective scores of nearby words in the dictionary.
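A minimal sketch of the two training-free scoring schemes as described above, assuming each image has already been reduced to a feature-attribute vector; the DNT/conditional-histogram feature extraction itself is omitted, and the choice of k and the inverse-distance weighting are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def method1_score(distorted_attr, reference_attrs):
    """Distance to the cluster center of the natural reference images."""
    center = reference_attrs.mean(axis=0)            # cluster center in attribute space
    return np.linalg.norm(distorted_attr - center)   # larger distance = lower quality

def method2_score(distorted_attr, dict_attrs, dict_scores, k=5):
    """Interpolate the subjective scores of the k nearest dictionary words."""
    dists = np.linalg.norm(dict_attrs - distorted_attr, axis=1)
    nearest = np.argsort(dists)[:k]                  # k nearest words in the dictionary
    weights = 1.0 / (dists[nearest] + 1e-8)          # inverse-distance weighting
    return np.dot(weights, dict_scores[nearest]) / weights.sum()

# Example usage with random stand-in data:
rng = np.random.default_rng(0)
dict_attrs = rng.random((100, 3))                    # feature attributes of LIVE images
dict_scores = rng.uniform(0, 100, 100)               # their subjective scores
print(method2_score(rng.random(3), dict_attrs, dict_scores))
```

Both scores are explicit functions of the feature attributes, which is what allows the metrics to dispense with a learned regression model.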
Unlike traditional SVM-based blind IQA methods, the proposed metrics have explicit expressions that reflect the relationship between the perceptual features and image quality well. Experimental results on publicly available databases such as LIVE, CSIQ, and TID2008 show the effectiveness of the proposed methods, and their performance is fairly acceptable.
Ying Chu, Xuanqin Mou, Zhen Ji, "No training blind image quality assessment," Proc. SPIE 9023, Digital Photography X, 90230B (7 March 2014); https://doi.org/10.1117/12.2042461