The use of convolutional neural networks (CNNs) for general no-reference image quality assessment (NR-IQA) has seen tremendous growth in the research community. Most of these methods use patches cropped from the original images for training. For these patch-based methods, the ‘ground truth’ quality of patches is essential. In practice, these methods often take the quality score of an original image directly as the label of each of its patches. However, the perceptual quality of an image patch generally differs from that of the whole image. Thus, the noise in patch labels may hinder effective training of the CNN. In this paper, we propose a CNN with two branches for general no-reference image quality assessment. One branch of this model predicts the patch quality, and the other predicts the uncertainty, which denotes the degree to which the patch quality deviates from the image quality. Our model can be trained in an end-to-end manner by minimizing a joint loss. We tested our model on widely used image quality databases and showed that it performs better than or comparably with state-of-the-art NR-IQA algorithms.
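The abstract does not give the form of the joint loss, but a common way to let a predicted uncertainty down-weight noisy patch labels is a heteroscedastic (uncertainty-weighted) regression loss. The sketch below illustrates that general idea only; the function name and the exact formulation are assumptions, not the paper's loss.

```python
import math

def joint_loss(pred_quality, pred_log_var, image_score):
    """Heteroscedastic regression loss (an assumed formulation, not the
    paper's exact loss): a patch whose quality is predicted to deviate
    strongly from the image-level score gets a large variance, which
    down-weights the squared error on its noisy label, while the
    log-variance term discourages predicting large uncertainty everywhere."""
    var = math.exp(pred_log_var)
    return (pred_quality - image_score) ** 2 / (2.0 * var) + 0.5 * pred_log_var

# For the same label error, a patch flagged as uncertain is penalized less:
low_unc = joint_loss(3.0, math.log(0.5), 4.0)
high_unc = joint_loss(3.0, math.log(2.0), 4.0)
```

With this shape of loss, both branches can be trained end-to-end: the quality branch fits the (noisy) image-level labels, and the uncertainty branch learns which patches those labels poorly describe.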
We propose a multitask convolutional neural network (CNN) for general no-reference image quality assessment (NR-IQA). We decompose the task of rating image quality into two subtasks, namely distortion identification and distortion-level estimation, and then combine the results of the two subtasks to obtain a final image quality score. Unlike conventional multitask convolutional networks, wherein only the early layers are shared and the subsequent layers are different for each subtask, our model shares almost all the layers by integrating a dictionary into the CNN. Moreover, it is trained in an end-to-end manner, and all the parameters, including the weights of the convolutional layers and the codewords of the dictionary, are simultaneously learned from the loss function. We test our method on widely used image quality databases and show that its performance is comparable with that of state-of-the-art general-purpose NR-IQA algorithms.
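The abstract does not specify how the two subtask outputs are combined; one natural scheme is a probability-weighted average, where the identification branch's distortion-type probabilities weight the per-type quality estimates from the level-estimation branch. The sketch below shows that assumed scheme; the function name and inputs are illustrative.

```python
def combine_subtasks(dist_probs, per_type_scores):
    """Assumed combination rule (not stated in the abstract): the final
    quality score is the expectation of the per-distortion-type quality
    estimates under the predicted distortion-type distribution."""
    assert abs(sum(dist_probs) - 1.0) < 1e-6, "probabilities must sum to 1"
    return sum(p * s for p, s in zip(dist_probs, per_type_scores))

# Image judged mostly blurred (p=0.7), with blur-level score 2.0:
score = combine_subtasks([0.7, 0.2, 0.1], [2.0, 4.0, 5.0])
```

Under this scheme the gradient of the final-score loss flows into both branches, which is consistent with the end-to-end training described above.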
We propose a deep convolutional neural network (CNN) for general no-reference image quality assessment (NR-IQA), i.e., accurate prediction of image quality without a reference image. The proposed model consists of three components: a local feature extractor, which is a fully convolutional network; an encoding module with an inherent dictionary, which aggregates local features into a fixed-length, quality-aware global image representation; and a regression module that maps the representation to an image quality score. Our model can be trained in an end-to-end manner, and all of the parameters, including the weights of the convolutional layers, the dictionary, and the regression weights, are simultaneously learned from the loss function. In addition, the model can predict quality scores for input images of arbitrary sizes in a single step. We tested our method on commonly used image quality databases and showed that its performance is comparable with those of state-of-the-art general-purpose NR-IQA algorithms.
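The key property of such a dictionary-based encoder is that it maps any number of local features to a fixed-length vector, which is why the model accepts images of arbitrary size. A classic way to do this is VLAD-style residual aggregation; the sketch below illustrates that general technique under my own naming, not the paper's exact (learnable, differentiable) encoding module.

```python
def dictionary_encode(local_features, codewords):
    """VLAD-style aggregation (an illustrative stand-in for the paper's
    encoder): assign each local feature to its nearest codeword, sum the
    residuals per codeword, and flatten. The output length is always
    len(codewords) * feature_dim, regardless of how many local features
    the (arbitrarily sized) input image produced."""
    dim = len(codewords[0])
    agg = [[0.0] * dim for _ in codewords]
    for f in local_features:
        # Nearest codeword by squared Euclidean distance.
        k = min(range(len(codewords)),
                key=lambda i: sum((a - b) ** 2 for a, b in zip(f, codewords[i])))
        for d in range(dim):
            agg[k][d] += f[d] - codewords[k][d]
    return [v for row in agg for v in row]

# Three local features, a dictionary of two 2-D codewords:
feats = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]]
words = [[1.0, 0.0], [0.0, 1.0]]
rep = dictionary_encode(feats, words)  # fixed length: 2 codewords * 2 dims
```

In the proposed model the codewords themselves are parameters learned jointly with the convolutional and regression weights, whereas classic VLAD uses a dictionary fixed by clustering beforehand.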