Image quality assessment (IQA) is a difficult field to master, as it attempts to measure the quality of an image with reference to the complex human visual system. IQA research follows three dominant strands: full-reference, reduced-reference, and no-reference image quality assessment. No-reference IQA is the hardest to achieve, because the reference images needed to determine the quality of the given images are not available. In one of our previous papers, we quantified no-reference IQA using state-of-the-art multi-task neural networks, particularly VGG-16 and shallow neural networks, and achieved good classification accuracy for most distortions. However, one drawback of those networks was poor classification accuracy for JPEG2000-compressed images, which were incorrectly classified as blurry or noisy. In this paper, we classify compressed images more accurately using residual neural networks (ResNets). These deep learning models are built from micro-architecture modules and are task-focused entities, each determining the distortion type and distortion level of an artifact present in the image. The test images were obtained from the LIVE II, CSIQ, and TID2013 databases for comparison with previous work. In contrast to our previous approach, where training was limited to one specific distortion at a time, we train the collection of ResNets on all the distortion types present in the test databases. The images are preprocessed using local contrast normalization and global contrast normalization. All the hyper-parameters of the ResNets collection, such as activation functions, dropout regularization, and optimizers, are tuned to produce optimal classification accuracy.
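The local and global contrast normalization preprocessing mentioned above can be sketched as follows. This is a minimal NumPy/SciPy illustration; the Gaussian window width `sigma`, the epsilon term, and the function names are our illustrative assumptions, not the exact implementation used in the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def local_contrast_normalize(img, sigma=3.0, eps=1e-8):
    """Subtract a Gaussian-weighted local mean, then divide by the
    Gaussian-weighted local standard deviation (per pixel)."""
    img = img.astype(np.float64)
    local_mean = gaussian_filter(img, sigma)
    centered = img - local_mean
    local_var = gaussian_filter(centered ** 2, sigma)
    local_std = np.sqrt(np.maximum(local_var, 0.0))
    return centered / (local_std + eps)

def global_contrast_normalize(img, eps=1e-8):
    """Zero-mean, unit-variance normalization over the whole image."""
    img = img.astype(np.float64)
    return (img - img.mean()) / (img.std() + eps)
```

LCN emphasizes local structure by removing smooth luminance variation, while GCN puts all training images on a comparable intensity scale before they enter the network.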
The results are evaluated using PLCC, SROCC, and MSE, and compared against our previous results; the ResNets collection achieves high linear correlation.
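The evaluation metrics named above are standard and can be computed as follows. This is a minimal sketch using SciPy's correlation routines; the function name `evaluate` and the variable names are illustrative assumptions.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

def evaluate(predicted, subjective):
    """Compute PLCC (Pearson linear correlation), SROCC (Spearman
    rank-order correlation), and MSE between predicted quality scores
    and subjective scores such as DMOS."""
    predicted = np.asarray(predicted, dtype=np.float64)
    subjective = np.asarray(subjective, dtype=np.float64)
    plcc, _ = pearsonr(predicted, subjective)
    srocc, _ = spearmanr(predicted, subjective)
    mse = float(np.mean((predicted - subjective) ** 2))
    return plcc, srocc, mse
```

PLCC measures prediction accuracy after a linear relationship is assumed, SROCC measures prediction monotonicity independent of any fitting, and MSE measures absolute error against the subjective scores.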
A no-reference image quality assessment technique measures the visual distortion in an image without any reference image data. Image distortions can arise during the acquisition, compression, or transmission of digital images. Among the many types of image distortion, JPEG and JPEG2000 compression, additive white noise, Gaussian blur, and fast fading are the most common. A typical real-world image may contain multiple types of distortion. Our aim is to determine the different types of distortion present in an image and to find the total distortion levels, using a novel architecture built from multiple deep convolutional neural networks (MDNN). The proposed model classifies the different types of distortion present in an image, thereby achieving both objectives. Initially, local contrast normalization (LCN) is performed on the images, which are then fed into the deep neural network for training. The images are processed by a convolution-based distortion classifier, which estimates the probability of each distortion type. Next, the distortion quality is predicted for each class. These probabilities are fused using a weighted average-pooling algorithm to obtain a single regressor output. We also experimented with different parameters of the neural network, including optimizers (Adam, Adadelta, SGD, RMSprop) and activation functions (ReLU, softmax, sigmoid, and linear). The LIVE II database is used for training, since it contains five of the major distortion types. Cross-dataset validation is performed on the CSIQ and TID2008 databases. The results were evaluated using different correlation coefficients (SROCC, PLCC), and in the tests conducted we achieved linear correlation with the differential mean opinion scores (DMOS) for each of these coefficients.
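The fusion step described above, where per-class distortion probabilities weight per-class quality predictions to yield a single regressor output, can be sketched as follows. This is a minimal interpretation of weighted average pooling under our own assumptions; the function name and the normalization guard are illustrative, not the paper's exact formulation.

```python
import numpy as np

def fuse_quality(class_probs, class_scores):
    """Fuse per-distortion-class quality scores into one scalar output:
    each class's predicted quality is weighted by the classifier's
    probability for that class, then the weighted values are averaged.
    Dividing by the probability sum guards against unnormalized inputs."""
    p = np.asarray(class_probs, dtype=np.float64)
    q = np.asarray(class_scores, dtype=np.float64)
    return float(np.sum(p * q) / np.sum(p))
```

For example, if the classifier assigns equal probability to two distortion classes whose quality heads predict 10 and 20, the fused output is their average, 15; a confident classification collapses the output toward the corresponding class's quality score.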