We present an approach to predicting overall HDR display quality as a function of key HDR display parameters. We first performed subjective experiments on a high-quality HDR display, exploring five key HDR display parameters: maximum luminance, minimum luminance, color gamut, bit-depth, and local contrast. Subjects rated overall quality for different combinations of these display parameters.
We explored two models: a physical model based solely on physically measured display characteristics, and a perceptual model that transforms the physical parameters using models of the human visual system. For the perceptual model, we use a family of metrics based on a recently published color volume model (ICtCp), which combines the PQ luminance non-linearity (SMPTE ST 2084) with an LMS-based opponent color representation, as well as an estimate of the display point spread function. To predict overall visual quality, we apply linear regression and machine learning techniques such as Multilayer Perceptron, RBF, and SVM networks. We quantify performance using RMSE and the Pearson and Spearman correlation coefficients. We found that the perceptual model predicts subjective quality better than the physical model, and that SVM regression outperforms linear regression. The significance and contribution of each display parameter were investigated. In addition, we found that combined parameters such as contrast do not improve prediction. Traditional perceptual models were also evaluated, and we found that models based on the PQ non-linearity performed better.
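The PQ non-linearity at the heart of the perceptual model maps absolute luminance to a perceptually uniform code value. A minimal sketch of this encoding, using the standard ST 2084 constants (the function name `pq_encode` is our own; the abstract does not specify an implementation):

```python
import numpy as np

# SMPTE ST 2084 (PQ) constants, expressed as exact rationals from the spec.
M1 = 2610 / 16384
M2 = 2523 / 4096 * 128
C1 = 3424 / 4096
C2 = 2413 / 4096 * 32
C3 = 2392 / 4096 * 32

def pq_encode(luminance):
    """Encode absolute luminance (0..10000 cd/m^2) into a PQ signal in [0, 1]."""
    y = np.clip(np.asarray(luminance, dtype=float) / 10000.0, 0.0, 1.0)
    return ((C1 + C2 * y**M1) / (1 + C3 * y**M1)) ** M2
```

For example, the PQ curve maps the reference peak of 10,000 cd/m^2 to 1.0 and 100 cd/m^2 (typical SDR peak) to roughly 0.51, illustrating how it allocates code values according to perceived brightness rather than linearly.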
We present a novel technique for super-resolution of facial images. The method is patch-based: for each low-resolution input patch, we seek the best-matching patches from a database of face images using Coherency Sensitive Hashing, which relies on hashing to combine image coherence cues and image appearance cues to efficiently find matching patches. This differs from existing methods that apply a high-pass filter to input patches to extract local features. We then take a weighted sum of the best-matching patches to produce the enhanced image. Comparisons with state-of-the-art techniques show that our approach performs better in terms of both visual quality and reconstruction error.
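The final reconstruction step described above, blending the best-matching high-resolution patches, can be sketched as follows. This is an illustrative simplification: a brute-force nearest-neighbour search stands in for Coherency Sensitive Hashing, and the Gaussian weighting bandwidth `sigma` and the function names are our own assumptions, not details taken from the paper.

```python
import numpy as np

def blend_patches(query_lr, candidates_lr, candidates_hr, k=5, sigma=10.0):
    """Weighted sum of the k high-res patches whose low-res versions best match.

    query_lr      : low-resolution input patch (2-D array)
    candidates_lr : list of low-resolution database patches
    candidates_hr : list of the corresponding high-resolution patches
    """
    q = query_lr.ravel().astype(float)
    # Brute-force L2 matching (a stand-in for the hashing-based search).
    dists = np.array([np.linalg.norm(c.ravel().astype(float) - q)
                      for c in candidates_lr])
    best = np.argsort(dists)[:k]                  # indices of the k closest matches
    w = np.exp(-dists[best]**2 / (2 * sigma**2))  # Gaussian distance weights
    w /= w.sum()                                  # normalize to sum to 1
    hr = np.stack([candidates_hr[i].astype(float) for i in best])
    return np.tensordot(w, hr, axes=1)            # weighted average patch
```

Running this over every input patch and compositing the blended outputs (typically with overlap averaging) yields the enhanced image.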