In this paper we propose a new dataset for the evaluation of image and video quality metrics, with an emphasis on computer graphics applications. The dataset includes LDR-LDR, HDR-HDR, and HDR-LDR reference-test video pairs with various types of distortions. We also present an example evaluation of recent image and video quality metrics that have been applied in computer graphics. In this evaluation, all video sequences were shown on an HDR display, and subjects were asked to mark the regions where they saw differences between the test and reference videos. As a result, we capture not only the magnitude of distortions but also their spatial distribution. This has two advantages: the local quality information is valuable for computer graphics applications, and the subjectively obtained distortion maps are directly comparable to the maps predicted by quality metrics.
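One simple way such subjectively obtained distortion maps could be compared against a metric's predicted maps is a per-pixel correlation. The sketch below is purely illustrative and is not the paper's evaluation protocol; it assumes both maps are already aligned arrays of the same shape, and the function name `map_agreement` is our own:

```python
import numpy as np

def map_agreement(subjective, predicted):
    """Pearson correlation between a subjectively marked distortion map
    and a metric's predicted distortion map.

    Illustrative only: the actual comparison protocol used in the paper
    may differ (e.g. ROC analysis over marked regions).
    """
    s = subjective.ravel().astype(float)
    p = predicted.ravel().astype(float)
    s -= s.mean()
    p -= p.mean()
    denom = np.sqrt((s @ s) * (p @ p))
    # A constant map carries no spatial information; report 0 agreement.
    return float(s @ p / denom) if denom > 0 else 0.0
```

A correlation of 1.0 indicates the metric's predicted distortion map matches the spatial distribution of subject markings exactly; values near 0 indicate no spatial agreement.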
In this work we simulate the effect over time of the human eye's maladaptation on visual perception through a supra-threshold contrast perception model that incorporates adaptation mechanisms. Specifically, we attempt to visualize maladapted vision on a display device. Given the scene luminance, the model computes a measure of perceived multi-scale contrast that accounts for spatially and temporally varying contrast sensitivity in the maladapted state; this measure is then processed by the inverse model and mapped to the target display's luminance range under the assumption of perfect adaptation. Our system simulates the effect of maladaptation locally, modeling not only the uniform decrease in contrast sensitivity across all frequencies but also the shift of peak spatial-frequency sensitivity in maladapted vision. Through our GPU implementation we demonstrate, at interactive rates, the loss of visible scene detail caused by maladaptation over time.
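The forward-then-inverse pipeline described above can be sketched in a heavily simplified form: decompose the log-luminance image into spatial frequencies, attenuate each frequency by the ratio of maladapted to fully adapted contrast sensitivity, and reconstruct for the display. The parametric CSF below (with its peak shift and uniform gain loss at low adaptation luminance) is a hypothetical stand-in, not the paper's model, and the function names `csf` and `simulate_maladaptation` are our own:

```python
import numpy as np

def csf(freq, adapt_lum):
    # Hypothetical contrast sensitivity function: the peak shifts toward
    # lower spatial frequencies and overall gain drops as the adaptation
    # luminance decreases (exponents chosen for illustration only).
    peak = 4.0 * (adapt_lum / 100.0) ** 0.15   # peak in ~cycles/degree
    gain = (adapt_lum / 100.0) ** 0.3          # uniform sensitivity loss
    f = np.maximum(freq, 1e-3)
    return gain * (f / peak) * np.exp(1.0 - f / peak)

def simulate_maladaptation(img, adapt_lum, display_lum=100.0):
    # Frequency decomposition of the log-luminance image via FFT.
    F = np.fft.fft2(np.log(np.maximum(img, 1e-6)))
    fy = np.fft.fftfreq(img.shape[0])
    fx = np.fft.fftfreq(img.shape[1])
    FY, FX = np.meshgrid(fy, fx, indexing="ij")
    freq = np.hypot(FY, FX) * 60.0  # assume ~60 pixels per visual degree
    # Attenuate each band by maladapted vs. fully adapted sensitivity;
    # clip to [0, 1] so we only simulate visibility loss, not gain.
    ratio = np.clip(csf(freq, adapt_lum) /
                    np.maximum(csf(freq, display_lum), 1e-6), 0.0, 1.0)
    ratio[0, 0] = 1.0  # preserve mean (DC) log-luminance
    return np.exp(np.fft.ifft2(F * ratio).real)
```

Under a dim adaptation luminance the high-frequency bands are suppressed far more strongly than under full adaptation, so fine scene detail visibly fades, which is the qualitative behavior the abstract describes; the real system additionally varies this locally and over time on the GPU.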