Digital restoration of film content of historical value is crucial for the preservation of cultural heritage. Digital restoration is not only a relevant application area for the video processing technologies developed in the computer graphics literature, but also involves a multitude of unresolved research challenges. Currently, the digital restoration workflow is highly labor intensive and relies heavily on expert knowledge. We revisit some key steps of this workflow and propose semiautomatic methods for performing them. To this end, we build upon state-of-the-art video processing techniques by adding the components necessary for (i) restoring the chemically degraded colors of the film stock, (ii) removing excessive film grain through spatiotemporal filtering, and (iii) recovering contrast by transferring it from the negative film stock to the positive. We show that our tools produce compelling results when applied individually, and significantly improve the degraded input content when applied in concert. Building on a conceptual framework of film restoration ensures the best possible combination of tools and use of the available materials.
In this paper we propose a new dataset for the evaluation of image and video quality metrics, with an emphasis on applications in computer graphics. The proposed dataset includes LDR-LDR, HDR-HDR, and HDR-LDR reference-test video pairs with various types of distortions. We also present an example evaluation of recent image and video quality metrics that have been applied in the field of computer graphics. In this evaluation, all video sequences were shown on an HDR display, and subjects were asked to mark the regions where they saw differences between the test and reference videos. As a result, we capture not only the magnitude of the distortions but also their spatial distribution. This has two advantages: on the one hand, the local quality information is valuable for computer graphics applications; on the other hand, the subjectively obtained distortion maps are directly comparable to the maps predicted by quality metrics.
In this work we simulate the effect of the human visual system's maladaptation on perception over time through a supra-threshold contrast perception model that comprises adaptation mechanisms. Specifically, we attempt to visualize maladapted vision on a display device. Given the scene luminance, the model computes a measure of perceived multi-scale contrast by taking into account the spatially and temporally varying contrast sensitivity in a maladapted state; this measure is then processed by the inverse model and mapped to the luminance range of the target display, assuming perfect adaptation to that display. Our system simulates the effect of maladaptation locally, and models both the shift of the peak spatial frequency of contrast sensitivity in maladapted vision and the uniform decrease in contrast sensitivity across all frequencies. Through our GPU implementation we demonstrate, at interactive rates, the loss of visible scene detail caused by maladaptation over time.
Many quality metrics take gamma-corrected images as input and assume that the pixel code values are scaled in a perceptually uniform manner. Although this is a valid assumption for darker displays operating in the luminance range typical of CRT displays (from 0.1 to 80 cd/m²), it is no longer true for much brighter LCD displays (typically up to 500 cd/m²), plasma displays (small regions up to 1000 cd/m²), and HDR displays (up to 3000 cd/m²). Distortions that are barely visible on dark displays become clearly noticeable when shown on much brighter displays. To estimate the quality of images shown on bright displays, we propose a straightforward extension to popular quality metrics, such as PSNR and SSIM, that makes them capable of handling all luminance levels visible to the human eye without altering their results at typical CRT display luminance levels. Such extended
quality metrics can be used to estimate quality of high dynamic range (HDR) images as well as account for
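The extension can be sketched as a perceptually uniform re-encoding of absolute luminance applied before a standard metric. In the sketch below, a simple logarithmic curve anchored to the 0.1-80 cd/m² CRT range stands in for the fitted perceptually uniform transform; the function names and anchoring constants are illustrative assumptions, not the exact transform proposed in the paper:

```python
import numpy as np

def pu_encode(luminance):
    """Map absolute luminance (cd/m^2) to approximately perceptually
    uniform code values. A log stand-in (assumption) for the fitted
    PU curve, anchored so 0.1-80 cd/m^2 spans roughly 0-255,
    mimicking gamma-corrected CRT code values."""
    v = np.log10(np.clip(luminance, 1e-5, None))
    lo, hi = np.log10(0.1), np.log10(80.0)
    return 255.0 * (v - lo) / (hi - lo)

def pu_psnr(ref_lum, test_lum, peak=255.0):
    """PSNR computed on PU-encoded absolute luminance."""
    r, t = pu_encode(ref_lum), pu_encode(test_lum)
    mse = np.mean((r - t) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```

Because the encoding operates on absolute luminance rather than display-referred code values, the same comparison applies unchanged to HDR content; SSIM can be extended the same way by feeding it the PU-encoded images.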