A combined achromatic and chromatic contrast metric for digital images and video is presented in this paper. Our work aims to tune any parametric rendering algorithm automatically by computing how much detail an observer perceives in a rendered scene. The contrast metric is based on contrast analysis, in the spatial domain, of image sub-bands constructed by pyramidal decomposition of the image. The proposed metric is the sum of the perceptual contrast of every pixel in the image at different detail levels, corresponding to different viewing distances. The novel metric shows high correlation with subjective experiments. Important applications include finding the optimal parameter set of any image rendering or contrast enhancement technique, or the auto-exposure of an image capture device.
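The pooling idea described above can be sketched as follows. This is a minimal illustration, not the paper's actual metric: the box blur standing in for the pyramid filter, the Weber-like contrast ratio, and the equal weighting of levels are all assumptions made for brevity.

```python
import numpy as np

def _blur(img):
    # 3x3 box blur with edge padding (stands in for a Gaussian pyramid filter)
    h, w = img.shape
    p = np.pad(img, 1, mode="edge")
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def multiscale_contrast(gray, levels=4):
    """Sum per-pixel band contrast over pyramid levels; each coarser
    level loosely corresponds to a larger viewing distance."""
    img = np.asarray(gray, dtype=float)
    total = 0.0
    for _ in range(levels):
        if min(img.shape) < 2:
            break
        low = _blur(img)
        band_contrast = np.abs(img - low) / (low + 1e-6)  # Weber-like ratio
        total += band_contrast.mean()  # pool contrast over all pixels
        img = low[::2, ::2]            # downsample to the next pyramid level
    return total
```

A uniform image yields zero contrast under this sketch, while a high-frequency pattern such as a checkerboard yields a large value, which matches the intuition that the metric counts perceivable detail.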
We want to integrate colourfulness into an image quality evaluation framework. This framework is meant to evaluate the perceptual impact of a compression algorithm or an error-prone communication channel on the quality of an image. The image might go through various enhancement or compression algorithms, resulting in a different -- but not necessarily worse -- image. In other words, we measure quality, not fidelity to the original picture.
While modern colour appearance models can predict the perceived colourfulness of simple patches on uniform backgrounds, there is no agreement on how to measure the overall colourfulness of a picture of a natural scene. We quantify colourfulness in natural images to perceptually qualify the effect that processing or coding has on colour. We set up a psychophysical category-scaling experiment and asked observers to rate images using 7 categories of colourfulness. We then fit a metric to the results and obtain a correlation of over 90% with the experimental data. The metric is intended to run in real time on video streams. Issues related to hue are not addressed in this paper.
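A metric of the kind fitted here can be computed from simple opponent-colour statistics. The sketch below uses the red-green and yellow-blue channel definitions and the 0.3 weight commonly associated with this line of work; treat the exact coefficients as an assumption rather than the values fitted in the paper.

```python
import numpy as np

def colourfulness(rgb):
    """Opponent-colour colourfulness statistic for an RGB image
    (H x W x 3). Cheap enough to run per-frame on a video stream."""
    r, g, b = (rgb[..., i].astype(float) for i in range(3))
    rg = r - g                   # red-green opponent axis
    yb = 0.5 * (r + g) - b       # yellow-blue opponent axis
    sigma = np.hypot(rg.std(), yb.std())    # spread of chroma
    mu = np.hypot(rg.mean(), yb.mean())     # mean chroma offset
    return sigma + 0.3 * mu
```

A greyscale image (r = g = b) scores zero, and images with strong, varied chroma score high, which is the behaviour a colourfulness rating should reproduce.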
Nowadays, the ability to create panoramic photographs is included with most commercial digital cameras. The principle is to shoot several pictures and stitch them together to build a panorama. To ensure the quality of the final image, the pictures have to be perfectly aligned and their colors should match. While image alignment has received a lot of attention from the computer vision community, the color mismatch has often been ignored, or merely masked by smooth transitions from one picture to the next. This paper presents a method to simultaneously estimate the alignment of the pictures and the color transformation between them. By estimating the color transformation from the scene to the pixels, the method removes the color mismatch between the images and thus yields better image quality.
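The color-matching half of the problem can be illustrated in isolation. The paper estimates a richer scene-to-pixel model jointly with alignment; the sketch below only fits a hypothetical per-channel gain/offset between corresponding pixels taken from the overlap of two already-aligned photos, by least squares.

```python
import numpy as np

def fit_color_transfer(src, dst):
    """Fit dst ~ a * src + b per channel from N corresponding pixels
    (both arrays N x 3). Returns a list of (gain, offset) pairs."""
    params = []
    for c in range(3):
        A = np.stack([src[:, c], np.ones(len(src))], axis=1)
        gain, offset = np.linalg.lstsq(A, dst[:, c], rcond=None)[0]
        params.append((gain, offset))
    return params

def apply_color_transfer(img, params):
    """Map one image's colors into the other's color space."""
    out = np.empty(img.shape, dtype=float)
    for c, (gain, offset) in enumerate(params):
        out[..., c] = gain * img[..., c] + offset
    return out
```

Applying the fitted transform to one image before blending removes the visible seam that a simple cross-fade would only hide.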