This article deals with noisy and variable-size color textures, and with quantization methods and how such methods affect the final classification results. The method we use to analyze the robustness of texture parameters is an auto-classification of modified textures. Texture parameters are computed for a set of original texture samples and stored in a database; one such database is created for each quantization method. Textures from the set of original samples are then modified, possibly quantized, and classified according to the classes determined from the precomputed database. A classification is considered incorrect if the original texture is not retrieved. This method is tested with three texture parameters (autocorrelation matrix, co-occurrence matrix, and directional local extrema) and three quantization methods (principal component analysis, color cube slicing, and RGB binary space slicing). The last two methods compute only three RGB bands but could be extended to more. Our results show that, with or without quantization, the autocorrelation matrix parameter is less sensitive to noise and to scaling than the two other texture parameters tested. This implies that the autocorrelation matrix should probably be preferred for texture analysis under uncontrolled conditions, typically industrial applications where images may be noisy. Our results also show that PCA quantization does not change the results, whereas the two other quantization methods change them dramatically.
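The auto-classification scheme described above can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the function names `autocorrelation_feature` and `classify` are hypothetical, and the feature operates on a single band with a small shift window for simplicity.

```python
import numpy as np

def autocorrelation_feature(img, max_shift=4):
    """Normalized autocorrelation values for small (dy, dx) shifts of one
    image band, stacked into a texture feature vector."""
    img = img.astype(float)
    img = img - img.mean()           # remove mean so features reflect texture only
    h, w = img.shape
    denom = (img * img).sum()        # zero-shift energy, used for normalization
    feats = []
    for dy in range(max_shift + 1):
        for dx in range(max_shift + 1):
            a = img[dy:, dx:]
            b = img[:h - dy, :w - dx]
            feats.append((a * b).sum() / denom)
    return np.array(feats)

def classify(sample_feat, database):
    """Nearest-neighbour auto-classification: return the name of the database
    entry closest to the sample's feature vector. A classification counts as
    correct if this is the sample's own original texture."""
    dists = {name: np.linalg.norm(sample_feat - feat)
             for name, feat in database.items()}
    return min(dists, key=dists.get)
```

In the protocol above, the database would hold one feature vector per original texture sample (per quantization method), and modified versions of those samples would be passed through `classify` to measure the error rate.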
Modern digital imaging workflows typically involve a large number of different imaging technologies and media. In order to assure the quality of such workflows, there is a need to quantify how reproduced images have been changed by the reproduction process, and how much these changes are perceived by the human eye. The goal of this study is to investigate whether current color image difference formulae can be used to this end, specifically with regards to the image degradations induced by color gamut mapping.
We have applied image difference formulae based on CIELAB, S-CIELAB, and iCAM to a set of images, which have been processed by several state-of-the-art color gamut mapping algorithms. The images have also been evaluated by psychophysical experiments on a CRT monitor. We have not found any statistically significant correlation between the calculated color image differences and the visual evaluations.
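As a reference point, the simplest of these formulae, a mean per-pixel CIE 1976 ΔE*ab over two images already expressed in CIELAB coordinates, can be sketched as below. This is only an illustrative baseline with a hypothetical function name; the formulae evaluated in the study (S-CIELAB, iCAM) additionally apply spatial filtering and appearance modelling before the difference is taken.

```python
import numpy as np

def mean_delta_e_ab(lab1, lab2):
    """Mean per-pixel CIE 1976 Delta E*ab between two images given as
    (H, W, 3) arrays of CIELAB (L*, a*, b*) values."""
    diff = lab1.astype(float) - lab2.astype(float)
    # Euclidean distance in CIELAB per pixel, then the image-wide mean
    return float(np.sqrt((diff ** 2).sum(axis=-1)).mean())
```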
We have examined the experimental results carefully in order to understand the poor performance of the color difference calculations, and to identify possible strategies for improving the formulae. For example, S-CIELAB and iCAM were designed to take into account factors such as the spatial properties of human vision, but there might be other important factors to consider when quantifying image quality. Potential factors include background/texture/contrast sensitivity effects, human viewing behaviour/areas of interest, and memory colors.
If digital cameras and scanners are to be used for colour measurement it is necessary to correct their device responses to device-independent colour co-ordinates, such as CIE tristimulus values. In order to do this it is sufficient to recover the underlying spectral reflectance functions from a scene at each pixel. Traditionally, linear methods are used to transform device responses to reflectance values. Recently, however, several non-linear methods have been applied to this problem, including generic methods such as neural networks, more novel approaches such as sub-manifold approximation and approaches based upon quadratic programming.
In this paper we apply polynomial models to the recovery of reflectance. We perform a number of simulations with both trichromatic and multispectral imaging systems to determine their accuracy and generalisation performance. We find that, although higher-order polynomials seem to be superior to linear methods in terms of accuracy, the generalisation performance of the two methods is approximately equivalent. This suggests that the advantage of polynomial models may only be seen when the training and test data are statistically similar. Furthermore, the experiments with multispectral systems suggest that the improvement of high-order polynomials on training data is reduced when the number of sensors is increased.
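The polynomial models discussed above can be sketched as a least-squares regression on an expanded response vector. This is an illustrative reconstruction under assumed function names (`expand`, `fit`, `predict`), not the paper's implementation; note that order 1 reduces to the conventional linear transform from device responses to reflectances.

```python
from itertools import combinations_with_replacement

import numpy as np

def expand(R, order):
    """Polynomial expansion of device responses R (N x C): a constant term,
    then all monomials of the channels up to the given order (including
    cross terms such as r*g). Order 1 gives the usual affine/linear model."""
    cols = [np.ones(len(R))]
    for degree in range(1, order + 1):
        for idx in combinations_with_replacement(range(R.shape[1]), degree):
            cols.append(np.prod(R[:, idx], axis=1))
    return np.stack(cols, axis=1)

def fit(R_train, S_train, order):
    """Least-squares polynomial mapping from responses (N x C) to
    reflectances (N x B), found by solving expand(R) @ M = S."""
    X = expand(R_train, order)
    M, *_ = np.linalg.lstsq(X, S_train, rcond=None)
    return M

def predict(M, R, order):
    """Recover reflectances from new device responses with a fitted model."""
    return expand(R, order) @ M
```

Generalisation would then be assessed by fitting on one set of reflectance/response pairs and evaluating `predict` on a statistically different set, which is where the paper finds the polynomial advantage shrinking.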