We propose a robust photometric stereo method using a structured arrangement of light sources. In this arrangement, light sources are positioned on a planar grid and form a set of collinear combinations. Shadow pixels are detected by adaptive thresholding. Specular-highlight and diffuse pixels are distinguished by the intensity deviations within the collinear combinations, which the special arrangement of light sources makes possible. The highlight detection problem is cast as a pattern classification problem and solved with support vector machine classifiers. To account for possible misclassification of highlight pixels, ℓ1 regularization is further employed in normal map estimation. Experimental results on both synthetic and real-world scenes verify that the proposed method robustly recovers surface normal maps under heavy specular reflection and outperforms state-of-the-art techniques.
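As a sketch of the robust normal-estimation step, an ℓ1-style fit can be approximated per pixel by iteratively reweighted least squares (IRLS), which downweights equations with large residuals (e.g., misclassified highlight pixels). The function names and the IRLS scheme are illustrative stand-ins, not the paper's exact ℓ1-regularized estimator:

```python
import numpy as np

def solve_normal_l1(L, I, iters=50, eps=1e-6):
    """Estimate one surface normal from per-pixel intensities I observed
    under light directions L (one direction per row) by minimizing the
    l1 residual with iteratively reweighted least squares (IRLS)."""
    n = np.linalg.lstsq(L, I, rcond=None)[0]           # l2 initialization
    for _ in range(iters):
        r = np.abs(I - L @ n)                          # per-light residuals
        w = 1.0 / np.maximum(r, eps)                   # IRLS weights ~ 1/|r|
        n = np.linalg.solve(L.T @ (w[:, None] * L), L.T @ (w * I))
    return n / np.linalg.norm(n)                       # unit normal
```

With six lights and one grossly corrupted intensity (simulating a specular highlight), the IRLS solution stays close to the true normal, whereas a plain least-squares fit would be biased by the outlier.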
In this paper, we propose a novel post-alignment method that is both simple and effective for stereo video postproduction. A low-distortion algorithm for rectifying the epipolar lines is first introduced. Unlike traditional methods, which map the epipoles to (1, 0, 0)^T directly, our method proceeds in two steps: 1) mapping the epipoles to points at infinity; 2) aligning the epipolar lines with the x-axis. More specifically, by exploiting the fact that commonly available stereoscopic movies are nearly aligned, our method keeps one of the stereo images unchanged and applies the rectification only to the other image. Besides epipolar non-parallel distortion, disparity distortion is also an important issue for stereoscopic movies. We propose a new constraint for stereoscopic video alignment such that the variation of disparities is also minimized. Experimental results demonstrate that our method obtains better visual effects than state-of-the-art methods.
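The two-step idea can be sketched as a homography construction: first a projective map sends the epipole to a point at infinity, then a rotation aligns that direction (and hence all epipolar lines) with the x-axis. This is a Hartley-style sketch under the assumption of a known inhomogeneous epipole `(ex, ey)`, not the paper's low-distortion algorithm:

```python
import numpy as np

def two_step_rectify_H(ex, ey):
    """Homography sending the epipole (ex, ey, 1) to (1, 0, 0)^T in two
    steps: (1) map the epipole to infinity; (2) rotate the resulting
    direction onto the x-axis so epipolar lines become horizontal."""
    r2 = ex * ex + ey * ey
    # Step 1: third row is chosen so that it annihilates (ex, ey, 1),
    # i.e., the epipole's homogeneous coordinate becomes zero.
    G = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0],
                  [-ex / r2, -ey / r2, 1.0]])
    # Step 2: rotate the direction (ex, ey, 0) onto the x-axis.
    phi = np.arctan2(ey, ex)
    c, s = np.cos(phi), np.sin(phi)
    R = np.array([[c, s, 0.0],
                  [-s, c, 0.0],
                  [0.0, 0.0, 1.0]])
    return R @ G
```

Applying the resulting homography to the epipole yields a point of the form (r, 0, 0), i.e., the ideal point of the x-axis.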
This article [J. Electron. Imaging 22(2), 023012 (2013)] was originally published online on 10 May 2013 with an error on page 4. In several instances, the symbol H was omitted from the following passage:
With the rapid development of multispectral imaging techniques, it is desirable that spectral color be accurately reproduced using desktop color printers. However, due to the specific spectral gamuts determined by printer inks, it is almost impossible to exactly replicate reflectance spectra from other media. In addition, since ink densities cannot be controlled individually, desktop printers can only be treated as red-green-blue devices, making physical models infeasible. We propose a locally adaptive method, consisting of both forward and inverse models, for desktop printer characterization. In the forward model, we establish an adaptive transform between control values and reflectance spectra on individual cellular subsets using weighted polynomial regression. In the inverse model, we first determine the candidate space of control values based on global inverse regression and then compute the optimal control values by minimizing the color difference between the actual spectrum and the spectrum predicted by the forward transform. Experimental results show that the proposed method reproduces colors accurately for different media under multiple illuminants.
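The forward model can be sketched as weighted polynomial regression from control values to reflectance spectra, with samples near a cell center receiving larger weights. The Gaussian weighting, the second-order expansion, and all names here are illustrative assumptions, not the paper's exact cellular scheme:

```python
import numpy as np

def poly_expand(rgb):
    """Second-order polynomial expansion of printer control values."""
    r, g, b = rgb
    return np.array([1, r, g, b, r*g, r*b, g*b, r*r, g*g, b*b])

def fit_forward_model(controls, spectra, center, sigma=0.2):
    """Weighted polynomial regression mapping control values (N, 3) to
    reflectance spectra (N, n_bands), emphasizing samples near `center`
    via Gaussian weights; returns a (10, n_bands) coefficient matrix."""
    X = np.array([poly_expand(c) for c in controls])
    d2 = np.sum((np.asarray(controls) - center) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))               # locality weights
    A = X.T @ (w[:, None] * X) + 1e-8 * np.eye(X.shape[1])
    return np.linalg.solve(A, X.T @ (w[:, None] * spectra))
```

A predicted spectrum for a new control value is then `poly_expand(c) @ M`; fitting one such model per cellular subset gives the locally adaptive transform.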
Light source calibration is an important issue in many computer vision fields such as photometric stereo and shape from shading. Spheres, with either diffuse or specular reflections, are frequently deployed as calibration objects to recover the direction and intensity of the light source from images. We present a novel method for light source calibration using a planar mirror with a chessboard pattern and a diffuse region. The light direction can be accurately estimated from one mirror orientation by recovering the normal direction of the mirror plane. The location and intensity of the light source can be further estimated if two mirror orientations are used. Experimental results show that the calibration accuracy of the proposed method is much higher than that of traditional sphere-based techniques and that the method can offer improved three-dimensional reconstruction in photometric stereo.
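The geometric core of mirror-based calibration is the mirror-reflection equation: with the mirror's unit normal recovered from the chessboard pose, the light direction is the viewing direction reflected about that normal, r = 2(n·v)n − v. This is only a sketch of that single step, not the full calibration method:

```python
import numpy as np

def reflect(v, n):
    """Reflect direction v about the unit mirror normal n.
    If v is the viewing direction toward the mirrored light spot and n
    the mirror-plane normal (from the chessboard pose), the reflected
    vector gives the light direction."""
    n = n / np.linalg.norm(n)                 # ensure unit normal
    return 2.0 * np.dot(n, v) * n - v
```

For example, a ray arriving at 45 degrees onto a mirror with normal (0, 0, 1) leaves at 45 degrees on the other side.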
In multispectral imaging systems, the accuracy of reflectance estimation can be degraded by the nonlinearity of the imaging process, which arises from the non-Gaussian distribution of the data and the nonlinear optoelectronic conversion function of the camera. To deal with this nonlinearity, we propose to extend camera responses with high-order polynomials and to reduce the resulting overfitting problem with partial least-squares (PLS) regression. Experiments show that, in terms of both spectral and colorimetric error metrics, the proposed method performs better than Wiener estimation and ordinary polynomial regression, and comparably to polynomial regression with regularization.
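The pipeline can be sketched as polynomial expansion of the camera responses followed by a regularized linear fit to the reflectance spectra. Ridge regression is used below as a lightweight stand-in for PLS (the abstract itself notes regularized polynomial regression performs similarly); the expansion order and names are illustrative:

```python
import numpy as np

def expand(responses):
    """Second-order polynomial expansion of camera responses (r, g, b)."""
    r, g, b = responses
    return np.array([1, r, g, b, r*g, r*b, g*b, r*r, g*g, b*b])

def fit_reflectance_model(R, S, lam=1e-3):
    """Fit a map from expanded camera responses R (N, 3) to reflectance
    spectra S (N, n_bands).  The ridge penalty lam curbs the
    overfitting introduced by the high-order terms, standing in for the
    PLS regression of the text.  Returns a (10, n_bands) matrix."""
    X = np.array([expand(r) for r in R])
    A = X.T @ X + lam * np.eye(X.shape[1])
    return np.linalg.solve(A, X.T @ S)
```

A spectrum estimate for a new response is then `expand(r) @ M`.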
Two methods for colorimetric characterization of color scanners are proposed based on measures of perceptual color difference. The first method minimizes the total color difference between the actual and predicted color samples. The second, a generalization of the existing cubic-root preprocessing technique, derives the mapping between the p-th root of scanner responses and Commission Internationale de l'Eclairage L*a*b* (CIELAB) values. The experimental results indicate that the color accuracy of the proposed methods, especially the second one, is better than that of traditional CIE XYZ (CIEXYZ)-space-based characterization methods.
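The p-th-root idea can be sketched as follows: raise the scanner responses to 1/p (p = 3 recovers the classic cubic-root technique, echoing CIELAB's own cube-root nonlinearity) and fit an affine map to CIELAB by least squares. The paper additionally tunes p against perceptual color difference; this sketch fixes p and all names here are assumptions:

```python
import numpy as np

def fit_root_characterization(rgb, lab, p=3.0):
    """Fit an affine map from p-th-root scanner responses (N, 3) to
    CIELAB values (N, 3); returns a (4, 3) coefficient matrix."""
    X = np.c_[np.ones(len(rgb)), np.asarray(rgb) ** (1.0 / p)]
    M, *_ = np.linalg.lstsq(X, lab, rcond=None)
    return M

def predict_lab(M, rgb, p=3.0):
    """Predict CIELAB values from scanner responses with a fitted map."""
    X = np.c_[np.ones(len(rgb)), np.asarray(rgb) ** (1.0 / p)]
    return X @ M
```

In practice the fit would be run over a characterization target, and p (or a higher-order map on the rooted responses) chosen to minimize the mean color-difference error.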
The fusion of texture and color aims to simulate color texture images that are perceptually very close to the actual ones. There are three computational models, i.e., the gray-to-color mapping (GCM), color-to-color mapping (CCM), and dichromatic-based (DICH) models. The CCM model is extended to three methods, namely CCM-RGB, CCM-LCH, and CCM-l, when applied in different color spaces. The DICH model comprises two methods, DICH-GC and DICH-CC, depending on whether the original image is grayscale or color. The color fidelity of these six methods is comparatively investigated in terms of the image similarity between simulated and target images.
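To make the CCM-RGB flavour concrete, a minimal color-to-color mapping in RGB space can be sketched as Reinhard-style statistics transfer: each channel of the source texture is shifted and scaled so its mean and standard deviation match the target's. This is purely illustrative; the models compared in the text are more elaborate:

```python
import numpy as np

def ccm_rgb(source, target):
    """Per-channel mean/std transfer in RGB space: remap each channel of
    `source` (H, W, 3) so its first- and second-order statistics match
    those of `target`, then clip to the 8-bit range."""
    out = source.astype(float).copy()
    for ch in range(3):
        s = source[..., ch].astype(float)
        t = target[..., ch].astype(float)
        out[..., ch] = (s - s.mean()) / (s.std() + 1e-8) * t.std() + t.mean()
    return np.clip(out, 0, 255)
```

After the transfer, each channel of the output has (up to clipping) exactly the target's mean and standard deviation, while the source's spatial texture is preserved.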