<p>We propose a coarse-to-fine method for automatically detecting facial landmarks under both rigid and nonrigid facial deformations. For an input 3-D face, we first roughly detect the nose by employing 3-D local shape descriptors and then locate the nose tip using tip-specific features, e.g., the symmetry of the face. Second, we localize the eyes and mouth according to the distribution of human facial features and use a convolutional neural network that minimizes a combined loss to provide candidates for the corners of the eyes and mouth. Finally, to detect the landmarks of the eyes and mouth accurately, we iteratively update the candidates by maximizing the similarity between the candidates and the landmarks, based on the features of each candidate and its neighbors. We evaluate the proposed method on the Bosphorus and CASIA datasets. Experiments show that, compared with state-of-the-art methods, our method detects the corners of the eyes and mouth more accurately and robustly.</p>
Traditionally, temporal phase unwrapping for phase measuring profilometry needs to employ the phase computed from unit-frequency patterned images; however, it has recently been reported that two phases with co-prime frequencies can be absolutely unwrapped with respect to each other. A manually built look-up table for the two known frequencies must then be used to unwrap the phases correctly, and if the two co-prime frequencies change, the look-up table has to be rebuilt by hand. In this paper, a universal phase unwrapping algorithm is proposed to unwrap phases flexibly and automatically. The core of the proposed algorithm is to convert a signal-processing problem into a geometric-analysis one. First, we normalize the two wrapped phases so that they have the same slope. Second, by using the modulo operation, we unify the integer-valued difference of the two normalized phases over each wrapping interval. Third, by analyzing the properties of this uniform difference mathematically, we automatically build a look-up table that records the correct fringe order for every wrapping interval. Even if the frequencies are changed, the look-up table is automatically updated for the new frequencies. Finally, with the order information stored in the look-up table, the wrapped phases can be correctly unwrapped. Both simulations and experimental results verify the correctness of the proposed algorithm.
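A minimal sketch of the co-prime unwrapping idea, not the authors' implementation: the frequencies `f1 = 3`, `f2 = 5`, the sampling-based LUT construction, and the function names are illustrative assumptions. For co-prime frequencies, the rounded difference `f2*phi1 - f1*phi2` (in units of 2π) is an integer that uniquely identifies the fringe order on each wrapping interval, so the table can be rebuilt automatically whenever the frequencies change.

```python
import numpy as np

def build_lut(f1, f2, samples=10000):
    """Automatically build the order look-up table for co-prime f1, f2.
    Maps the integer D = f1*k2 - f2*k1 (k_i = fringe orders) to k1.
    Built by sampling the absolute phase range [0, 2*pi) finely enough
    that every wrapping interval is visited (illustrative approach)."""
    lut = {}
    for phi in np.linspace(0, 2 * np.pi, samples, endpoint=False):
        k1 = int(f1 * phi // (2 * np.pi))
        k2 = int(f2 * phi // (2 * np.pi))
        lut[f1 * k2 - f2 * k1] = k1
    return lut

def unwrap(phi1, phi2, f1, f2, lut):
    """Recover the absolute phase from two wrapped phases phi1, phi2."""
    # The scaled difference is an integer multiple of 2*pi up to noise.
    d = int(round((f2 * phi1 - f1 * phi2) / (2 * np.pi)))
    k1 = lut[d]
    return (phi1 + 2 * np.pi * k1) / f1
```

Because the table is derived purely from the two frequencies, changing them only requires calling `build_lut` again, with no manual rebuilding.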
Compared with grayscale patterns in structured light illumination, line stripes, as a binary pattern, are attractive because they are more conveniently decoded. However, traditional line scanning typically shifts a single line across objects to avoid the spatial ambiguities that complex surface geometry may introduce. We propose a multiple-line strategy for fast and accurate reconstruction of the three-dimensional surfaces of targets. First, we build a mathematical model for multiline scanning; second, we derive the phase computation from the model along the time axis, which resolves the ambiguity issue naturally. We analyze the errors of the proposed method theoretically, showing that it is immune to nonlinearity and robust to uncertainty. Experimental results demonstrate that the proposed method offers advantages in both accuracy and time cost.
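The temporal decoding step can be sketched as follows; this is an illustrative toy, not the paper's model. The simulated pattern (period `T = 16` frames, stripe width `w = 3`) and the helper names are assumptions. As multiple binary stripes shift across the scene, each pixel sees a periodic on/off signal over time, and the phase of its fundamental temporal frequency encodes the pixel's position within one stripe period, so no spatial search is needed.

```python
import numpy as np

def temporal_phase(signal):
    """signal: (T,) intensities at one pixel over T pattern shifts
    (one full period).  Returns the phase of the fundamental temporal
    frequency, in [0, 2*pi)."""
    T = len(signal)
    n = np.arange(T)
    # Correlate with the fundamental complex exponential along time.
    c = np.sum(signal * np.exp(-2j * np.pi * n / T))
    return np.angle(c) % (2 * np.pi)

def sim_pixel(x, T=16, w=3):
    """Illustrative simulator: binary multiline pattern of period T and
    stripe width w shifting by one sample per frame, seen at position x."""
    return np.array([1.0 if (x - n) % T < w else 0.0 for n in range(T)])
```

The recovered phase is linear in the pixel position `x` (it decreases by `2*pi/T` per unit of `x`), which is what makes triangulation possible without spatially disambiguating the stripes.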
Line structured light techniques, especially laser scanning, are preferred for commercial three-dimensional (3-D) shape acquisition. Typically, a captured line stripe is detected spatially within a single image, and the detection may fail if the image contains ambiguities. We present a decoding strategy for line structured light patterns: over all recorded line-patterned images, a phase map is computed by means of Fourier analysis along the time axis for each pixel and then employed for 3-D reconstruction. The phase error is analyzed theoretically. Experimental results demonstrate that, compared with typical approaches based on spatial stripe-peak detection, the proposed method achieves comparable accuracy and, most importantly, successfully handles the ambiguity issue.
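The per-pixel temporal Fourier analysis might look like the following vectorized sketch; the function name and the assumption that the `T` frames span exactly one temporal period are illustrative, not taken from the paper.

```python
import numpy as np

def phase_map(frames):
    """frames: (T, H, W) stack of line-patterned images covering one full
    temporal period.  FFT along the time axis; bin 1 is the fundamental
    temporal frequency, whose angle gives the per-pixel phase."""
    F = np.fft.fft(frames, axis=0)
    return np.angle(F[1]) % (2 * np.pi)
```

Because the phase is extracted independently at each pixel from its own temporal signal, no spatial peak detection is performed, which is why ambiguities within a single image do not arise.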
Measuring surfaces with highly varying reflectivity via structured light illumination requires accurately identifying saturated pixels in the captured images. Conventional methods, however, determine saturation simply from intensities, which is susceptible to camera blurring and random noise. To solve this problem, we present a method that uses the magnitude of a non-principal frequency component to identify saturated pixels. Experimental results demonstrate that 1) higher three-dimensional reconstruction accuracy can be achieved and 2) high-contrast surfaces can be accurately reconstructed.
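A minimal sketch of the underlying idea, under illustrative assumptions (sinusoidal phase-shifted patterns, harmonic bin 2 as the non-principal component, and a scene-dependent threshold): an unsaturated pixel's temporal signal is a pure sinusoid, so its energy sits only at DC and the principal (first) frequency; clipping caused by saturation distorts the sinusoid and leaks energy into higher harmonics, which is detectable even when the clipped peak itself is smeared by blur.

```python
import numpy as np

def saturated_mask(frames, thresh):
    """frames: (N, H, W) phase-shifted sinusoidal pattern images.
    Flags pixels whose non-principal (second-harmonic) temporal
    component exceeds a scene-dependent threshold (assumed here)."""
    F = np.fft.fft(frames, axis=0)
    return np.abs(F[2]) > thresh
```

In contrast, thresholding raw intensities near the sensor maximum would miss clipped pixels whose peaks were smoothed below the limit by camera blur.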
In structured light illumination (SLI), the nonlinear distortion of the optical devices severely degrades the accuracy of three-dimensional reconstruction when only a small number of projected patterns is used. We propose a universal algorithm that calibrates these device nonlinearities so that the patterns can be accurately precompensated. Thus, no postprocessing is needed to correct for the distortions, while the number of patterns can be reduced to a minimum. In theory, the proposed method can be applied to any SLI pattern strategy. Using a three-pattern SLI method, our experimental results show a 25× to 60× reduction in surface variance for a flat target, depending on the surface smoothing applied to remove Gaussian noise.
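The precompensation idea can be sketched for the common special case of a power-law (gamma) nonlinearity; the power-law model, calibration procedure, and function names are illustrative assumptions, not the paper's universal algorithm. The device response is fitted from commanded-versus-measured intensity pairs, and the patterns are warped by the inverse response before projection, so the projected result is linear and no postprocessing is needed.

```python
import numpy as np

def fit_gamma(cmd, measured):
    """Fit a power-law response I = a * cmd**g from calibration samples
    (both arrays normalized to [0, 1]).  Linear fit in log-log space."""
    mask = (cmd > 0) & (measured > 0)
    g, loga = np.polyfit(np.log(cmd[mask]), np.log(measured[mask]), 1)
    return g, np.exp(loga)

def precompensate(pattern, g):
    """Warp pattern intensities by the inverse response so that the
    device output becomes linear in the intended intensity."""
    return np.clip(pattern, 0.0, 1.0) ** (1.0 / g)
```

Since the distortion is removed in the projected patterns themselves, the captured images can be decoded directly even with as few as three patterns.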