Texture fusion is essential for reconstructing photo-realistic three-dimensional texture models. Multi-resolution texture fusion is mainly applied to reconstruct three-dimensional models whose local texture is highly realistic. This paper presents a multi-resolution texture fusion algorithm based on digital image processing. The technique uses a depth camera to obtain range data and a texture camera to obtain high-resolution texture of the local region of interest on the object. For each point of the range image, the corresponding pixel in the image of that view must be found; by calibrating the texture camera and the depth camera, the mapping between the high-resolution texture and the local three-dimensional points is obtained, registering the local high-resolution texture with the low-resolution texture collected by the depth camera. ICP is applied to register the models, and textures of different resolutions are mapped onto the three-dimensional model. A light-source correction is then applied to the textures to remove systematic differences such as lighting changes, computed with a linear regression correction model. Finally, a global correction refines the observable color variations that may still remain near the fusion boundaries; it uses the mesh triangle vertex colors as constraints to drive the texture fusion and remove discontinuities between the different resolutions. The advantage of this technique is that it combines light-source correction and global correction to fuse textures of different resolutions. Experiments with this technique indicate that it significantly reduces the observable discontinuities within overlapping areas caused by resolution differences, lighting changes, non-Lambertian object surfaces, etc.
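The abstract does not give the exact form of the linear regression correction model; a minimal sketch, assuming a per-channel gain/offset fit between corresponding pixel colors in the overlap region (function names and the (N, 3) color-array interface are illustrative assumptions):

```python
import numpy as np

def fit_lighting_correction(high_res, low_res):
    """Fit per-channel linear parameters (gain a, offset b) so that
    a * high_res + b approximates low_res over the overlap region.
    Inputs are (N, 3) arrays of corresponding RGB pixel colors."""
    params = []
    for c in range(3):
        # Least-squares fit of low_res[:, c] = a * high_res[:, c] + b
        A = np.stack([high_res[:, c], np.ones(len(high_res))], axis=1)
        coeff, *_ = np.linalg.lstsq(A, low_res[:, c], rcond=None)
        params.append(coeff)  # (gain, offset) for this channel
    return np.array(params)  # shape (3, 2)

def apply_lighting_correction(texture, params):
    """Apply the fitted per-channel correction to a texture image."""
    out = texture.astype(float).copy()
    for c in range(3):
        a, b = params[c]
        out[..., c] = a * out[..., c] + b
    return np.clip(out, 0, 255)
```

After this per-channel regression, the high-resolution texture's overall brightness and color cast match the low-resolution reference, leaving only residual seam differences for the global correction step.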
Laser scanning is widely used in on-line industrial 3D inspection, cultural heritage conservation and reverse engineering.
However, the most widely used approach in traditional laser scanning is based on projecting a single directional
laser stripe onto an object. Because it is physically difficult to compress the laser stripe narrow enough to remain fine at the
edge of the object, the traditional measurement method is not accurate for edge measurements. This paper proposes an
edge-sensitive 3D measurement system that is fast and accurate, using two directional laser stripes projected by a laser
projector. Scanning metal edge steps and complex surface edges with this system requires only a single scan to perform
3D reconstruction, so this scanning method offers high efficiency, high speed, and high edge precision.
Fringe projection profilometry (FPP) has been one of the most popular non-contact methods for 3D surface measurement
in recent years. In FPP, the quality of the fringe pattern determines the measurement accuracy and measurement range to
a great extent. In this paper, we propose a high-quality fringe projection method using a biaxial MEMS scanning mirror
and a laser diode (LD). The fringe pattern is produced by a scanning laser beam with a very low numerical aperture (NA).
Compared with pixel-array-based fringe pattern generation methods, such as DLP and LCOS, the proposed method can
produce a higher-performance fringe pattern with higher contrast, narrower pitch, and longer depth of field. In this paper, we also
present a comparison between different fringe pattern generation methods.
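For context, fringe projection profilometry conventionally projects a sequence of phase-shifted sinusoidal patterns; a minimal sketch of generating such patterns in software (this is standard FPP background, independent of the MEMS scanning-mirror hardware described here, and the function name and normalized [0, 1] intensity scale are illustrative assumptions):

```python
import numpy as np

def sinusoidal_fringes(width, height, pitch, n_shifts=4):
    """Generate n phase-shifted sinusoidal fringe patterns
    I_k(x) = 0.5 + 0.5 * cos(2*pi*x/pitch + 2*pi*k/n),
    with intensities normalized to [0, 1]."""
    x = np.arange(width)
    patterns = []
    for k in range(n_shifts):
        phase_shift = 2 * np.pi * k / n_shifts
        row = 0.5 + 0.5 * np.cos(2 * np.pi * x / pitch + phase_shift)
        # Vertical fringes: every row of the pattern is identical
        patterns.append(np.tile(row, (height, 1)))
    return np.stack(patterns)  # shape (n_shifts, height, width)
```

The fringe pitch and contrast of these patterns are what the abstract's comparison concerns: a low-NA scanned laser beam can render the sinusoid with a narrower pitch and higher contrast than a discrete pixel array can.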
The Fringe Projection System (FPS) and the Laser Stripe Projection System (LSPS) both have limitations in 3D
measurement. For a complex-shaped object with both shiny and diffusive surfaces, neither system can handle it individually
at low cost. To overcome these difficulties, we propose a method combining these two modes of projection
using a laser projector that can project both fringe patterns and scanning laser stripes. In this method, we obtain two
disparity maps and two quality maps from the FPS and the LSPS, respectively. The two disparity maps are then combined
according to the quality maps, and the surface of the object is reconstructed from the combined disparity map. Real experiments are carried out
to verify the proposed method and to evaluate the system performance. Plain, colored, and mixed metal-plastic
objects are all reconstructed successfully with the proposed method.
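The abstract does not specify the exact combination rule for the two disparity maps; a minimal sketch, assuming per-pixel scalar quality scores used as weights (the function name and NaN-for-invalid convention are illustrative assumptions):

```python
import numpy as np

def fuse_disparities(d_fps, q_fps, d_lsps, q_lsps):
    """Combine two disparity maps by their quality maps: at each pixel,
    take the quality-weighted average of the two disparities. Pixels
    where both quality scores are zero are marked invalid (NaN)."""
    total = q_fps + q_lsps
    fused = np.full_like(d_fps, np.nan, dtype=float)
    valid = total > 0
    fused[valid] = (d_fps[valid] * q_fps[valid] +
                    d_lsps[valid] * q_lsps[valid]) / total[valid]
    return fused
```

This weighting lets the FPS dominate on diffusive regions where fringe contrast is good, and the LSPS dominate on shiny regions where the stripe measurement is more reliable, which matches the motivation stated above.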
In this paper, two different SPAD structures are designed and fabricated in a 0.13 μm CMOS technology. For
structure-1, a guard ring with a lightly implanted p-well is used at the edge of the p<sup>+</sup> region, which prevents breakdown in the periphery
region, while for structure-2 a “virtual” guard ring with a p<sup>-</sup> well under the whole p<sup>+</sup> region is designed.
The first structure exhibits a maximum photon detection probability of 15% and a typical dark count rate of 18 kHz at
room temperature, while the second structure exhibits a maximum photon detection probability of 28% and a dark
count rate of 23 kHz. These results provide guidance for further advanced SPAD designs.