An RGB-D camera such as the Microsoft Kinect can capture 3D depth data and color images simultaneously in real time. Its main shortcoming is that the precision of the depth data is lower than that of other common 3D scanning systems; the higher-resolution color images can be used to compensate for this loss. In computer vision, photometric stereo recovers shape from multiple images illuminated by different light sources, with the details of the shape represented by its local normals. In this paper, a controlled three-light-source illumination system is designed to supplement the Kinect sensor, and normal maps are captured at 10 Hz. An energy spline model is used to fuse the depth map and the normal map, resulting in a high-quality shape. Experiments are presented to verify the method.
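The per-pixel normal recovery behind three-light photometric stereo can be sketched as a small linear solve (this is a generic illustration, not the paper's implementation; the function name and array layout are assumptions):

```python
import numpy as np

# Classic three-light photometric stereo for a Lambertian surface:
# per pixel, intensity = albedo * (L @ n), with L the 3x3 matrix whose
# rows are unit light directions. Solving the 3x3 system recovers the
# albedo-scaled normal; its norm is the albedo, its direction the normal.
def photometric_stereo_normals(intensities, light_dirs):
    """intensities: (H, W, 3), one channel per light; light_dirs: (3, 3), rows are unit vectors."""
    h, w, _ = intensities.shape
    flat = intensities.reshape(-1, 3).T            # (3, H*W)
    g = np.linalg.solve(light_dirs, flat)          # albedo * normal, per pixel
    albedo = np.linalg.norm(g, axis=0)
    normals = np.where(albedo > 1e-8, g / np.maximum(albedo, 1e-8), 0.0)
    return normals.T.reshape(h, w, 3), albedo.reshape(h, w)
```

The fusion step in the paper then combines these normals with the Kinect depth map; only the normal-map side is shown here.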
In the automobile industry, flexible thin-shell parts are used to cover the car body. Such parts can take a different shape in the free state than the design model because of dimensional variation, gravity loads, and residual strains, so special inspection fixtures are generally indispensable for geometric inspection. Recently, some researchers have proposed fixtureless nonrigid inspection methods using intrinsic geometry or a virtual spring-mass system, based on assumptions about the deformation between the free-state shape and the nominal CAD shape. In this paper, we propose a new fixtureless method to inspect flexible parts with a depth camera that is efficient and has low computational complexity. Unlike traditional methods, we gather two point clouds of the manufactured part in two different states, and establish correspondences between the two clouds and between one of them and the CAD model. The manufacturing defects can be derived from these correspondences, and no finite element method (FEM) is required. An experimental evaluation of the proposed method is presented.
Optical devices are widely used to digitize complex objects and obtain their shapes in the form of point clouds. The results carry no semantic meaning about the objects, and a tedious process is needed to segment the scanned data into meaningful parts. A person perceives an object correctly by using prior knowledge, so Bayesian inference is employed toward this goal. A probabilistic And-Or-Graph serves as a unified framework for representation, learning, and recognition over a large number of object categories, and a probabilistic model defined on this And-Or-Graph is learned from a relatively small training set per category. Given a set of 3D scanned data, Bayesian inference constructs the most probable interpretation of the object, and a semantic segmentation is obtained from the part decomposition. Examples are given to illustrate the method.
With the progress in CAD (Computer Aided Design) systems, many mechanical components can be designed efficiently with high precision. However, such systems are unfit for some organic shapes, for example a toy. In this paper, an easy way of dealing with such shapes is presented, combining visual perception with tangible interaction. The method is divided into three phases: two tangible interaction phases and one visual reconstruction phase. In the first tangible phase, a clay model represents the raw shape, and the designer can change the shape intuitively with his hands. The raw shape is then scanned into a digital volume model by a low-cost vision system. In the last tangible phase, a desktop haptic device from SensAble is used to refine the scanned volume model and convert it into a surface model. A physical clay model and a virtual clay model thus handle the main shape and the details respectively, with the vision system bridging the two tangible phases. The vision reconstruction system consists of only a camera and acquires the raw shape by a shape-from-silhouettes method. The whole system is installed on a single desktop, making it convenient for designers. The details of the vision system and a design example are presented in the paper.
3D reconstruction of objects from laser-scanned point clouds is still a laborious task in many applications. Automating the reconstruction process is an ongoing research topic that suffers from the complex structure of the data; the main difficulty is the lack of knowledge about the structure of real-world objects. In this paper, we accumulate such structural knowledge in a probabilistic grammar learned from examples in the same category. The rules of the grammar capture compositional structures at different levels, and a feature-dependent probability function is attached to every rule. The learned grammar can be used to parse new 3D point clouds, organize segmented patches in a hierarchical way, and assign them meaningful labels. The parsed semantics can then guide the reconstruction algorithms automatically. Examples are given to illustrate the method.
Hand-held 3D laser scanners on the market are appealing for their portability and convenience, but they are expensive, and building such a system from cheap devices using the same principles as the commercial systems is impractical. In this paper, a simple hand-held 3D laser scanner is developed from cheap devices based on a volume reconstruction method. Unlike a conventional laser scanner that collects a point cloud of the whole object surface, the proposed method scans only a few key profile curves on the surface. A planar section curve network is then generated from these profile curves to construct a volume model of the object. The design details are presented and illustrated with a complex-shaped object.
Coded structured light is widely used for 3D scanning, with different coding strategies adopted for different goals. In this paper, three coding strategies are compared, and one of them is selected to implement a structured light scanner costing under €100. To reach this price, the projector and the video camera must be as cheap as possible, which leads to problems with light coding: a very cheap projector cannot generate a complex intensity pattern, and even if it could, a very cheap camera could not capture it. Based on Gray code, three strategies are implemented and compared, called phase-shift, line-shift, and bit-shift respectively. The bit-shift Gray code is the contribution of this paper: a simple, stable light pattern that yields dense (mean point distance < 0.4 mm) and accurate (mean error < 0.1 mm) results. The algorithm details and examples are presented in the paper.
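The Gray-code backbone shared by all three strategies can be sketched as follows (a minimal illustration; function names and the MSB-first pattern ordering are assumptions, not the paper's code):

```python
import numpy as np

# Gray-code structured light: each projector column is labeled by the Gray
# code of its index and projected as a sequence of binary stripe patterns;
# adjacent columns differ in exactly one bit, which limits decoding errors.
def gray_encode(n):
    return n ^ (n >> 1)

def gray_decode(g):
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

def stripe_patterns(width, bits):
    """Return (bits, width) binary stripe patterns, most significant bit first."""
    codes = gray_encode(np.arange(width))
    return np.array([(codes >> b) & 1 for b in reversed(range(bits))])
```

Decoding a pixel then amounts to stacking the observed bits back into a Gray code and converting it to a column index.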
Freeform shapes are usually designed by reverse engineering through a 3D scanner, which is too expensive for most people. This paper proposes a new scanning system combining shape from structured light and shape from silhouette, which can be implemented easily at low cost. The two methods are highly complementary: shape from silhouette captures the correct topology of the object and yields a closed envelope, while the hand-held laser line yields precise point clouds with some holes. To gain their complementary advantages, a new data fusion strategy based on a mesh energy functional is proposed to integrate the information from the two methods, in which the points from the laser light attract the closed envelope from the silhouettes. After fusion, the precision of the shape from silhouette is increased, and the topological errors of the shape from structured light are corrected. The design details are introduced, and a toy model that is difficult to scan with other systems is used to test the new method. The test results prove the validity of the method.
Although 3D scanning systems are used more and more broadly in many fields, such as computer animation, computer-aided design, and digital museums, a convenient scanning device is too expensive for most people. On the other hand, imaging devices are becoming cheaper, and a stereo vision system with two video cameras costs little. In this paper, a hand-held laser scanning system is designed based on the stereo vision principle. The two video cameras are fixed together and calibrated in advance. The scanned object, attached with some coded markers, is placed in front of the stereo system, and its position and orientation can be changed freely as the scan requires. During scanning, the operator sweeps a line laser source and projects it onto the object. At the same time, the stereo vision system captures the projected lines and reconstructs their 3D shapes. The coded markers are used to transform coordinates between the scanned points from different views. Two methods are used to obtain more accurate results. One is to interpolate the cross-sections of the laser lines with NURBS curves to obtain accurate center points, which are then approximated by a thin plate spline; the exact laser center line obtained in this way guarantees an accurate correspondence between the two cameras. The other is to incorporate the constraint of the laser swept plane on the reconstructed 3D curves by a PCA (Principal Component Analysis) algorithm, which yields more accurate results. Examples are given to verify the system.
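The laser-swept-plane constraint can be sketched with a PCA plane fit: points reconstructed from one sweep should be coplanar, so the direction of least variance gives the plane normal and projecting onto the plane suppresses off-plane noise (an illustrative sketch only; names are assumptions):

```python
import numpy as np

# PCA plane fit: the best-fit plane passes through the centroid, and its
# normal is the singular direction of least variance of the centered points.
def fit_plane_pca(points):
    """points: (N, 3). Returns (centroid, unit normal)."""
    c = points.mean(axis=0)
    _, _, Vt = np.linalg.svd(points - c, full_matrices=False)
    return c, Vt[2]

# Projecting reconstructed laser points onto the fitted plane enforces the
# physical constraint that a single sweep lies in one plane.
def project_to_plane(points, c, n):
    d = (points - c) @ n
    return points - np.outer(d, n)
```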
Camera calibration is a necessary step in 3D vision measurement. The classic technique uses a volumetric calibration pattern with control points distributed in 3D space, which is expensive to make and difficult to use everywhere. Zhang proposed a calibration method based on a planar pattern, making the calibration process accessible to almost everyone. Nowadays, consumer cameras with zoom lenses are popular and often used for measurement; such a camera can zoom the lens to obtain maximum-resolution images of objects of different sizes at a suitable distance. The size of Zhang's planar pattern is fixed, which is unsuitable for a zoom lens. In this paper, a multiple-sheet planar pattern is used to deal with this situation. The calibration pattern consists of 1-4 sheets of planar patterns distinguished by coded markers, and its total size can be modified by changing the distance between the sheets or the number of sheets to suit different lens settings. Such a composite calibration pattern is easy to use in both laboratory and industrial situations. In the calibration process, one sheet of the planar pattern is first used to obtain an initial calibration according to Zhang's method, together with initial coordinates for all the control points on the sheets. Then a bundle adjustment algorithm incorporating the planarity and distance constraints of the planar patterns is carried out to obtain a set of precise parameters. Examples are presented to show the effectiveness of the method.
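A core step of Zhang-style planar calibration is estimating the homography between the pattern plane and its image, from which the intrinsics are later derived. A minimal DLT sketch (illustrative only; a production version would normalize coordinates and use many views):

```python
import numpy as np

# Direct Linear Transform: each correspondence (x, y) -> (u, v) contributes
# two linear equations in the 9 entries of H; the solution is the right
# singular vector of the stacked system.
def estimate_homography(src, dst):
    """src, dst: (N, 2) point correspondences, N >= 4. Returns 3x3 H with H[2,2] = 1."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_homography(H, pts):
    ph = np.column_stack([pts, np.ones(len(pts))]) @ H.T
    return ph[:, :2] / ph[:, 2:3]
```

The paper's bundle adjustment then refines all such per-view estimates jointly under the planarity and distance constraints.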
The BRDF (Bidirectional Reflectance Distribution Function) is broadly used in many fields, such as physics, heat transfer, remote sensing, and computer graphics. Traditional methods to measure the BRDF are too expensive for most people, and image-based approaches have become a novel direction. Until now, such image-based systems have required at least a video camera and a still camera, and the operations are not easily carried out under everyday conditions. In this paper, a method using only one still camera is proposed, with the help of a light source, a cylindrical support, and a sphere. The material to be measured is painted on the sphere, which is placed on the cylindrical support painted with a material of known BRDF. Around the support, a simple net of control points is distributed. In the measurement process, the light source and the support are fixed, and the operator moves around the sphere to take pictures at different view angles; the rest of the work is finished automatically by a set of programs. The pictures are first processed by a photogrammetric program to recover the geometry of the scene, including the positions, directions, and shapes of the light source, the support, the sphere, and the cameras. The BRDF samples are calculated from the image intensities and the recovered geometric relations, and are approximated by a multivariable spline to obtain a full BRDF description. Three different materials are tested with the method.
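The role of the known-BRDF reference material can be sketched as a radiometric scale calibration (a simplified illustration under a point-light, Lambertian-reference assumption; the paper's model and names may differ):

```python
import numpy as np

# Under a point light, pixel intensity I = S * f_r * cos(theta_i), where S
# folds together source intensity and camera response. A Lambertian reference
# with known albedo rho (f_r = rho / pi) fixes S; afterwards each pixel of
# the target sphere yields one BRDF sample.
def calibrate_scale(ref_intensity, normal, to_light, rho):
    """Solve S from a Lambertian reference: I = S * (rho/pi) * cos(theta_i)."""
    cos_i = float(normal @ to_light)
    return ref_intensity / ((rho / np.pi) * cos_i)

def brdf_sample(intensity, normal, to_light, scale):
    cos_i = max(float(normal @ to_light), 0.0)
    if cos_i <= 0.0:
        return 0.0                       # light below the local horizon
    return intensity / (scale * cos_i)
```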
Plastic is widely used for its low cost and light weight. In the manufacture of plastic products, optical measurement is often adopted for inspection. A plastic part passes through three phases: first, the rough injection mold must be refined until it produces qualified products; then, the technical parameters of the injection machine must be adjusted to produce conforming products efficiently; finally, the finished products are manufactured. In each phase, optical devices are used to obtain point clouds of plastic samples, which are compared with the CAD model, so a registration operation is needed to align the point clouds with the CAD model. In this paper, three different registration evaluation metrics are put forward, one for each manufacturing phase, to meet its special demands. In the mold modification phase, a maximum-overlap metric is used to find the most distorted regions of the mold. In the parameter adjustment phase, a combined weighted overlap metric is used to evaluate how close the plastic samples are to the CAD model. In the production phase, the samples are placed on a fixture, and a simple feature-based registration is used. For each evaluation metric, a suitable algorithm is developed to realize the registration operation. A car interior panel is used to verify the idea, and the test results prove the validity of the method.
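An overlap-style metric can be sketched as follows (a generic illustration, not the paper's exact formulation): count the fraction of scan points within a tolerance of the CAD surface, and report the RMS error over that overlapping subset.

```python
import numpy as np

# Overlap metric: nearest-neighbour distance from each scan point to the CAD
# point cloud (brute force, fine for small clouds; a k-d tree scales better).
def overlap_metric(scan, cad, tol):
    d = np.linalg.norm(scan[:, None, :] - cad[None, :, :], axis=2).min(axis=1)
    inlier = d <= tol
    ratio = float(inlier.mean())
    rms = float(np.sqrt(np.mean(d[inlier] ** 2))) if inlier.any() else float("inf")
    return ratio, rms
```

Per-region weights, as in the combined weighted metric, would multiply the distances before thresholding.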
In industrial product inspection, a CMM (Coordinate Measurement Machine) is indispensable for obtaining highly precise dimensions, yet inspecting a complex shape manually is tedious. For many products, high precision is only needed on some special features, such as cylinders, holes, and planes. In this paper, an optical fringe measurement system is implemented based on Gray code, and a Canon DSLR camera with high resolution is adopted to capture the projection patterns and the coded markers glued on the CMM. The range images from the optical measurement system are automatically aligned with the CMM coordinate system through the coded markers. A greedy feature-fitting algorithm processes the obtained point cloud and extracts the special features, which are used to direct the CMM to measure more precise parameters. In this integrated system, the whole inspection procedure is automated whether or not a CAD model of the product exists. The data from the different sensors are fused by an overlapping-patch algorithm. As a result, the full surface is scanned, and the necessary precision is guaranteed at the special locations. The design principle and workflow of the integrated method are presented, along with a detailed example.
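A greedy feature-fitting loop can be sketched for the plane case (an illustrative sketch under a RANSAC assumption; the paper also handles cylinders and holes, and all names here are hypothetical):

```python
import numpy as np

# RANSAC plane: sample point triples, keep the plane with the most points
# within dist_tol of it.
def ransac_plane(points, dist_tol, iters=200, rng=None):
    rng = rng or np.random.default_rng(0)
    best, best_inliers = None, None
    for _ in range(iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-12:
            continue                      # degenerate (collinear) sample
        n = n / norm
        inliers = np.abs((points - p0) @ n) <= dist_tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best, best_inliers = (p0, n), inliers
    return best, best_inliers

# Greedy extraction: peel off the dominant plane, then repeat on the rest.
def greedy_planes(points, dist_tol=0.02, min_pts=30, max_planes=5):
    planes, rest = [], points
    while len(rest) >= min_pts and len(planes) < max_planes:
        plane, inliers = ransac_plane(rest, dist_tol)
        if plane is None or inliers.sum() < min_pts:
            break
        planes.append(plane)
        rest = rest[~inliers]
    return planes, rest
```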
A simple and affordable 3D scanner is designed in this paper. Three-dimensional digital models play an increasingly important role in many fields, such as computer animation, industrial design, artistic design, and heritage conservation. For many complex shapes, optical measurement systems are indispensable for acquiring 3D information. In computer animation, such optical measurement devices are too expensive to be widely adopted, while precision is not as critical in that situation. In this paper, a new cheap 3D measurement system is implemented based on modified weak structured light, using only a video camera, a light source, and a straight stick rotating on a fixed axis. An ordinary weak structured light configuration requires one or two reference planes, and the shadows on these planes must be tracked during scanning, which destroys the convenience of the method. In the modified system, reference planes are unnecessary, and the size range of the scanned objects is greatly expanded. A new calibration procedure is also devised for the proposed method, and the point cloud is obtained by analyzing the shadow stripes on the object. A two-stage ICP algorithm merges the point clouds from different viewpoints into a full description of the object, and after a series of operations, a NURBS surface model is generated. A complex toy bear is used to verify the method; errors range from 0.7783 mm to 1.4326 mm compared with the ground truth measurement.
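A minimal point-to-point ICP, as used to merge viewpoints, can be sketched like this (the paper uses a two-stage variant; this single-stage version with brute-force matching is only an illustration):

```python
import numpy as np

# ICP: alternate nearest-neighbour matching against the target cloud with a
# best-fit rigid update (Kabsch) until the transform converges.
def icp(src, dst, iters=30):
    R_total, t_total = np.eye(3), np.zeros(3)
    cur = src.copy()
    for _ in range(iters):
        nn = np.linalg.norm(cur[:, None] - dst[None, :], axis=2).argmin(axis=1)
        matched = dst[nn]
        cs, cd = cur.mean(axis=0), matched.mean(axis=0)
        H = (cur - cs).T @ (matched - cd)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T
        t = cd - R @ cs
        cur = cur @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total, cur
```

Like all ICP variants, this needs a reasonable initial pose; the two-stage scheme in the paper addresses exactly that.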
Digital 3D models are now used everywhere, from the traditional fields of industrial design and artistic design to heritage conservation. Although laser scanning is very useful for obtaining dense samples of objects, such instruments are currently expensive and usually need to be connected to a computer with a stable power supply, which prevents their use in fieldwork. In this paper, a new semi-automatic 3D laser scanning method is proposed using two line laser sources. The planes projected from the laser sources are orthogonal; one is fixed relative to the camera, and the other can be rotated about a fixed axis. Before scanning, the system must be calibrated to obtain the parameters of the camera, the position of the fixed laser plane, and the rotation axis. During scanning, the fixed laser plane and the camera form a conventional structured light system, and the 3D positions of the intersection curves of the fixed laser plane with the object can be computed. The other laser plane is rotated manually or mechanically, and its position can be determined from the point where it crosses the fixed laser plane on the object, so the coordinates of the swept points can be obtained. The new system can be used without a computer (the data can be processed later), which makes it suitable for fieldwork. A scanning case study is given at the end.
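The structured-light step, recovering a 3D point from a pixel on the fixed laser plane, can be sketched as a ray-plane intersection (a generic illustration with assumed names; the paper's calibration supplies K and the plane):

```python
import numpy as np

# A pixel (u, v) defines a viewing ray through the camera centre; intersecting
# that ray with the calibrated laser plane (n . X = d) gives the surface point.
def pixel_ray(u, v, K):
    """Unit ray direction in camera coordinates for pixel (u, v), intrinsics K."""
    r = np.linalg.solve(K, np.array([u, v, 1.0]))
    return r / np.linalg.norm(r)

def intersect_ray_plane(origin, direction, n, d):
    denom = n @ direction
    if abs(denom) < 1e-12:
        return None                      # ray parallel to the plane
    s = (d - n @ origin) / denom
    return origin + s * direction
```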
Proc. SPIE. 7513, 2009 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Process Technology
KEYWORDS: 3D modeling, Reverse modeling, 3D image reconstruction, 3D image processing, Image segmentation, Visual process modeling, Finite element methods, 3D scanning, Cameras, Reconstruction algorithms
In the fields of industrial design, artistic design, and heritage conservation, physical objects are usually digitized by reverse engineering through 3D scanning. Structured light and photogrammetry are the two main methods of acquiring 3D information, and both are expensive; even with these expensive instruments, photorealistic 3D models are seldom available. In this paper, a new method for reconstructing photorealistic 3D models using a single camera is proposed. The object is placed on a square plate glued with coded marks, and a sequence of about 20 images is taken. From the coded marks, the images are calibrated, and a snake algorithm segments the object from the background. A rough 3D model is obtained by a shape-from-silhouettes algorithm. The silhouettes are decomposed into combinations of convex curves, which partition the rough 3D model into convex mesh patches. For each patch, the multi-view photo-consistency constraints and smoothness regularizers are expressed as a finite element formulation, which can be solved locally, with information exchanged along the patch boundaries. The rough model is deformed into a fine 3D model through this domain-decomposition finite element method. Textures are assigned to each mesh element, and a photorealistic 3D model is finally obtained. A toy pig is used to verify the algorithm, and the results are encouraging.
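The shape-from-silhouettes step can be sketched as voxel carving, shown here in a toy orthographic form (the paper uses calibrated perspective views; this simplified version only illustrates the carving idea):

```python
import numpy as np

# Voxel carving: start from a full voxel grid and delete every voxel whose
# projection falls outside the silhouette in any view. Here each view is an
# orthographic projection along one coordinate axis.
def carve(grid_n, silhouettes):
    """silhouettes: dict axis -> (grid_n, grid_n) boolean silhouette mask."""
    occ = np.ones((grid_n,) * 3, dtype=bool)
    idx = np.indices((grid_n,) * 3)
    for axis, mask in silhouettes.items():
        keep = [i for i in range(3) if i != axis]
        occ &= mask[idx[keep[0]], idx[keep[1]]]
    return occ
```

The carved volume is always a superset of the true object (its visual hull), which is why the paper refines it afterwards with photo-consistency.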
In the fields of industrial design, artistic design, and heritage conservation, physical objects are usually digitized by reverse engineering through 3D scanning. Laser scanning and photogrammetry are the two main methods in use: laser scanning needs a video camera and a laser source, while photogrammetry needs a high-resolution digital still camera, and in some 3D modeling tasks the two methods must be integrated to obtain satisfactory results. Although much research has been done on combining the results of the two methods, no work has been reported on designing an integrated device at low cost. In this paper, a new 3D scanning system combining laser scanning and photogrammetry using a single consumer digital camera is proposed. Many consumer digital cameras, such as the Canon EOS 5D Mark II, now record still photos at more than 10 megapixels as well as full 1080p HD movies, so an integrated scanning system can be designed around such a camera. The 3D objects are placed on a square plate glued with coded marks, and two straight wooden rulers, also glued with coded marks, can be laid on the plate freely. In the photogrammetry module, the coded marks on the plate define a world coordinate system and serve as a control network to calibrate the camera, and the planes of the two rulers can also be determined; the feature points of the object and a rough volume representation from the silhouettes are obtained in this module. In the laser scanning module, a hand-held line laser scans the object, and the two straight rulers serve as reference planes to determine the position of the laser. The laser scan produces a dense point cloud that can be aligned automatically through the calibrated camera parameters. The final complete digital model is obtained through a new patchwise energy functional method by fusing the feature points, the rough volume, and the dense point cloud. The design details are introduced, and a toy cock is used to test the new method; the test results prove the validity of the new method.