Three-dimensional measurement of large-scale texture-less bending plates
Suqin Bai, Jinlong Shi, Qiang Qian, Linbin Pang, Xin Shu
Abstract
A three-dimensional (3-D) measurement method for large-scale bending plates is presented. The proposed method, which combines the advantages of laser and vision measurement, makes use of a 3-D scanner, a texture projector, and a laser total station. The 3-D scanner is used to measure multiple partial sections of a large-scale bending plate, the texture projector is used to project a textured pattern onto the bending plate to perform alignment, and the total station is used to correct the aligned result to obtain more accurate 3-D data. The performance and effectiveness of the method are evaluated by experiments.

1.

Introduction

Three-dimensional (3-D) shape measurement of objects is very important in numerous application fields, including industrial manufacturing, medical science, computer science, civil engineering, game production, and film-making. In most cases, the 3-D data of an object are desirable because they are necessary for quantitative analysis. However, measuring the 3-D shape of an object, especially a large-scale object, is a complicated process in which a variety of problems are simultaneously present.

In recent years, many 3-D measurement methods have been proposed, but most of them are applied in special fields, have limitations, and are designed to measure small objects.

At present, 3-D measurement techniques can be mainly divided into two categories: passive and active methods. Passive methods do not interfere with the measured object and only use a sensor to measure the radiance reflected or emitted by the object’s surface to infer its 3-D structure. So far, many passive methods have been presented, and some of them can acquire the 3-D data of large scenes1–4 using a feature-based alignment strategy. Passive methods require that the measured object have a rich texture. However, the measured objects in industrial manufacturing are usually texture-less. Therefore, passive methods are difficult to use in industrial manufacturing.

Active methods actively interfere with the measured object, either mechanically or radiometrically, and current active methods usually measure the 3-D shape by projecting special light to the measured object. Compared with passive methods, active methods are usually used in industrial manufacturing because they can acquire dense data in a rapid and reliable manner. Active methods can be roughly categorized into two types: time-of-flight (TOF) laser measurement and structured light measurement.

TOF laser measurement methods acquire the 3-D shape of an object using TOF based on the known speed of light. Although traditional TOF laser measurement techniques can obtain highly accurate 3-D data,5 it is usually very time-consuming to perform dense scanning of a large-scale object because a point-by-point scanning strategy is used. Recently, TOF cameras, which adopt a whole-field technique, have been widely used. A TOF camera is a class of scannerless LIDAR in which the entire scene is captured with each laser or light pulse by measuring the TOF of a light signal between the camera and the object for each point, as opposed to point-by-point scanning with a laser beam.6,7 Methods based on a TOF camera can obtain dense 3-D data in real time. However, the 3-D data captured with TOF cameras have very low quality because the image resolution is rather limited and the level of random noise contained in the depth maps is very high.8,9 Thus, it is very difficult to accurately measure the 3-D shape of a large-scale object with a TOF camera in industry.

Structured light measurement is a technique which projects coded light patterns onto the measured object with a projector and simultaneously captures the projected scene with a camera. As clearly reported in some review papers,10–13 the structured light technique has been extensively studied for several decades. So far, many structured light techniques have been proposed, such as coded grid projection methods,14 color-coded dot projection methods,15 speckle projection methods,16–19 the combination of speckle projection and digital image correlation,20,21 and coded fringe projection methods.10 The density of color-coded dots and grids cannot be increased as required, because similar local structures of dense dots and grids would lead to ambiguous image correspondences; thus, it is difficult to obtain dense measurements with these methods. Although random speckles are easy to generate via diffraction, it is difficult to accurately locate and match the speckles in images, so some noise will be produced. The combination of speckle projection and digital image correlation is a good technique for obtaining dense measurements; however, it is usually slow since digital image correlation is performed over the entire image. Fringe projection methods can rapidly acquire dense measurement results by switching the projected structured light fringes. Therefore, we adopt a fringe technique in our method. These techniques have been widely applied in many industrial fields.22–28 However, only a limited number of investigations deal with large-scale measurements, and most of the existing methods fail when applied to a large-scale object. In this paper, our goal is to deal with the problem of 3-D measurement for large-scale texture-less bending plates used in industrial manufacturing.

Currently, there are many large-scale objects to be measured in manufacturing. For example, both aircraft skins and ship shells need to be measured during automatic production, and these applications can be considered as the measurement of large-scale bending plates. In these applications, the measurement speed is an important factor which seriously affects the production rate. Furthermore, the measurement density is a vital issue which affects the manufacturing accuracy. Thus, these two factors must be well handled for large-scale measurements in practice. However, the size of objects to be measured in the aircraft and shipbuilding industries is usually very large, which makes the measurement a difficult task.

Recently, several studies of structured light techniques have been conducted regarding the 3-D measurement of large-scale objects. The basic idea of these methods is to measure multiple partial sections of a large-scale object and then align the different measured sections together. According to the alignment strategy, the methods can be categorized into four types: surface-geometry-based methods,29,30 tracker-based methods,31,32 marker-based methods,33 and texture-projection methods.34 The methods in Refs. 29 and 30 use a Kinect sensor, which is a depth camera based on the structured light technique, to capture partial sections of a large scene, and then align the captured sections using a coarse-to-fine iterative closest point (ICP) algorithm. These methods require that the measured scene have complex surface geometries for the ICP algorithm to work. Barone et al.31 present a methodology which uses a stereo tracker and a 3-D scanner. The 3-D scanner is used to measure multiple partial sections of a large object, and the stereo tracker is used to remotely track the scanner within a working volume. The tracker uses stereo images to detect the 3-D coordinates of retro-reflective infrared markers rigidly connected to the 3-D scanner, and the alignment of the different partial views is then performed using the tracked 3-D coordinates of the markers. However, since the infrared markers cannot be seen clearly by the tracker when the distance between the tracker and scanner becomes large, the measurement range is limited. The methodology of Ref. 32, which is based on the integration of a robotic system with a 3-D scanner, provides accurate 3-D full-field measurements of a hull surface. The position of the robotic system around the hull is determined by a laser total station, thus allowing automatic multiview data registration into a common reference frame. Although this method can measure large objects, it is inconvenient to use because some markers need to be placed on the 3-D scanner, and manually tracking these markers with the laser total station is time-consuming. Barone et al.33 propose a marker-based method in which fiducial markers are placed on the measured object and the 3-D coordinates of the fiducial markers are measured to be used as references for aligning the point clouds obtained by the 3-D scanner. Although this method can accurately align different sections, it is also inconvenient because special markers are required. Hébert34 presents a texture-projection method in which a set of fixed points is projected onto the visible surface by a projector in order to obtain the pose of the 3-D scanner in a global coordinate system for partial-view alignment.

In this paper, our aim is to measure a large-scale texture-less bending plate. In the proposed method, we also use a 3-D scanner based on a structured light technique to measure multiple partial sections of a large-scale bending plate. The bending plate to be measured is texture-less, and there is usually not enough reliable texture for alignment via feature-based methods. Moreover, the surface geometries of the bending plate are usually simple, and the curvature of every point remains almost unchanged. Thus, we can align the different measured sections neither with an ICP algorithm, which relies on surface geometries, nor with a feature-based method, which requires rich texture on the measured object. In addition, we also cannot use a marker-based method, in which special markers are required, because markers cannot be placed on the measured object due to the high temperature in some industrial processes, such as plate bending by line heating in shipbuilding. Therefore, to align the different measured sections, we present a texture-projection method which makes use of a texture projector to project a textured pattern onto the texture-less bending plate, and then adopts a feature-based method to align the measured sections.

In addition, for large-scale measurement, compared with point-by-point TOF laser methods, structured light methods have the advantages of high density and speed, but have the disadvantage of low accuracy due to image distortions, camera calibration errors, and partial view alignments. Conversely, compared with structured light methods, point-by-point TOF laser methods have the advantage of high accuracy, but have the disadvantage of low speed. Therefore, this paper exploits the integration of a 3-D scanner with a point-by-point TOF laser device. Here, we use a laser total station as a point-by-point TOF laser device, and the laser total station is used to measure some markers placed around the bending plate to improve the measurement accuracy by performing error compensation.

According to the above analysis, the proposed method has the following advantages: First, compared with surface-geometry-based methods, our method can measure large-scale objects with simple surface geometries. Second, compared with feature-based methods, our method can measure texture-less objects. Third, compared with marker-based methods, our method is more convenient because we do not need to manually place a lot of markers on the large object for alignment. Fourth, we present an error compensation method by combining laser measurement with a structured light measurement technique.

2.

Proposed Method

3-D scanners based on structured light techniques are suitable for dense and rapid measurement of small objects. To measure large-scale bending plates using a 3-D scanner, we present a method based on texture projection, which is illustrated in Fig. 1. The measurement system includes three components: a texture projector, a laser total station, and a 3-D scanner. The system workflow is as follows: First, we place some markers around the edge of the plate to be measured; these markers are later measured both by the laser total station and by the 3-D scanner. Second, the laser total station is used to measure the markers. Third, the texture projector is fixed and used to project a rich texture onto the plate. Fourth, the 3-D scanner is moved to measure multiple partial sections of the plate; meanwhile, both the projected texture and the markers are captured by the 3-D scanner’s cameras. Fifth, feature extraction and matching are performed on the captured texture images to compute the 3-D scanner’s pose and register the measured partial sections into a world reference frame. Finally, to improve the accuracy of the measured result, the corresponding 3-D coordinates of the markers measured by the laser total station and by the 3-D scanner are used to perform error compensation. Next, we elaborate on the key problems of the proposed method.

Fig. 1

Method illustration.


2.1.

Single-View Scanning

We use a structured light method similar to that described in Ref. 31, where two cameras form a stereo vision system and the projector is uncalibrated and not directly involved in the measurement process. Our system adopts the same architecture. The resolution that can be obtained by our method is 2200×1500, where 2200 and 1500 are the numbers of vertical and horizontal lines switched by the projector, respectively. The accuracy is about 0.3 to 0.5 mm/m, and the field of view is about 65 deg, which means that if the working distance is about 4 m, one measurement can cover up to 3 m × 4 m.

Since the accuracy of structured light systems is heavily influenced by the image distortions generated by a camera lens, we first calibrate the parameters of radial and tangential distortions,35,36 which are then used to correct the captured images using Brown’s distortion model.37
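For concreteness, the following is a minimal sketch of this undistortion step, assuming OpenCV and a calibration that has already produced the camera matrix and the coefficients of Brown’s model; the numerical values and file names are illustrative placeholders, not the parameters of our system.

```python
import cv2
import numpy as np

# Illustrative intrinsics and distortion coefficients (k1, k2, p1, p2, k3);
# in practice these come from the calibration of Refs. 35 and 36.
K = np.array([[7000.0,    0.0, 2256.0],
              [   0.0, 7000.0, 1500.0],
              [   0.0,    0.0,    1.0]])
dist = np.array([-0.12, 0.05, 0.001, -0.0005, 0.0])

img = cv2.imread("left_view.png")
undistorted = cv2.undistort(img, K, dist)   # applies Brown's distortion model
cv2.imwrite("left_view_undistorted.png", undistorted)
```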

2.2.

Partial Sections Alignment

Measured partial sections of a large-scale bending plate, corresponding to different placements of the 3-D scanner in front of the bending plate, must be registered into a common reference frame to form a complete 3-D mesh.

According to the analysis in Sec. 1, we propose an alignment method which uses one texture projector to project a rich texture onto the measured bending plate. The alignment process can be mainly summarized as three steps:

  • Step 1: A rich texture is projected onto the plate using the texture projector, and the stereo cameras of the 3-D scanner are used to capture images which are called texture images. While the texture is being projected, the projector of the 3-D scanner is turned off (in standby mode) to prevent mutual interference between the scanner projector and the texture projector. Figure 2 shows the rich texture used in our method; we adopt random Chinese characters as the projected texture. The aim of this step is to obtain texture images with a rich texture for alignment.

  • Step 2: The texture projector is turned off (in standby mode), and then the scanner is used to measure the partial section.

    After performing the above two steps, the scanner is moved forward, steps 1 and 2 are repeated, and then step 3 is carried out.

  • Step 3: Feature points in the texture images are extracted and matched to register the different point clouds acquired by the 3-D scanner into a reference frame, which is the key problem.

Fig. 2

Rich texture projected onto the measured object.


The above process will repeat until the measurement of the whole bending plate is finished. Next, we will elaborate the key problems in the alignment.

2.2.1.

Extracting and matching feature points in texture images

We need to extract some reliable feature points in the texture images of two adjacent measurements in order to align the measured partial sections. As Fig. 3 shows, there are two measurements at adjacent locations, denoted by $G_1$ at time $t_1$ and $G_2$ at time $t_2$, respectively. In our experiments, we find that the movement between $G_1$ and $G_2$ cannot be very large; there must be at least a 1 m overlapping area between $G_1$ and $G_2$ to extract enough feature points for alignment. We extract and match feature points in four steps: First, feature points are extracted in the texture images of $G_1$ and $G_2$ (namely $I_{p1}$, $I_{p2}$, $I_{c1}$, and $I_{c2}$, where $p1$ and $p2$ denote the left and right images in $G_1$, and $c1$ and $c2$ denote the left and right images in $G_2$) using the scale-invariant feature transform (SIFT) algorithm.38 Second, point matching is performed between $I_{p1}$ and $I_{p2}$, between $I_{c1}$ and $I_{c2}$, and between $I_{p2}$ and $I_{c1}$ using the SIFT algorithm. Third, according to the feature point correspondences between $I_{p1}$ and $I_{p2}$ and those between $I_{c1}$ and $I_{c2}$, we reconstruct the 3-D points of the texture captured at times $t_1$ and $t_2$. Finally, based on the matching between $I_{p2}$ and $I_{c1}$, we obtain some matched 3-D points between $G_1$ and $G_2$, which are used to compute the transformation parameters of the alignment.
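To make these steps concrete, here is a minimal sketch of the feature extraction, matching, and triangulation using OpenCV; the projection matrices P_left and P_right are assumed to come from the stereo calibration, and the file names are placeholders rather than part of our software.

```python
import cv2
import numpy as np

def match_sift(img_a, img_b, ratio=0.75):
    """Extract SIFT keypoints in two texture images and return matched pixel coordinates."""
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(img_a, None)
    kp_b, des_b = sift.detectAndCompute(img_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    pts_a, pts_b = [], []
    for m, n in matcher.knnMatch(des_a, des_b, k=2):
        if m.distance < ratio * n.distance:          # Lowe's ratio test
            pts_a.append(kp_a[m.queryIdx].pt)
            pts_b.append(kp_b[m.trainIdx].pt)
    return np.float32(pts_a), np.float32(pts_b)

def triangulate(pts_left, pts_right, P_left, P_right):
    """Reconstruct 3-D texture points from matched left/right image points."""
    X = cv2.triangulatePoints(P_left, P_right, pts_left.T, pts_right.T)
    return (X[:3] / X[3]).T                           # homogeneous -> Euclidean

# Example for measurement G1: match its left/right texture images and reconstruct.
I_p1 = cv2.imread("G1_left_texture.png", cv2.IMREAD_GRAYSCALE)
I_p2 = cv2.imread("G1_right_texture.png", cv2.IMREAD_GRAYSCALE)
pts_l, pts_r = match_sift(I_p1, I_p2)
# points_G1 = triangulate(pts_l, pts_r, P_left, P_right)  # P matrices from calibration
```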

Fig. 3

Extracting and matching feature points for alignment. Some common feature points between adjacent measurements, which are used to compute the alignment parameters, are matched. (a) Extracting feature points, (b) matching feature points, (c) reconstructing 3-D points, and (d) matching 3-D points.


Since extracting and matching feature points is a time-consuming task, we make use of a graphics processing unit (GPU) implementation of SIFT38 to accelerate the algorithm.

2.2.2.

Point cloud alignment

According to Sec. 2.2.1, we can acquire some matched 3-D points from the texture images of adjacent measurements $G_1$ and $G_2$, and those matched points can be used to compute the parameters for aligning the point clouds measured by the scanner. Suppose there are K matched 3-D points, denoted by $p_i^1=(x_i^{p1},y_i^{p1},z_i^{p1})$ in $G_1$ and $p_i^2=(x_i^{p2},y_i^{p2},z_i^{p2})$ in $G_2$, where $i\in\{1,\dots,K\}$. Point cloud alignment is then equivalent to finding the relation between $p_i^1$ and $p_i^2$. Here, this relation can be represented by a rotation matrix R and a translation vector T, which can be expressed, respectively, as

Eq. (1)

$$R=\begin{pmatrix}R_{11} & R_{12} & R_{13}\\ R_{21} & R_{22} & R_{23}\\ R_{31} & R_{32} & R_{33}\end{pmatrix},$$

Eq. (2)

$$T=(T_1,T_2,T_3)^T.$$

If all the matched pairs $(p_i^1,p_i^2)$, where $i\in\{1,\dots,K\}$, were correct and accurate, R and T could be calculated as follows, considering that the transformation is rigid. First, T is obtained by Eq. (3):

Eq. (3)

$$T=\frac{1}{K}\sum_{j=1}^{K}\left(p_j^2-p_j^1\right).$$
Then we estimate the rotation R by Eq. (4):

Eq. (4)

$$R\,p_j^1=p_j^2-T,\quad j\in\{1,\dots,K\}.$$
Here, the rotation matrix R may be rewritten as a vector $\vec{R}$:

Eq. (5)

$$\vec{R}=(R_{11},R_{12},R_{13},R_{21},R_{22},R_{23},R_{31},R_{32},R_{33})^T.$$

Next, we represent the equations of Eq. (4) as Eq. (6):

Eq. (6)

$$A_j\vec{R}=p_j^2-T,\quad j\in\{1,\dots,K\},$$
where $A_j$ is defined by Eq. (7),

Eq. (7)

$$A_j=\begin{pmatrix}x_j^{p1} & y_j^{p1} & z_j^{p1} & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & x_j^{p1} & y_j^{p1} & z_j^{p1} & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & x_j^{p1} & y_j^{p1} & z_j^{p1}\end{pmatrix}.$$

Next, we let

Eq. (8)

$$A=\begin{pmatrix}A_1\\ A_2\\ \vdots\\ A_K\end{pmatrix},$$

Eq. (9)

$$P_T=\begin{pmatrix}p_1^2-T\\ p_2^2-T\\ \vdots\\ p_K^2-T\end{pmatrix},$$
and thus derive

Eq. (10)

$$A\vec{R}=P_T.$$

Then we solve for the rotation vector $\vec{R}$ using the linear least squares method:

Eq. (11)

$$\vec{R}=(A^TA)^{-1}A^TP_T.$$
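As a concrete illustration, the following is a minimal NumPy sketch of this closed-form estimate of T and R from Eqs. (3)–(11); p1 and p2 are assumed to be K × 3 arrays holding the matched 3-D points of $G_1$ and $G_2$.

```python
import numpy as np

def estimate_T_R(p1, p2):
    """Estimate T by Eq. (3) and R by the linear least squares of Eq. (11)."""
    T = np.mean(p2 - p1, axis=0)                       # Eq. (3)
    K = p1.shape[0]
    A = np.zeros((3 * K, 9))
    for j in range(K):                                 # Eq. (7): one 3x9 block per pair
        A[3 * j,     0:3] = p1[j]
        A[3 * j + 1, 3:6] = p1[j]
        A[3 * j + 2, 6:9] = p1[j]
    P_T = (p2 - T).reshape(-1)                         # Eq. (9), stacked right-hand side
    R_vec, *_ = np.linalg.lstsq(A, P_T, rcond=None)    # Eq. (11)
    return T, R_vec.reshape(3, 3)                      # Eq. (5) reshaped back to a matrix
```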

T and R could be calculated by the above method. However, there may exist some erroneous pairs, also called outliers, which will influence the estimation of R and T. Therefore, we compute R and T with an algorithm based on random sample consensus (RANSAC), which ensures robustness by removing these outliers; the concrete process is shown in Algorithm 1.

Algorithm 1

Computing the translation vector T and rotation matrix R based on RANSAC.

Input: matched point pairs $(p_i^1,p_i^2)$, $i\in\{1,\dots,K\}$
Output: the translation vector T and rotation matrix R
1. Select three matched point pairs randomly, and calculate T and R using Eqs. (3) and (11), respectively.
2. For the other K-3 matched point pairs, compute the transformed point $\hat{p}_i^1=R\,p_i^1+T$ according to T and R.
3. Compute the Euclidean distance $d_i^{12}$ between $\hat{p}_i^1$ and $p_i^2$.
4. If $d_i^{12}\le\delta$, where $\delta$ is a threshold, the pair is an inlier; otherwise it is an outlier and should be removed.
5. Calculate and record the number of inliers obtained with this T and R.
6. Go to step 1 and repeat steps 1 to 5 M times, generating M candidate sets.
7. Select the set with the most inliers as the best match, and obtain the new matched point pairs $(p_j^1,p_j^2)$ in which the outliers are removed, where $j\in\{1,\dots,N\}$ and N is the number of inliers.
8. Recompute R and T from the new matched pairs $(p_j^1,p_j^2)$ using Eqs. (3) and (11), respectively.
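The following is a minimal sketch of Algorithm 1, assuming NumPy and the estimate_T_R function from the previous sketch; the threshold delta and the iteration count M are user-chosen parameters, not values given in the paper.

```python
import numpy as np

def ransac_T_R(p1, p2, delta=1.0, M=500, rng=None):
    """RANSAC estimation of T and R from matched 3-D point pairs (Algorithm 1)."""
    rng = rng or np.random.default_rng()
    K = p1.shape[0]
    best_inliers = np.zeros(K, dtype=bool)
    for _ in range(M):
        # Step 1: estimate T, R from three randomly selected pairs.
        idx = rng.choice(K, size=3, replace=False)
        T, R = estimate_T_R(p1[idx], p2[idx])          # from the previous sketch
        # Steps 2-5: transform all p1 and count pairs within the threshold delta.
        d = np.linalg.norm(p1 @ R.T + T - p2, axis=1)
        inliers = d <= delta
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Steps 7-8: re-estimate T and R from the inliers of the best sample.
    return estimate_T_R(p1[best_inliers], p2[best_inliers])
```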

2.3.

Error Compensation

Although we can acquire the 3-D shape of a large object with the presented method, it is hard to guarantee its accuracy due to several error sources. Thus, in order to improve the accuracy, we use a laser total station to correct the result measured by the 3-D scanner via an error compensation method. The concrete procedure of error compensation is composed of the following steps: First, some markers are placed around the edge of the bending plate. The markers are sparsely distributed near the edge; they are not used for alignment and, in fact, can be placed outside the measured object. Second, a laser total station is used to measure the 3-D locations of the markers in the laser coordinate system. Third, the 3-D locations of the markers in the scanner coordinate system are measured by the stereo cameras of the 3-D scanner; in practice, this step is performed while the scanner is being moved. Finally, an error field is created from the markers’ locations measured by the laser total station and by the 3-D scanner, and the aligned 3-D mesh measured by the 3-D scanner is corrected using this error field.

Since the first two steps are straightforward, they are not discussed further in this paper. The third step is elaborated in Sec. 2.3.1, and the final step is described in Sec. 2.3.2.

2.3.1.

Locating markers using cameras

Here, a rectangular marker specially designed for the laser total station is shown in Fig. 4(a).

Fig. 4

Locating markers in image $I_O$. (a) is an image patch with a marker, (b) is the binary image of (a), (c) is the connected region center of the binary image, and (d) is the accurate cross point.


As mentioned before, after finishing the 3-D measurement of a partial section, the projector of the scanner is turned off and the stereo cameras of the scanner are used to capture the markers around the measured plate without projecting structured light or the special texture; the captured image is denoted by $I_O$. Thus, there remains a key problem during the correction process, namely how to accurately detect and locate the markers in image $I_O$. Because the measured scene may be complex, it is difficult to obtain the accurate locations of the markers in images. We, therefore, present a coarse-to-fine method for marker center extraction, which can be decomposed into two crucial subproblems: detecting the approximate initial locations of the markers and then locating their accurate positions starting from the initial values. On one hand, with the recent rapid development of machine learning techniques, objects can be detected in images.39 Thus, machine learning methods can be adopted to solve the subproblem of marker detection. On the other hand, techniques for corner and blob feature detection, which can even find the subpixel location of a feature point, have been widely applied in the field of computer vision. Therefore, we can use corner and blob feature localization techniques to obtain accurate positions based on the approximate initial locations of the markers.

We present a machine learning method to locate the markers, and the steps are as follows: First, we collect some positive sample images which include markers and negative sample images which do not. Second, histogram of oriented gradients features40 are extracted for these samples. Finally, a support vector machine (SVM)41 classifier is trained and used as a detector to locate the markers in an image.
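A minimal sketch of this training stage is given below, assuming OpenCV; positive_patches and negative_patches are assumed to be lists of pre-cropped 64 × 64 grayscale patches, and the HOG window and SVM parameters are illustrative rather than the values used in our system.

```python
import cv2
import numpy as np

# HOG over a 64x64 window: 16x16 blocks, 8x8 stride, 8x8 cells, 9 bins (illustrative).
hog = cv2.HOGDescriptor((64, 64), (16, 16), (8, 8), (8, 8), 9)

def hog_features(patches):
    return np.array([hog.compute(p).ravel() for p in patches], dtype=np.float32)

# positive_patches / negative_patches: assumed lists of 64x64 uint8 grayscale patches.
X = np.vstack([hog_features(positive_patches), hog_features(negative_patches)])
y = np.hstack([np.ones(len(positive_patches)),
               -np.ones(len(negative_patches))]).astype(np.int32)

svm = cv2.ml.SVM_create()
svm.setType(cv2.ml.SVM_C_SVC)
svm.setKernel(cv2.ml.SVM_LINEAR)
svm.train(X, cv2.ml.ROW_SAMPLE, y)

# At measurement time, candidate windows of I_O are classified the same way, e.g.:
# label = svm.predict(hog_features([window]))[1]
```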

In our experiment, we collected about 200 positive samples captured by the cameras from different angles and 1000 negative samples. The marker is a silver-gray square label containing two cross lines, where the cross point is the marker center. Both the laser total station and the 3-D scanner must accurately locate this cross point to perform error compensation. Because the measured scene is complex, we collected more negative samples, covering as many circumstances as possible. All of the positive and negative samples are fed to the SVM to train a detector. After the SVM detector is trained, we use it to detect the markers in image $I_O$ using the GPU.

However, the SVM detector can only detect the approximate locations of the markers. Figure 4(a) shows an image patch containing a marker detected by the SVM detector; this image patch is denoted as $I_m$. Next, we need to obtain the accurate marker center (namely the cross point) to ensure the accuracy of the correction. Here, we propose a novel method to solve this problem. First, adaptive binarization is applied to $I_m$ [Fig. 4(a)] to generate a binary image patch [Fig. 4(b)]. Second, we calculate the center of the connected region of the binary image patch [Fig. 4(c)]. This center of the connected region, denoted by $L_i=(x_{li},y_{li})^T$, is the initial estimate of the marker center. However, this estimate may be inaccurate, so the accurate cross point of the marker should be extracted based on the initial value $L_i$. The Harris corner algorithm42 and its related algorithm43 would be good methods to detect corners if the cameras could capture the corners clearly. However, the cross in the marker captured by the cameras is usually indistinct due to the small size of the marker. Therefore, corner detection algorithms are not good methods by which to locate the marker center.

From Fig. 4(a), we can observe that the cross point is more like a blob than a corner because the cross lines are very unsharp. Thus, it is better to adopt a blob feature detection method. According to the above analysis, we use the difference of Gaussians44 method to detect the accurate cross point around the initial estimate $L_i$ [Fig. 4(d)].
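A minimal sketch of the whole coarse-to-fine refinement is given below, assuming OpenCV; patch is the grayscale patch returned by the SVM detector, and the block size, Gaussian sigmas, and search radius are illustrative parameters.

```python
import cv2
import numpy as np

def marker_center(patch, radius=10):
    """Coarse-to-fine cross-point extraction in a detected marker patch."""
    # Coarse stage: adaptive binarization and centroid of the largest
    # connected region [Figs. 4(b) and 4(c)].
    binary = cv2.adaptiveThreshold(patch, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                   cv2.THRESH_BINARY_INV, 31, 5)
    _, _, stats, centroids = cv2.connectedComponentsWithStats(binary)
    largest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])   # label 0 is the background
    cx, cy = centroids[largest]

    # Fine stage: difference of Gaussians; the strongest blob response near the
    # coarse center is taken as the cross point [Fig. 4(d)].
    f = patch.astype(np.float32)
    dog = np.abs(cv2.GaussianBlur(f, (0, 0), 1.5) - cv2.GaussianBlur(f, (0, 0), 3.0))
    y0, y1 = max(0, int(cy) - radius), min(patch.shape[0], int(cy) + radius)
    x0, x1 = max(0, int(cx) - radius), min(patch.shape[1], int(cx) + radius)
    dy, dx = np.unravel_index(np.argmax(dog[y0:y1, x0:x1]), dog[y0:y1, x0:x1].shape)
    return x0 + dx, y0 + dy
```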

2.3.2.

Correction using an error field

In order to correct the measured result by combining the laser total station and the 3-D scanner, an error field is constructed from the differences between the markers’ locations as measured by the scanner and by the laser total station. We elaborate the compensation method as follows.

We let $p_i^V$ [Fig. 5(b)] and $p_i^O$ [Fig. 5(a)], where $i\in\{1,\dots,M\}$, denote the markers’ 3-D coordinates measured by the 3-D scanner and the total station, respectively. To refine the result measured by the scanner using the laser total station, $p_i^V$ and $p_i^O$ must be placed in the same coordinate system, which requires the relation between the two coordinate systems. Suppose $R_{vo}$ and $T_{vo}$ are the rotation matrix and translation vector from the scanner coordinate system to the total station coordinate system; we can estimate $R_{vo}$ and $T_{vo}$ from the correspondences between $p_i^V$ and $p_i^O$. If M is greater than 3, an over-constrained system in $R_{vo}$ and $T_{vo}$ can be constructed, as shown in Eq. (12), and we obtain $R_{vo}$ and $T_{vo}$ by solving this over-constrained system using the least squares method:

Eq. (12)

$$p_i^V R_{vo}+T_{vo}=p_i^O,\quad i\in\{1,\dots,M\}.$$

Fig. 5

Correction using an error field; for details see text. (a) $p_i^O$, (b) $p_i^V$, (c) $p_i^O$ and $p_i^{VO}$, (d) triangulation of $p_i^O$, (e) $E_i^{VO}$, (f) construct the error field by triangulating $p_i^O$ and $p_i^{VO}$, (g) P is a point on $Q_a$ to be refined, (h) construct perpendicular lines from P, (i) acquire the value used to correct P, and (j) generate the refined point cloud using the error field.


After $R_{vo}$ and $T_{vo}$ are estimated from the correspondences between $p_i^V$ and $p_i^O$, $p_i^V$ can be converted into the coordinate system of the total station. We let $p_i^{VO}$ denote the coordinates of point $p_i^V$ converted from the 3-D scanner to the laser total station by $R_{vo}$ and $T_{vo}$. In theory, $p_i^{VO}$ [the circles in Fig. 5(c)] and $p_i^O$ [the dots in Fig. 5(c)] should be the same points in the coordinate system of the laser total station. However, because $p_i^V$ is inaccurately measured due to the errors of camera calibration and point cloud alignment, $R_{vo}$ and $T_{vo}$ cannot be accurately estimated, which results in errors between $p_i^{VO}$ and $p_i^O$ [see Fig. 5(c)].

To correct the inaccurate result measured by the scanner, we first compute the errors $E_i^{VO}$ [see Fig. 5(e)] between $p_i^{VO}$ and $p_i^O$, and then construct an error field [see Fig. 5(f)]. If an appropriate function could be fitted to $E_i^{VO}$ to describe the error field, the correction problem would be easy to solve. However, because such a fitting function cannot be found, error compensation becomes a difficult issue.

We, therefore, present a method to solve the problem of error compensation. In our method, the markers are evenly distributed around the measured bending plate (these markers are used for error field construction and can be placed outside the measured object), and we place them near the edge of the bending plate. The points $p_i^O$, where $i\in\{1,\dots,M\}$, are then triangulated to obtain a set of triangles [see Figs. 5(d), 5(e), and 5(f)], and an error field is constructed for every triangle. Let us take one triangle as an example to illustrate the procedure. Suppose the three vertices of one triangle are $p_n^O$, where $n\in\{i,j,k\}$, the corresponding points converted from the scanner are $p_n^{VO}$, and the corresponding errors between $p_n^O$ and $p_n^{VO}$ are $E_n^{VO}$, which can be considered as three vectors in 3-D space. We construct an error field by performing linear interpolation based on $E_n^{VO}$ within the triangle. The error field can then be considered as the space bounded by the triangle plane through $p_n^{VO}$ and that through $p_n^O$ [see Fig. 5(f)]. We construct the error field for every triangle in this way.
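One way to realize this triangulation is sketched below with SciPy/NumPy; p_O is the M × 3 array of marker coordinates from the total station, and, because the markers lie roughly on a surface, they are triangulated here on their two dominant principal directions, which is our own simplifying assumption rather than a detail stated above.

```python
import numpy as np
from scipy.spatial import Delaunay

def triangulate_markers(p_O):
    """Triangulate the markers p_O [Fig. 5(d)] on a 2-D parameterization."""
    mean = p_O.mean(axis=0)
    _, _, Vt = np.linalg.svd(p_O - mean, full_matrices=False)
    basis = Vt[:2]                                  # two dominant directions
    tri = Delaunay((p_O - mean) @ basis.T)
    return tri, basis, mean

def nearest_triangle(P, tri, basis, mean):
    """Vertex indices of the triangle containing (or closest to) a plate point P."""
    uv = (P - mean) @ basis.T
    simplex = tri.find_simplex(uv)
    if simplex < 0:                                 # outside the triangulation:
        closest = np.linalg.norm(tri.points - uv, axis=1).argmin()
        simplex = np.nonzero((tri.simplices == closest).any(axis=1))[0][0]
    return tri.simplices[simplex]
```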

Next, we correct the result measured by the scanner using the constructed error field; the concrete process of the correction is shown in Figs. 5(g)–5(j). Suppose there is an aligned point cloud, denoted by $Q_a$, measured by the scanner, and P is a point on $Q_a$ [see Fig. 5(g)]; we explain the correction process for P. Our basic idea is to deform the point cloud $Q_a$ according to the error field, which brings $Q_a$ nearer to the laser’s result, considered here as the accurate data. We first construct two perpendicular lines from P to its nearest triangle planes formed by $p_n^{VO}$ and by $p_n^O$, $n\in\{i,j,k\}$, respectively [see Fig. 5(h)], where the feet of the two perpendicular lines are denoted as $C_p$ and $F_p$ [see Fig. 5(i)]. Finally, the vector $\overrightarrow{F_pC_p}$ formed by the points $F_p$ and $C_p$ is considered as the error used to refine the point P [see Fig. 5(i)], and we move P according to $\overrightarrow{F_pC_p}$ to generate its new location. Similarly, the other points on $Q_a$ can be refined with this error compensation method. Thus, the aligned point cloud $Q_a$ is deformed toward the result measured by the laser total station [see Fig. 5(j)], which makes the deformed result more accurate.
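A minimal sketch of this per-point correction is given below, assuming NumPy; tri_VO and tri_O are 3 × 3 arrays holding the vertices $p_n^{VO}$ and $p_n^O$ of the triangle nearest to P (e.g., found with the nearest_triangle sketch above), and the sign convention moves P toward the total-station data.

```python
import numpy as np

def foot_on_plane(P, triangle):
    """Foot of the perpendicular from P onto the plane through a 3-vertex triangle."""
    a, b, c = triangle
    n = np.cross(b - a, c - a)
    n = n / np.linalg.norm(n)
    return P - np.dot(P - a, n) * n

def correct_point(P, tri_VO, tri_O):
    """Move P by the vector between the two perpendicular feet [Figs. 5(h) and 5(i)]."""
    F_p = foot_on_plane(P, tri_VO)      # foot on the plane through p_n^VO
    C_p = foot_on_plane(P, tri_O)       # foot on the plane through p_n^O
    return P + (C_p - F_p)              # deform P toward the total-station result

# Applying correct_point to every point of the aligned cloud Q_a yields the
# refined point cloud of Fig. 5(j).
```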

The above-mentioned method of error compensation can greatly decrease the errors, though it cannot completely eliminate the errors.

3.

Results and Evaluations

3.1.

Experimental Results

An experimental system is designed to validate the performance and effectiveness of the proposed method. The experimental system is shown in Fig. 6. Figure 6(a) shows the system composition, which includes a laser total station, a 3-D scanner based on the structured light technique, two texture projectors (two projectors are used here because the measured object is very large), and a server. The data generated by the scanner and the laser total station are automatically fed into the server. During measurement, the 3-D scanner is moved by a locomotive system that facilitates sliding the scanner over the floor in front of the measured object. The sliding may take some time (about a minute and a half in our experiment). Here, the measured object is a large iron sheet, which is convenient for testing and can be easily deformed. Due to the limitation of the laboratory floor space, the bending plate is erected using a bracket, as Fig. 6(b) shows. In our experimental system, the 3-D scanner is based on a structured light method and uses the stereo vision principle in order to capture the 3-D shape with high resolution and accuracy. The configuration is composed of one Epson EB-C1020XN video projector to generate vertical and horizontal black-and-white striped light patterns and two Canon 600D single lens reflex (SLR) cameras to capture images of the surface under structured lighting. The cameras have a resolution of 4512×3000 pixels and are equipped with lenses having a focal length of 35 mm.

Fig. 6

(a) The experimental system, including a laser total station, a three-dimensional (3-D) scanner based on the structured light technique, two texture projectors, and a server. (b) The bending plate to be measured.


Here, two Epson EB-C1020XN video projectors are used to project rich texture onto the measured object, and one Leica FlexLine TS09 total station is used for error compensation.

In addition, we run the algorithm on a server which has a dual-core 3.0 GHz CPU, 8 GB of RAM, and two NVIDIA GeForce GTX 690 graphics cards with 4096 MB of GDDR5 memory.

We use SLR cameras instead of high-speed video cameras in this experimental system because SLR cameras have the advantages of high resolution and low price. It is vitally important to adopt high-resolution cameras for measurement accuracy, and the price is low enough to be acceptable for an experimental system. We obtained the SDK, which is used to control the cameras so that they capture images automatically, from Canon. Thus, the cameras can be considered as common video cameras except for a lower frame rate.

In this experiment, we validate the effectiveness of the method. Figure 7 shows the measurement results. Three tests are performed, and the meshes in Fig. 7 are sampled from the dense point clouds. First, we depress the central section of the iron sheet, as Fig. 7(a) shows; the measured area is about 7.5 m × 3 m, which is bigger than a commonly used ship plate. We then measure the 3-D shape of the iron sheet by moving the scanner. Around the measured region, we placed markers at intervals of about 0.5 m. Figures 7(b)–7(d) illustrate the measured results viewed from different angles, where different colors denote the different sections measured at different times. Second, we deform the iron sheet, as Fig. 7(e) shows. Figures 7(f)–7(h) are the measured results of Fig. 7(e) seen from different angles. Third, a big deformation is applied to the iron sheet, as Fig. 7(i) shows. Figures 7(j)–7(l) are the measured results of Fig. 7(i) seen from different angles.

Fig. 7

Aligned point clouds; different colors denote different sections measured at different times. (b), (c), and (d) illustrate the 3-D meshes viewed from different angles for (a). (f), (g), and (h) illustrate the 3-D meshes viewed from different angles for (e). (j), (k), and (l) illustrate the 3-D meshes viewed from different angles for (i).


3.2.

Assessment of Alignment Errors

The alignment heavily influences the measurement accuracy, thus the alignment error should be assessed. Here, the alignment error is defined by the average distance between corresponding points of different point clouds; the bigger the average distance, the larger the alignment error. As Fig. 8 shows, given two different point clouds $Q_1$ and $Q_2$, suppose $C_i^1$ is a point on $Q_1$, and $C_j^2$ and $C_k^2$ are two points on $Q_2$. We define the distance between the two point clouds using the following steps: First, for $C_j^2$ we locate its closest point in $Q_1$, denoted as $C_i^1$, and compute the distance $D_{ij}$ between $C_i^1$ and $C_j^2$. Second, for $C_i^1$ we locate its closest point in $Q_2$, denoted as $C_k^2$, and calculate the distance $D_{ik}$ between $C_k^2$ and $C_i^1$. Finally, the distance from $C_i^1$ to $Q_2$ is defined as $(D_{ij}+D_{ik})/2$. Similarly, the distance between each point of $Q_1$ and $Q_2$ can be acquired, and the average distance is then computed over the overlapping region.
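A minimal sketch of this distance measure is given below, assuming SciPy/NumPy; Q1 and Q2 are N × 3 arrays containing the overlapping regions of two adjacent point clouds.

```python
import numpy as np
from scipy.spatial import cKDTree

def average_alignment_distance(Q1, Q2):
    """Average of (D_ij + D_ik)/2 over the overlap, plus its standard deviation."""
    tree1, tree2 = cKDTree(Q1), cKDTree(Q2)
    # For each point C_j^2 in Q2, find its closest point C_i^1 in Q1 (distance D_ij) ...
    d_ij, idx1 = tree1.query(Q2)
    # ... and for that C_i^1, find its closest point C_k^2 back in Q2 (distance D_ik).
    d_ik, _ = tree2.query(Q1[idx1])
    d = 0.5 * (d_ij + d_ik)
    return d.mean(), d.std()
```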

Fig. 8

Assessing the point cloud alignment. $Q_1$ and $Q_2$ are two different point clouds.


Table 1 shows the average distances between adjacent scans and the corresponding standard deviation for the three tests shown in Fig. 7. From Table 1, we can see that all the absolute average accumulated errors for the three tests are less than 2 mm.

Table 1

The average distances between adjacent scans (mm).

| Overlapping region | First test: average distance (absolute value) | Standard deviation | Second test: average distance (absolute value) | Standard deviation | Third test: average distance (absolute value) | Standard deviation |
| Q1, Q2 | 0.46 | 0.2336 | 0.48 | 0.2566 | 0.53 | 0.2437 |
| Q2, Q3 | 0.59 | 0.3159 | 0.52 | 0.2871 | 0.57 | 0.3013 |
| Q3, Q4 | 0.38 | 0.2017 | 0.43 | 0.2602 | 0.43 | 0.2155 |
| Q4, Q5 | 0.54 | 0.2974 | 0.49 | 0.2653 | 0.46 | 0.2778 |
| Accumulated error | 1.97 | | 1.92 | | 1.99 | |

3.3.

Evaluation of Accuracy

The measurement accuracy is critical for methods used in industry. Thus, in order to evaluate the accuracy, we carried out an experiment in which we attached two markers to the iron sheet and then measured the distance between them. First, the distance between the two markers is measured by the laser total station; because this distance is accurate enough for industrial applications, it is considered as the ground truth with which to evaluate the accuracy. Second, we measured the distance with the scanner without error compensation. Third, we measured the distance with the scanner corrected by the laser total station. We repeated this test 13 times, attaching the two markers in different places.

Figure 9(a) shows that the error will increase when the measured distance increases. Figure 9(b) illustrates the error per meter, from which we can see that the error per meter is similar for all measurements. As can be seen from Figs. 9(a) and 9(b), the error after correction by error compensation is smaller than that before correction. In some industries, such as producing shell plates in shipbuilding, the accuracy demand is about 10 mm for a measured plate with a length of 5 m. Thus, our method, which can obtain very dense and more accurate measured results of a large bending plate, is valuable in some manufacturing fields.

Fig. 9

Curves of measurement error. (a) The error measured between two markers and (b) the error per meter.


3.4.

Evaluation of Speed

The speed of execution is also very important for a method used in industry. Because some algorithms in our method can be performed concurrently, a GPU technique is used to improve the computational speed. GPUs are specialized for compute-intensive and highly parallel computations.45 The GPU architecture used in our method is shown in Fig. 10, where two GPUs are used, and each one performs the calculation for one image by decomposing the image into a grid of 8×8 thread blocks, with each thread executing one compute unified device architecture (CUDA) C kernel function that deals with one image pixel. We use the same GPU architecture for all the algorithms; only the kernel function executed in each thread is replaced by the different algorithms.

Fig. 10

Graphic processing unit architecture used in our method.


We tested the execution speed by implementing both a CPU version and a GPU version, and Table 2 shows the comparison. In this experiment, we give the execution time of one measurement using the optical scanner, the time of aligning two point clouds, and the time of compensation using the laser total station. The time spent capturing images and moving the scanner is not considered here. From Table 2, we can see that the proposed method is fast enough for manufacturing from the perspective of the algorithm’s execution speed.

Table 2

Processing speed (ms).

| GPU or CPU | Scanner | Aligning point clouds | Correcting by laser total station | Total |
| CPU | 11,023 | 529 | 218 | 11,770 |
| GPU | 705 | 108 | 88 | 901 |

4.

Conclusions

We present a 3-D measurement method for large-scale bending plates used in manufacturing, which combines the advantages of laser measurement and vision measurement. This method realizes the measurement of large-scale marker-less bending plates. Certainly, the current method has some limitations: First, though our algorithm is fast enough, it takes a long time (about a minute and a half in our experiment) to move the scanner and take pictures with the SLR cameras when measuring a large bending plate (about 7.5 m long in our experiment), which may be an impediment in practice. Second, the location of the current measurement cannot be far from the previous measurement, and there must be at least a 1 m overlapping area between two measurements in order to extract enough feature points for alignment. However, how much these limitations matter depends on the type of application. In the future, we will address the above-mentioned problems by using better cameras, projectors, and control devices.

Acknowledgments

The research work presented in this paper is supported by the innovation funds of industry-academy-research cooperation of Jiangsu Province (Grant No. BY2013066-03) and the Natural Science Foundation of Jiangsu Province of China (Grant No. BK20130473). Thanks to Jinliang Shi and Jun Yin for their technical support.

References

1. J. Shi et al., “Reconstruction of dense three-dimensional shapes for outdoor scenes from an image sequence,” Opt. Eng. 52(12), 123104 (2013). http://dx.doi.org/10.1117/1.OE.52.12.123104

2. S. Agarwal et al., “Building Rome in a day,” Commun. ACM 54(10), 105–112 (2011). http://dx.doi.org/10.1145/2001269

3. Y. Furukawa et al., “Towards Internet-scale multi-view stereo,” in 2010 IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 1434–1441 (2010).

4. Q. Shan et al., “The visual Turing test for scene reconstruction,” in 2013 Int. Conf. on 3DTV-Conf., 25–32 (2013).

5. R. Kurazume et al., “3D laser measurement system for large scale architectures using multiple mobile robots,” in Sixth Int. Conf. on 3-D Digital Imaging and Modeling (3DIM’07), 91–98 (2007).

6. G. Iddan and G. Yahav, “Three-dimensional imaging in the studio and elsewhere,” 48–55 (2001).

7. G. Yahav, G. Iddan, and D. Mandelboum, “3D imaging camera for gaming application,” in Digest of Technical Papers, Int. Conf. on Consumer Electronics (ICCE 2007), 1–2 (2007).

8. S. Schuon et al., “LidarBoost: depth superresolution for TOF 3D shape scanning,” in IEEE Conf. on Computer Vision and Pattern Recognition (CVPR 2009), 343–350 (2009).

9. Y. Cui et al., “3D shape scanning with a time-of-flight camera,” in 2010 IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 1173–1180 (2010).

10. J. Salvi et al., “A state of the art in structured light patterns for surface profilometry,” Pattern Recognit. 43(8), 2666–2680 (2010). http://dx.doi.org/10.1016/j.patcog.2010.03.004

11. S. Zhang, “Recent progresses on real-time 3D shape measurement using digital fringe projection techniques,” Opt. Lasers Eng. 48(2), 149–158 (2010). http://dx.doi.org/10.1016/j.optlaseng.2009.03.008

12. J. Geng, “Structured-light 3D surface imaging: a tutorial,” Adv. Opt. Photon. 3(2), 128–160 (2011). http://dx.doi.org/10.1364/AOP.3.000128

13. Z. Zhang, “Review of single-shot 3D shape measurement by phase calculation-based fringe projection techniques,” Opt. Lasers Eng. 50(8), 1097–1106 (2012). http://dx.doi.org/10.1016/j.optlaseng.2012.01.007

14. A. O. Ulusoy, F. Calakli, and G. Taubin, “Robust one-shot 3D scanning using loopy belief propagation,” in 2010 IEEE Computer Society Conf. on Computer Vision and Pattern Recognition Workshops (CVPRW), 15–22 (2010).

15. D. Desjardins and P. Payeur, “Dense stereo range sensing with marching pseudo-random patterns,” in Fourth Canadian Conf. on Computer and Robot Vision (CRV’07), 216–226 (2007).

16. G. Feng et al., “Laser speckle projection tomography,” Opt. Lett. 38(15), 2654–2656 (2013). http://dx.doi.org/10.1364/OL.38.002654

17. J. Garcia et al., “Three-dimensional mapping and range measurement by means of projected speckle patterns,” Appl. Opt. 47(16), 3032–3040 (2008). http://dx.doi.org/10.1364/AO.47.003032

18. F. Zhu et al., “Accurate 3D measurement system and calibration for speckle projection method,” Opt. Lasers Eng. 48(11), 1132–1139 (2010). http://dx.doi.org/10.1016/j.optlaseng.2009.12.016

19. M. Grosse et al., “Fast data acquisition for three-dimensional shape measurement using fixed-pattern projection and temporal coding,” Opt. Eng. 50(10), 100503 (2011). http://dx.doi.org/10.1117/1.3646100

20. F. J. Yang and X. Y. He, “Digital speckle projection for vibration measurement by applying digital image correlation method,” Key Eng. Mater. 326, 99–102 (2006). http://dx.doi.org/10.4028/www.scientific.net/KEM.326-328

21. B. Pan et al., “Improved speckle projection profilometry for out-of-plane shape measurement,” Appl. Opt. 47(29), 5527–5533 (2008). http://dx.doi.org/10.1364/AO.47.005527

22. Z. Hu et al., “Computer vision for shoe upper profile measurement via upper and sole conformal matching,” Opt. Lasers Eng. 45(1), 183–190 (2007). http://dx.doi.org/10.1016/j.optlaseng.2006.04.004

23. Z. Liu et al., “Simple and fast rail wear measurement method based on structured light,” Opt. Lasers Eng. 49(11), 1343–1351 (2011). http://dx.doi.org/10.1016/j.optlaseng.2011.05.014

24. Q. Zhang et al., “3-D shape measurement based on complementary gray-code light,” Opt. Lasers Eng. 50(4), 574–579 (2012). http://dx.doi.org/10.1016/j.optlaseng.2011.06.024

25. J. Gu et al., “Compressive structured light for recovering inhomogeneous participating media,” in Computer Vision–ECCV 2008, 845–858 (2008).

26. N. Silberman and R. Fergus, “Indoor scene segmentation using a structured light sensor,” in 2011 IEEE Int. Conf. on Computer Vision Workshops (ICCV Workshops), 601–608 (2011).

27. S. Chen, Y. Li, and J. Zhang, “Vision processing for realtime 3-D data acquisition based on coded structured light,” IEEE Trans. Image Process. 17(2), 167–176 (2008). http://dx.doi.org/10.1109/TIP.2007.914755

28. J. Xu et al., “Rapid 3D surface profile measurement of industrial parts using two-level structured light patterns,” Opt. Lasers Eng. 49(7), 907–914 (2011). http://dx.doi.org/10.1016/j.optlaseng.2011.02.010

29. R. A. Newcombe et al., “KinectFusion: real-time dense surface mapping and tracking,” in 2011 10th IEEE Int. Symp. on Mixed and Augmented Reality (ISMAR), 127–136 (2011).

30. S. Izadi et al., “KinectFusion: real-time 3D reconstruction and interaction using a moving depth camera,” in Proc. of the 24th Annual ACM Symp. on User Interface Software and Technology, 559–568 (2011).

31. S. Barone, A. Paoli, and A. V. Razionale, “Shape measurement by a multi-view methodology based on the remote tracking of a 3D optical scanner,” Opt. Lasers Eng. 50(3), 380–390 (2012).

32. A. Paoli and A. V. Razionale, “Large yacht hull measurement by integrating optical scanning with mechanical tracking-based methodologies,” Rob. Comput. Integr. Manuf. 28(5), 592–601 (2012). http://dx.doi.org/10.1016/j.rcim.2012.02.010

33. S. Barone, A. Paoli, and A. V. Razionale, “Three-dimensional point cloud alignment detecting fiducial markers by structured light stereo imaging,” Mach. Vision Appl. 23(2), 217–229 (2012). http://dx.doi.org/10.1007/s00138-011-0340-1

34. P. Hébert, “A self-referenced hand-held range sensor,” in Proc. of the Third Int. Conf. on 3-D Digital Imaging and Modeling, 5–12 (2001).

35. J. Heikkila and O. Silven, “A four-step camera calibration procedure with implicit image correction,” in Proc. 1997 IEEE Computer Society Conf. on Computer Vision and Pattern Recognition, 1106–1112 (1997).

36. Z. Zhang, “Flexible camera calibration by viewing a plane from unknown orientations,” in Proc. of the Seventh IEEE Int. Conf. on Computer Vision, 666–673 (1999).

37. D. Brown, “Decentering distortion of lenses,” Photometric Eng. 32(3), 444–462 (1966).

38. C. Wu, “SIFT on GPU (siftGPU),” (2007). http://cs.unc.edu/~ccwu/siftgpu/

39. Y. Said, M. Atri, and R. Tourki, “Human detection based on integral histograms of oriented gradients and SVM,” in 2011 Int. Conf. on Communications, Computing and Control Applications (CCCA), 1–5 (2011).

40. N. Dalal and B. Triggs, “Histograms of oriented gradients for human detection,” in IEEE Computer Society Conf. on Computer Vision and Pattern Recognition (CVPR 2005), 886–893 (2005).

41. T. Joachims, “Making large scale SVM learning practical,” in Advances in Kernel Methods, 169–184, MIT Press, Cambridge (1999).

42. C. Harris and M. Stephens, “A combined corner and edge detector,” in Alvey Vision Conf., 50 (1988).

43. J. Shi and C. Tomasi, “Good features to track,” in Proc. IEEE Computer Society Conf. on Computer Vision and Pattern Recognition (CVPR’94), 593–600 (1994).

44. D. Lowe, “Distinctive image features from scale-invariant keypoints,” Int. J. Comput. Vision 60(2), 91–110 (2004). http://dx.doi.org/10.1023/B:VISI.0000029664.99615.94

45. W. Wen-mei, GPU Computing Gems Emerald Edition, Morgan Kaufmann, Burlington, Massachusetts (2011).

Biography

Suqin Bai received a master’s degree from the Computer Science and Engineering Department of the Ship Building Institute of East China, Zhenjiang, China, in 2002. She is currently an associate professor with Jiangsu University of Science and Technology, Zhenjiang, China. Her main research interests include computer vision and image processing.

Jinlong Shi received a PhD degree from the Computer Science and Engineering Department of Fudan University, Shanghai, China, in 2012. He is currently an associate professor with Jiangsu University of Science and Technology, Zhenjiang, China. His main research interests include computer vision and computer graphics.

Qiang Qian received a master’s degree from the Computer Science and Engineering Department of Jiangsu University of Science and Technology, Zhenjiang, China, in 2013. He is currently a lecturer with Jiangsu University of Science and Technology, Zhenjiang, China. His main research interests include computer vision and image processing.

Linbin Pang received a master’s degree from the Computer Science and Engineering Department of Hehai University, Nanjing, China, in 2003. He is currently a lecturer with Jiangsu University of Science and Technology, Zhenjiang, China. His main research interests include optical engineering and computer graphics.

Xin Shu received a PhD degree from the Computer Science and Engineering Department of Jiangnan University, Wuxi, China, in 2012. He is currently a lecturer with Jiangsu University of Science and Technology, Zhenjiang, China. His main research interests include computer vision and machine learning.

CC BY: © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Suqin Bai, Jinlong Shi, Qiang Qian, Linbin Pang, and Xin Shu "Three-dimensional measurement of large-scale texture-less bending plates," Journal of Electronic Imaging 24(1), 013001 (5 January 2015). https://doi.org/10.1117/1.JEI.24.1.013001
Published: 5 January 2015
Keywords: 3D scanning; scanners; 3D image processing; 3D metrology; projection systems; cameras; laser scanners
