Three-dimensional (3-D) sensing capability can expand application areas in industrial automation, and many approaches to 3-D sensing have been proposed.1 A typical approach is a stereo system consisting of two cameras. Stereo has difficulty finding correspondences between images because brightness values depend heavily on the reflectivity of the material, the characteristics of the camera, and the illumination. A structured light stripe system has an advantage here because active laser lighting makes finding correspondences easy. It is therefore preferred over stereo systems in industrial automation.
The first necessary step for 3-D acquisition using a structured light stripe system is to find the relative pose between the camera and the laser, usually called extrinsic calibration. Chen and Kak2 proposed an algorithm that gives the transformation matrix converting points on the image plane into the corresponding 3-D coordinates with respect to the camera. They use known world lines and corresponding image points to compute the transformation matrix, which requires at least six world lines. Reid3 extended Chen and Kak's algorithm using known world-plane and image-point correspondences. Huynh4 proposed a method that uses the invariance of the cross-ratio of four points to compute world points on the light stripe plane; it requires two orthogonal planes. Zhou and Zhang5 further extended Huynh's algorithm by partially automating the computation of control points on the image, using only one plane.
All these methods compute the transformation matrix from image to camera coordinates. 3-D information is computed only with respect to the camera because they cannot compute the pose of the laser with respect to the camera directly. In this paper, we propose a calibration algorithm that computes the relative pose of the laser with respect to the camera using two planes, one of which has multiple slits. 3-D information is computed by finding the crossing point of a camera ray and the laser stripe plane.6 Because the relative pose between camera and laser is known, 3-D coordinates can be converted from the camera frame to the laser frame. This gives flexibility in applications where a structured light stripe system is attached to a robot arm, because 3-D information can be converted into whichever coordinate system is needed.
Extrinsic Calibration Using Plane with Slits
Our proposed method uses two planes, where one plane with multiple slits is set perpendicular to the other plane, as shown in Fig. 1, together with the laser, camera, and world coordinate systems.
We attach a chessboard pattern to both planes. Control points on the two planes are used to calibrate the camera using Tsai's algorithm.7 After camera calibration, we can convert the 3-D coordinates of a point from the world coordinate system into the camera coordinate system.
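As a minimal sketch of this conversion step (the names `R_wc` and `t_wc` are assumptions; any Tsai-style calibration returns an equivalent rotation/translation pair), mapping a world point into the camera frame is a single rigid transform:

```python
import numpy as np

def world_to_camera(X_w, R_wc, t_wc):
    """Map a 3-D world point into the camera frame: X_c = R_wc @ X_w + t_wc.

    R_wc, t_wc are the world-to-camera extrinsic parameters recovered
    by camera calibration (hypothetical variable names).
    """
    return R_wc @ X_w + t_wc

# Identity rotation, camera frame shifted 10 units along z
X_c = world_to_camera(np.array([1.0, 2.0, 3.0]),
                      np.eye(3),
                      np.array([0.0, 0.0, 10.0]))
print(X_c)   # [ 1.  2. 13.]
```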
The laser stripe plane passing through the slits in the vertical plane forms separated lines on each plane. In Fig. 1, the four lines formed by these point pairs meet at a point that corresponds to the origin of the laser. We manually select their positions with respect to the world coordinate system and then convert them into the camera coordinate system using the camera's calibration information. The relative location of the laser origin with respect to the camera coordinate system can then be computed by finding the crossing point of the lines. Due to errors in the manual selection, the lines will not meet at exactly one point; therefore, we compute the crossing point by least-squares estimation.
A 3-D line that passes through two points p1 and p2 can be represented as p(t) = p1 + t (p2 − p1).
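The least-squares crossing point can be sketched as follows (a minimal illustration under standard assumptions, not the authors' implementation): for each line with unit direction d, the matrix I − d dᵀ projects onto the subspace perpendicular to the line, and summing the resulting normal equations over all lines gives the point minimizing the total squared distance to them.

```python
import numpy as np

def intersect_lines(points, directions):
    """Least-squares crossing point of several 3-D lines.

    Each line i is given by a point p_i and a direction d_i.
    Minimizes the sum of squared perpendicular distances from the
    solution to every line.
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for p, d in zip(points, directions):
        d = d / np.linalg.norm(d)          # unit direction
        P = np.eye(3) - np.outer(d, d)     # projector perpendicular to line
        A += P
        b += P @ p
    return np.linalg.solve(A, b)

# Two lines (the x- and y-axes) that cross exactly at the origin
pts = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])]
dirs = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])]
origin = intersect_lines(pts, dirs)
print(origin)   # [0. 0. 0.]
```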
Up to this point, we have computed the position of the laser origin with respect to the camera coordinate system, which gives the translation, t, of the laser coordinate system with respect to the camera coordinate system.
Next, we compute the rotation, R, of the laser coordinate system with respect to the camera coordinate system. In Fig. 1, the two planes are perpendicular, so the laser lines on the different planes share this property. We select one line from each plane, as shown in Fig. 1, and set them as two orthogonal axes of the laser coordinate system. The points on each line are selected manually on the image, and their coordinates are obtained with respect to the camera coordinate system using the camera's calibration information. Each axis is normalized to unit length. The third axis of the laser coordinate system is then computed as the cross product of the two axes. This yields three orthogonal axes with respect to the camera coordinate system, which constitute the rotation, R, between the camera and laser coordinate systems.
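The axis construction above can be sketched as follows (a minimal illustration, assuming the two line directions have already been expressed in camera coordinates). Measurement noise can leave the two directions slightly non-orthogonal, so the second axis is re-orthogonalized before the cross product:

```python
import numpy as np

def rotation_from_axes(x_dir, y_dir):
    """Build a rotation matrix from two measured axis directions.

    x_dir, y_dir: directions of the two laser lines in camera
    coordinates.  The second axis is re-orthogonalized
    (Gram-Schmidt) against the first, and the cross product yields
    the third axis.  Columns of R are the laser axes expressed in
    camera coordinates.
    """
    x = x_dir / np.linalg.norm(x_dir)
    y = y_dir - np.dot(y_dir, x) * x       # remove component along x
    y = y / np.linalg.norm(y)
    z = np.cross(x, y)                     # right-handed third axis
    return np.column_stack((x, y, z))

# Slightly non-orthogonal measured directions still yield a valid rotation
R = rotation_from_axes(np.array([1.0, 0.0, 0.1]), np.array([0.0, 1.0, 0.0]))
print(np.allclose(R @ R.T, np.eye(3)))   # True
```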
Finally, we obtain the relative pose, namely the rotation R and translation t, of the laser coordinate system with respect to the camera coordinate system. We can also represent the laser scan plane with respect to the camera coordinate system. The 3-D coordinates of laser points on the image are computed by finding the crossing point of the camera ray and the laser stripe plane.6
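The ray-plane crossing can be sketched as follows (a minimal illustration, assuming the ray starts at the camera origin and the stripe plane is given by a point and normal in camera coordinates):

```python
import numpy as np

def ray_plane_intersection(ray_dir, plane_point, plane_normal):
    """3-D point where a camera ray meets the laser stripe plane.

    The ray starts at the camera origin (0, 0, 0) and travels along
    the back-projected pixel direction ray_dir.  The plane is given
    by any point on it and its normal, both in camera coordinates.
    """
    t = np.dot(plane_normal, plane_point) / np.dot(plane_normal, ray_dir)
    return t * ray_dir

# Ray along the optical axis hitting the plane z = 2
p = ray_plane_intersection(np.array([0.0, 0.0, 1.0]),
                           np.array([0.0, 0.0, 2.0]),
                           np.array([0.0, 0.0, 1.0]))
print(p)   # [0. 0. 2.]
```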
In real applications it is convenient if one axis of the laser coordinate system is aligned with the center of the laser scan plane. The two axes on the laser scan plane obtained by the proposed algorithm do not coincide with the center of the laser stripe plane; they correspond to the two axes on the calibration structure shown in Fig. 1. In Fig. 2, the green axes represent the two axes computed by the extrinsic calibration, and the two red axes are the desired ones for the application. There is a rotation between the two coordinate systems, as shown in Fig. 2. It can be recovered if we know the scan angle of the laser and the coordinate of one of the two end points of the laser stripe plane with respect to the world coordinate system. Given the position of one end point of the laser stripe, we can compute its direction vector using the known scan angle of the laser. The rotation angle between the two coordinate systems is then computed from the two direction vectors.
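The in-plane rotation angle between the two direction vectors can be sketched as follows (a minimal illustration; the sign convention about the plane normal is an assumption):

```python
import numpy as np

def signed_angle(u, v, n):
    """Signed rotation angle (radians) from direction u to direction v
    about the plane normal n, computed via atan2 for a stable sign."""
    u = u / np.linalg.norm(u)
    v = v / np.linalg.norm(v)
    n = n / np.linalg.norm(n)
    return np.arctan2(np.dot(n, np.cross(u, v)), np.dot(u, v))

# Rotating x toward y about z is +90 degrees
ang = signed_angle(np.array([1.0, 0.0, 0.0]),
                   np.array([0.0, 1.0, 0.0]),
                   np.array([0.0, 0.0, 1.0]))
print(np.degrees(ang))   # 90.0
```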
In our experiments, extrinsic calibration of the camera and slit laser system was performed several times while changing the relative pose of the camera and the laser. Figure 3 shows the experimental calibration setup consisting of the camera, the laser, and the calibration structure. The plane with slits is placed perpendicular to the ground plane. A chessboard pattern is attached to each plane, as shown in Fig. 4(a), positioned so that it does not block the slit area on the vertical plane. Figure 4(a) also shows the control points used for camera calibration. The two-dimensional (2-D) coordinates of the control points on the image are refined to subpixel accuracy.
Three pairs of points are used to determine the location and orientation of the laser coordinate system with respect to the camera coordinate system. The coordinate of each point is measured manually with respect to the world coordinate system and converted into camera coordinates using the camera's calibration information.
Table 1 shows the extrinsic parameters of rotation and translation between camera and laser computed by the proposed algorithm. Rows 1 to 3 are results under similar mechanical configurations; we tried to place the camera and laser at the same position as closely as possible each time we assembled them, and the three configurations give similar values. Rows 4 to 6 are results after changing the relative pose of camera and laser: we reduced the field of view of the camera, so its relative position changed. These changes are reflected in the extrinsic calibration results.
Table 1. Result of extrinsic calibration between camera and laser.

| No. | Rotation (deg) (Rx, Ry, Rz) | Translation (mm) (Tx, Ty, Tz) |
|-----|-----------------------------|-------------------------------|
| 1 | (−130.2, −38.9, −59.9) | (−63.7, 0.0, 163.3) |
| 2 | (−131.3, −40.3, −60.7) | (−73.5, −2.2, 162.2) |
| 3 | (−144.4, −50.4, −43.9) | (−70.0, 4.7, 163.4) |
| 4 | (−138.5, −46.6, −50.6) | (−63.7, 1.1, 243.8) |
| 5 | (−129.8, −40.8, −62.1) | (−67.8, 0.3, 233.2) |
| 6 | (−134.4, −45.3, −54.8) | (−81.6, 4.2, 343.6) |
Table 2 shows the accuracy of the 3-D points estimated by the laser and camera system. A 3-D coordinate is obtained by finding the crossing point between the camera ray and the laser stripe plane. We compute 3-D coordinates of ten control points chosen manually, as shown in Fig. 4(b), compute the length between pairs of points, and compare it with the true length. Both absolute and relative errors are computed. The relative error of the 3-D accuracy is below 5% in all configurations, with consistent results throughout the experiment.
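The error metric above can be sketched as follows (a minimal illustration of the length comparison, not the authors' evaluation code):

```python
import numpy as np

def length_errors(p, q, true_length):
    """Absolute (mm) and relative (%) error of the distance between
    two reconstructed 3-D points against the known true length."""
    measured = np.linalg.norm(p - q)
    abs_err = abs(measured - true_length)
    return abs_err, 100.0 * abs_err / true_length

# A 10 mm segment reconstructed as 10.2 mm: 0.2 mm absolute, 2% relative
a_err, r_err = length_errors(np.array([0.0, 0.0, 0.0]),
                             np.array([10.2, 0.0, 0.0]),
                             10.0)
```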
Table 2. The accuracy of estimated 3-D points by the laser and camera system.

| Absolute error (mm) (mean/std) | Relative error (%) (mean/std) |
|--------------------------------|-------------------------------|
We also applied the proposed calibration algorithm to the alignment of two parts using four 3-D sensing systems, as shown in Fig. 5. The same type of camera and laser was used for all four 3-D sensing systems, and the proposed algorithm was used for the extrinsic calibration of each. Table 3 shows the extrinsic calibration result of each system. We tried to configure each system as identically as possible, but small differences can be introduced during the assembly of camera and laser. The extrinsic calibration results of the four sensor systems give similar values. The accuracy of the four sensor systems was evaluated using the same procedure described above. The 3-D reconstruction accuracy of the four sensor systems also shows similar tendencies, as shown in Table 4. We conclude that the proposed algorithm gives consistent results.
Table 3. Result of extrinsic calibration of four sensing systems.

| Sensor | Rotation (deg) (Rx, Ry, Rz) | Translation (mm) (Tx, Ty, Tz) |
|--------|-----------------------------|-------------------------------|
| 1 | (−143.3, −27.5, −57.6) | (−117.6, 2.8, 113.8) |
| 2 | (−143.4, −27.6, −58.2) | (−119.1, 3.1, 106.5) |
| 3 | (−143.1, −27.3, −58.2) | (−118.3, 3.1, 111.0) |
| 4 | (−143.8, −27.4, −57.9) | (−122.0, 4.7, 109.0) |
Table 4. The accuracy of estimated 3-D points of the four sensor systems.

| Absolute error (mm) (mean/std) | Relative error (%) (mean/std) |
|--------------------------------|-------------------------------|
In this paper, we presented a new algorithm for the extrinsic calibration of a camera and slit laser system using a calibration structure consisting of two perpendicular planes, one of which has multiple slits. The proposed algorithm works from a single image, whereas previous algorithms require two or more images acquired under different poses. In addition, the proposed algorithm can compute the relative pose of the laser with respect to the camera directly.
This research was financially supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (2011-0009836).
C. H. Chen and A. C. Kak, "Modeling and calibration of a structured light scanner for 3D robot vision," in Proc. IEEE Conf. on Robotics and Automation, Raleigh, NC, pp. 807–815 (1987).
F. Zhou and G. Zhang, "Complete calibration of a structured light stripe vision sensor through planar target of unknown orientations," Image Vision Comput. 23(1), 59–67 (2005). http://dx.doi.org/10.1016/j.imavis.2004.07.006
D. Lanman and G. Taubin, "Build your own 3D scanner: 3D photography for beginners," SIGGRAPH 2009 Course Notes (2009).
Jong-Eun Ha received his BS and ME degrees in mechanical engineering from Seoul National University, Seoul, Korea, in 1992 and 1994, respectively, and his PhD from the Robotics and Computer Vision Lab at KAIST in 2000. From February 2000 through August 2002, he worked at Samsung Corning, where he developed machine vision systems. From 2002 to 2005, he worked in the Department of Multimedia Engineering at Tongmyong University. Since 2005, he has been an assistant professor in the Department of Automotive Engineering at Seoul National University of Science and Technology. His current research interests are intelligent robots/vehicles and object detection and recognition.
Kang-Wook Her received his BS in automotive engineering from Seoul National University of Science and Technology, Seoul, Korea, in 2011. He is currently an MS student at the Graduate School of NID Fusion Technology, Seoul National University of Science and Technology. His current research interests are intelligent robots/vehicles.