## 1. Introduction

The stereoscopic system is a classic configuration in photography measurement. Accurate camera calibration is especially crucial because it directly determines the measurement accuracy of a stereoscopic system. General calibration of a stereoscopic system consists of estimating the internal and external parameters. The internal parameters determine the image coordinates of the measured points in a scene with respect to the camera coordinate frame, and the external parameters represent the geometrical relationship between the camera and the scene or between the different cameras.

The existing techniques for camera calibration can be classified into two categories: parametric methods and general nonparametric methods.^{1}

Parametric calibration methods are standard approaches to recover the relation between the three-dimensional (3-D) Euclidean world and the two-dimensional image space. However, each sensor type requires a different parametric representation. In photography measurement, pinhole cameras are often used, so we mainly review the work that has been done for regular pinhole cameras.

Considering all the parameters simultaneously, direct nonlinear minimization with an iterative algorithm that minimizes the residual errors of a set of equations is one choice.^{2,3} Its great disadvantage is that the nonlinear optimization, with different types of parameters mixed in one search space, may end in a local solution. Some existing linear methods solve linear equations established from certain constraints to compute a set of intermediate parameters.^{4,5} But in most cases, lens distortions are not considered, so the accuracy of the final solution is relatively low. Other parametric methods compute some parameters first, followed by the others. Tsai^{6} derived a closed-form solution for the external parameters and the focal length and then used an iterative scheme to estimate the other parameters. Straight lines in space were used as constraints to find the right parameters of the distortion model in Refs. 7 to 9. In Ref. 10, geometrical and epipolar constraints were imposed in a nonlinear minimization problem to first correct the points' locations in the images, and then the lens distortion and fundamental matrix were estimated separately.

With a parameterized model of lens distortion, a serious discrepancy arises: the results obtained with calibration data are better than those obtained with testing data.^{11} This discrepancy can be explained by the inadequacy of the parameterized distortion model.

An alternative idea of nonparametric camera calibration was introduced by Grossberg and Nayar,^{12} who used a set of virtual sensing elements called raxels to describe a mapping from incoming scene rays to photosensitive elements on the image detector; a more general approach was developed in Refs. 13 to 15. In this generic method, several planar calibration objects are used to determine the corresponding optical ray of each pixel, and it is powerful because it applies equally well to arbitrary imaging systems. But for close-range photogrammetry, pixel-by-pixel calibration does not achieve the accuracy required for high-precision measurements.

We have designed a pure optical distortion correction method for calibrating perspective imaging systems. The proposed method inherits the idea of nonparametric modeling and uses a precise rotating platform and subpixel image processing to realize the mapping between incoming scene rays and photosensitive elements on the image detector (every photosensitive element could be divided into many parts to improve the accuracy if necessary). We then applied it to the calibration of a stereo vision system. In contrast to the standard parametric approach, it decouples the distortion estimation from the calibration of the external parameters of the two cameras, thus avoiding error compensation between them.^{16,17}

## 2. Proposed Camera Correction Process

In the pinhole camera model, an object point in the scene and its image point obey a geometrical constraint: the object point $P$, the image point $p$, and the optical center of the camera $O$ lie on one line.

Because of distortion in real circumstances, the optical rays are refracted when passing through the optical center. We therefore obtain distorted image points that deviate from the ideal ones. The parametric methods try to establish mathematical models that relate the distorted image plane to the undistorted one correctly. This is difficult, however, since lens distortion exhibits both regular and irregular components.

We therefore propose a pure optical correction method that records as many of the rays entering the lens as possible. Every incident ray forms an image point, and the direct approach is to relate the coordinates of the image point to the angles that determine the incident ray.

First, we establish a Cartesian coordinate system as shown in Fig. 1. Take the optical center as the origin $O$ and the optical axis as the $Z$-axis, and make the $X$-axis and $Y$-axis parallel to the vertical and horizontal axes of the image plane, respectively. Then, rotate the camera around the $X$-axis and photograph a fixed straight line in the scene at regular angular intervals. At every angle ${\alpha}_{i}$ that determines the plane ${\pi}_{i}$, we get the distorted image ${l}_{i}$ of the straight line ${L}_{i}$ on the image detector. When all of the angles in the field of view are recorded, a database of the one-to-one correspondence between the angle ${\alpha}_{i}$ and the curved line ${l}_{i}$ is established. Then, rotating the camera by 90 deg around the $Z$-axis and repeating the procedure, we obtain the second angle ${\beta}_{i}$.
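The angle-to-curve database described above can be sketched as follows. This is a minimal illustration, not the authors' implementation; the polynomial curve fit and the `curves` input (mapping each rotation angle to the subpixel image points of the recorded line) are assumptions:

```python
import numpy as np

def build_angle_database(angles, curves, deg=3):
    """For each rotation angle alpha_i, fit a polynomial u = f(v) to the
    subpixel image points (u, v) of the distorted line l_i and store the
    coefficients, giving the angle-to-curve correspondence."""
    db = {}
    for a in angles:
        u, v = curves[a]
        db[a] = np.polyfit(v, u, deg)  # distorted line as a polynomial in v
    return db

def curve_u(db, alpha, v):
    """Evaluate the stored curve for angle alpha at image coordinate v."""
    return np.polyval(db[alpha], v)
```

The same structure is built a second time after the 90-deg rotation, yielding the second family of curves for the angles ${\beta}_{i}$.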

Since, in practice, not all incident rays can be recorded, the image plane is divided into many square grids. If the interval of the rotating angle is set small enough, high-accuracy results can be obtained. Given an arbitrary point $P$ in the measuring field, as shown in Fig. 1, its correspondence on the image plane will lie in a small grid. We can get the fitted angle by bilinear interpolation^{18} as

## (1)

$${\alpha}_{(u,v)}={\alpha}_{i}+({\alpha}_{i+1}-{\alpha}_{i})\times \frac{u-{u}_{i}}{{u}_{i+1}-{u}_{i}},\quad {\beta}_{(u,v)}={\beta}_{i}+({\beta}_{i+1}-{\beta}_{i})\times \frac{v-{v}_{i}}{{v}_{i+1}-{v}_{i}}.$$

So, with any measured point's image coordinate $(u,v)$, we can calculate its corresponding angle $(\alpha ,\beta )$. If the image point lies exactly on a recorded curved line, we can get the corresponding angle directly by searching the database.
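The angle lookup of Eq. (1) can be sketched as a pair of one-dimensional linear interpolations over the recorded grid (a minimal illustration; the grid layout as sorted coordinate lists is an assumption):

```python
import bisect

def interp_angle(x, grid_x, grid_angle):
    """Linearly interpolate the calibrated angle at image coordinate x,
    given the recorded grid coordinates and their angles, as in Eq. (1)."""
    i = bisect.bisect_right(grid_x, x) - 1
    i = max(0, min(i, len(grid_x) - 2))          # clamp to a valid cell
    t = (x - grid_x[i]) / (grid_x[i + 1] - grid_x[i])
    return grid_angle[i] + (grid_angle[i + 1] - grid_angle[i]) * t

def pixel_to_ray(u, v, grid_u, alphas, grid_v, betas):
    """Map an image point (u, v) to its incident-ray angles (alpha, beta)."""
    return interp_angle(u, grid_u, alphas), interp_angle(v, grid_v, betas)
```

Each pixel is thus replaced by the pair of angles that determines its incident ray, which is the nonparametric counterpart of undistorting the image.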

## 3. Calibration of a Stereoscopic System

When the above-mentioned procedures are completed, the stereoscopic system can easily be calibrated by placing a fixed-length one-dimensional (1-D) reference target arbitrarily in the field of view, a commonly used approach.^{3,19} As shown in Fig. 2, two feature points are fixed on the ends of the reference target, with the distance between them known exactly in advance.

The external parameters of a stereoscopic system include the rotation matrix $R$ and translation vector $T$, which can be represented by the essential matrix $E$. Let $F$ be the fundamental matrix of the stereoscopic system; then we have

## (2)

$$F=E={[T]}_{\times}R,$$

where ${[T]}_{\times}$ is the skew-symmetric matrix defined by $T$ (since the virtual image points are expressed in normalized coordinates, $F$ coincides with $E$). The fundamental matrix can be computed with the eight-point algorithm proposed in Refs. 4 and 20. At least seven pairs of corresponding virtual image points and the distance $L$ are needed to obtain $R$, $T$, and the 3-D coordinates of the reconstructed feature points, which are used as the initial values of the following nonlinear minimization.
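The linear eight-point estimate of $F$ can be sketched as follows (an illustrative implementation of the classic algorithm of Refs. 4 and 20; Hartley's coordinate normalization is omitted for brevity):

```python
import numpy as np

def eight_point(x1, x2):
    """Linear eight-point estimate of the fundamental matrix from N >= 8
    correspondences, given as (N, 2) arrays of normalized image points."""
    n = len(x1)
    A = np.zeros((n, 9))
    for k in range(n):
        u1, v1 = x1[k]
        u2, v2 = x2[k]
        # each correspondence gives one row of the constraint x2^T F x1 = 0
        A[k] = [u2 * u1, u2 * v1, u2, v2 * u1, v2 * v1, v2, u1, v1, 1.0]
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)          # null vector of A, reshaped to 3x3
    U, S, Vt = np.linalg.svd(F)
    S[2] = 0.0                        # enforce the rank-2 constraint
    return U @ np.diag(S) @ Vt
```

With noise-free correspondences the recovered $F$ satisfies the epipolar constraint to machine precision; with real data, the estimate seeds the nonlinear refinement described below.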

Then, we can establish the minimization function to obtain optimal values of the external parameters using the fixed length of the 1-D target and geometrical constraints.

As shown in Fig. 3, ${{}^{i}P}_{1}$ and ${{}^{i}P}_{2}$ represent the two feature points of the 1-D target, where $i$ indexes the positions at which the 1-D target has been placed. The plane on which ${{}^{i}P}_{1}$ should lie is ${{}^{i}\pi}_{1}^{j}$, where $j$ indexes the plane, since a feature point is the intersection of four planes; $d(,)$ represents the distance between a point and a plane. Then, we have the error equation as follows:

## (3)

$${e}_{1}(X,P)=\sum _{i=1}^{t}\sum _{j=1}^{4}[d({P}_{1}^{i},{{}^{i}\pi}_{1}^{j})+d({P}_{2}^{i},{{}^{i}\pi}_{2}^{j})].$$

Denote $L$ and $D(,)$ as the true and measured distances between the two feature points of the 1-D target, respectively; then we have

## (4)

$${e}_{2}(X,P)=\sum _{i=1}^{t}{[D({P}_{1}^{i},{P}_{2}^{i})-L]}^{2},$$
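Evaluating the point-to-plane error of Eq. (3) is straightforward once each plane is stored as a normal $n$ and offset $d$ (a representation we assume here for illustration):

```python
import numpy as np

def point_plane_distance(p, plane):
    """Distance d(p, pi) from a 3-D point p to a plane given as (n, d),
    i.e., the plane n . x + d = 0."""
    n, d = plane
    return abs(np.dot(n, p) + d) / np.linalg.norm(n)

def e1(points, planes):
    """Sum of point-to-plane distances over all feature points, as in
    Eq. (3); planes[k] lists the planes (four per point in the paper)
    on which feature point k should lie."""
    return sum(point_plane_distance(p, pl)
               for p, pls in zip(points, planes) for pl in pls)
```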

where $X={[{r}_{x},{r}_{y},{r}_{z},{t}_{x},{t}_{y},{t}_{z}]}^{T}$; ${[\begin{array}{ccc}{r}_{x}& {r}_{y}& {r}_{z}\end{array}]}^{T}$ is the vector form of the rotation matrix $R$. With Eqs. (3) and (4), the minimization equation is established as follows:

## (5)

$$\underset{X,P}{\mathrm{min}}\text{ }e(X,P)={e}_{1}(X,P)+{e}_{2}(X,P).$$

The nonlinear minimization takes as inputs the angles from the optical center to the feature points and the real length of the 1-D target. The external parameters $X$ and feature points $P$ are adjusted to minimize Eq. (5). The algorithm is a Levenberg–Marquardt nonlinear minimization that starts from the initial values ${X}_{0}$ and ${P}_{0}$ and ends with the optimized solution of the external parameters.
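The Levenberg–Marquardt iteration can be sketched with a minimal loop over a generic residual function, using a numerical Jacobian (an illustrative sketch, not the authors' implementation):

```python
import numpy as np

def levenberg_marquardt(residual, x0, iters=50, lam=1e-3):
    """Minimal Levenberg-Marquardt loop with a forward-difference
    Jacobian; `residual` maps a parameter vector to a residual vector."""
    x = np.asarray(x0, float)
    for _ in range(iters):
        r = residual(x)
        J = np.empty((r.size, x.size))
        for j in range(x.size):                  # numerical Jacobian
            dx = np.zeros_like(x)
            dx[j] = 1e-6
            J[:, j] = (residual(x + dx) - r) / 1e-6
        # damped normal equations: (J^T J + lam I) step = -J^T r
        step = np.linalg.solve(J.T @ J + lam * np.eye(x.size), -J.T @ r)
        if np.linalg.norm(residual(x + step)) < np.linalg.norm(r):
            x, lam = x + step, lam * 0.5         # accept, relax damping
        else:
            lam *= 10.0                          # reject, increase damping
    return x
```

In the calibration itself, the residual vector stacks the point-to-plane terms of Eq. (3) and the length terms of Eq. (4) over the parameter vector $(X, P)$.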

## 4. Experiments

### 4.1. Optical Correction for the Camera

A pure optical method requires a high-accuracy straight line in the scene that can be captured by the camera. Note that subpixel detection of image dots gives more reliable results than cross detection.^{21} A photoreflective material seems like a good choice, but it is liable to be affected by lighting variation, and an extra light source is often needed to obtain a high contrast ratio. Since the repetitive positioning accuracy of a light-emitting diode's (LED) image is better than 0.02 pixels,^{22–24} a set of near-infrared LEDs was used, aligned into a straight line by a high-accuracy linear guide.

The rotation of the camera around an axis was performed by a multitooth dividing table. Making the optical center coincide with the rotating axis of the multitooth dividing table is critical to the overall accuracy. The rotating axis was determined using a six-dimensional (6-D) high-accuracy adjustable platform, a physical axis, and a dial gauge, as shown in Fig. 4. The multitooth dividing table can rotate 360 deg. We adjusted the 6-D high-accuracy adjustable platform to keep the values measured by the dial gauge nearly invariant while the multitooth dividing table was rotating.

We used collimated semiconductor laser beams shaped by an aperture stop as narrow parallel beams; in space, at least two beams are needed to determine a point. Here, we used three beams, as shown in Fig. 5.

The process of aligning the optical center with the rotating axis was as follows:

1. As shown in Fig. 4, make the rotating axis coincide with the physical axis to visualize the rotating axis;

2. Adjust three beams to meet at a point right on the physical axis;

3. Replace the physical axis with a camera and adjust the 6-D high-accuracy adjustable platform to make the three beams pass through the optical center, as shown in Fig. 5;

4. Use the multitooth dividing table to control the rotation angles of the camera precisely. For the multitooth dividing table, the minimum rotation interval is 1 arc sec and the rotation accuracy is 0.6 arc sec.

To test the performance of the pure optical method, a corrected camera was placed at several locations to capture multiple images of the straight line, composed of a set of feature points, in front of the camera. From the image coordinates of the feature points, we obtained the corresponding horizontal and vertical angles through the proposed method, and all the feature points were reprojected onto a virtual image plane perpendicular to the optical axis. Then, their regression line was computed, and the root mean square (RMS) distance from each feature point to the line was used as the error measure.
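The error measure above can be sketched as follows (a minimal illustration; the least-squares line fit via `np.polyfit` is an assumption about the regression step):

```python
import numpy as np

def line_rms(u, v):
    """Fit a regression line v = a*u + b to the reprojected feature
    points and return the RMS perpendicular distance to that line."""
    a, b = np.polyfit(u, v, 1)
    d = (a * u - v + b) / np.hypot(a, 1.0)   # signed point-to-line distance
    return np.sqrt(np.mean(d ** 2))
```

A perfectly corrected camera would reproject the LED line to exactly collinear points, giving an RMS of zero; the residual RMS quantifies the remaining distortion.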

Figure 6 shows the orientations of the lines on the virtual image plane corrected by our method, and Table 1 shows the RMS errors of the corrected and uncorrected lines, from which we can see that the RMS errors (in pixels) of the feature points with respect to their regression lines were reduced by our method.

## Table 1

The root mean square error (RMSE, in pixels) of the feature points to their regression lines.

| | Corrected by our method | Uncorrected |
|---|---|---|
| Line 1 | 0.0535 | 0.0666 |
| Line 2 | 0.1007 | 0.1062 |
| Line 3 | 0.0644 | 0.1637 |
| Line 4 | 0.1033 | 0.1763 |
| Line 5 | 0.1099 | 0.2034 |
| Line 6 | 0.0771 | 0.2371 |

### 4.2. Spatial Measurement by the Stereoscopic System

Two FL2G-50S5M cameras with a resolution of $2448\times 2048$ pixels, each equipped with a 23-mm lens, were used to set up a stereoscopic system. The working distance was about 8000 mm, the measurement range was $4000\times 5000\text{ }\mathrm{mm}$, and the baseline between the two cameras was about 7000 mm. The 1-D target had two feature points separated by 1026.150 mm, and the target could be placed at any orientation and viewed from different viewpoints within the field of view.

After the optimization of the external parameters $X$ and feature points $P$ described in Sec. 3, we obtained $M$, the final result of the external parameters; the errors between the real distance of the two feature points on the target and the measured ones are listed in Table 2.

The 1-D target was then randomly placed another 10 times at different positions, including near the fringe of the field of view, and the measured distance between the two endpoints of the target was used to evaluate the measuring accuracy of the stereoscopic system. From the calibration and measurement data listed in Tables 2 and 3, we can see that the RMS errors are of the same order of magnitude, and both are $<0.1\text{ }\mathrm{mm}$.
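As a consistency check, the RMS errors reported in Tables 2 and 3 can be reproduced from the listed per-trial errors:

```python
import numpy as np

# per-trial errors (mm) copied from Tables 2 and 3
calib_err = [0.002467, 0.042043, -0.19014, -0.038616,
             -0.003472, -0.068824, 0.026162, 0.029838]
test_err = [-0.124656, 0.008642, -0.049544, -0.075744,
            -0.101904, 0.004569, 0.046261, 0.075026]

def rms(errors):
    """Root mean square of a list of errors."""
    return float(np.sqrt(np.mean(np.square(errors))))

# approximately 0.0756 mm and 0.0724 mm, matching Tables 2 and 3
print(rms(calib_err), rms(test_err))
```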

## Table 2

Calibration results.

| | | Measured distance of 1-D target (mm) | Error (mm) |
|---|---|---|---|
| Calibration data | 1 | 1026.15247 | 0.002467 |
| | 2 | 1026.19204 | 0.042043 |
| | 3 | 1025.95986 | −0.19014 |
| | 4 | 1026.11138 | −0.038616 |
| | 5 | 1026.14653 | −0.003472 |
| | 6 | 1026.08118 | −0.068824 |
| | 7 | 1026.17616 | 0.026162 |
| | 8 | 1026.17984 | 0.029838 |
| RMS error (mm) | | | 0.075616 |

## Table 3

Measurement results.

| | | Measured distance of 1-D target (mm) | Error (mm) |
|---|---|---|---|
| Testing data | 1 | 1026.02534 | −0.124656 |
| | 2 | 1026.15864 | 0.008642 |
| | 3 | 1026.10046 | −0.049544 |
| | 4 | 1026.07426 | −0.075744 |
| | 5 | 1026.0481 | −0.101904 |
| | 6 | 1026.15457 | 0.004569 |
| | 7 | 1026.19626 | 0.046261 |
| | 8 | 1026.22503 | 0.075026 |
| RMS error (mm) | | | 0.072439 |

## 5. Conclusion

A novel camera calibration method based on nonparametric models has been presented. First, a database that removes the influence of lens distortion is obtained by a pure optical adjustment method. Second, a stereoscopic system has been established to test the performance of the proposed method, and the external parameters of the cameras can be accurately acquired with the 1-D target. This method dispenses with parametric camera distortion models and is applicable to a central camera equipped with any lens. Moreover, the coupling among the intrinsic and external parameters, which may otherwise lead to instability and mutual compensation, is avoided. On the other hand, as the subdivision number of angles increases, the correction time increases too. However, since camera correction is an off-line process, spending more time for higher accuracy is acceptable.

## Acknowledgments

This research was supported by the National Natural Science Funds for Distinguished Young Scholars of China (Grant No. 51225505) and the National High Technology Research and Development Program of China (863 Program, Grant No. 2012AA041205). The authors would like to express their sincere thanks for this support; the comments from the reviewers and the editor are also very much appreciated.

## References

A. K. Dunne, J. Mallon, and P. F. Whelan, “A comparison of new generic camera calibration with the standard parametric approach,” in MVA2007—IAPR Conference on Machine Vision Applications, Tokyo, Japan (16–18 May 2007).

Z. Zhang, “A flexible new technique for camera calibration,” IEEE Trans. Pattern Anal. Mach. Intell. 22(11), 1330–1334 (2000). http://dx.doi.org/10.1109/34.888718

J. Sun et al., “A calibration method for stereo vision sensor with large FOV based on 1D targets,” Opt. Lasers Eng. 49(11), 1245–1250 (2011). http://dx.doi.org/10.1016/j.optlaseng.2011.06.011

H. C. Longuet-Higgins, “A computer algorithm for reconstructing a scene from two projections,” Nature 293, 133–135 (1981). http://dx.doi.org/10.1038/293133a0

X. Armangué and J. Salvi, “Overall view regarding fundamental matrix estimation,” Image Vis. Comput. 21(2), 205–220 (2003). http://dx.doi.org/10.1016/S0262-8856(02)00154-3

R. Tsai, “A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses,” IEEE J. Rob. Autom. 3(4), 323–344 (1987). http://dx.doi.org/10.1109/JRA.1987.1087109

B. Prescott and G. F. McLean, “Line-based correction of radial lens distortion,” Graph. Models Image Process. 59(1), 39–47 (1997). http://dx.doi.org/10.1006/gmip.1996.0407

T. Pajdla and T. Werner, “Correcting radial lens distortion without knowledge of 3-D structure,” Technical Report TR97-138, Faculty of Electrical Engineering, Czech Technical University, Praha, Czech Republic (1997).

F. Devernay and O. Faugeras, “Straight lines have to be straight,” Mach. Vis. Appl. 13(1), 14–24 (2001). http://dx.doi.org/10.1007/PL00013269

C. Ricolfe-Viala, A. J. Sanchez-Salmeron, and E. Martinez-Berti, “Calibration of a wide angle stereoscopic system,” Opt. Lett. 36(16), 3064–3066 (2011). http://dx.doi.org/10.1364/OL.36.003064

C. Ricolfe-Viala, A. J. Sanchez-Salmeron, and E. Martinez-Berti, “Accurate calibration with highly distorted images,” Appl. Opt. 51(1), 89–101 (2012). http://dx.doi.org/10.1364/AO.51.000089

M. D. Grossberg and S. K. Nayar, “A general imaging model and a method for finding its parameters,” in 8th Int. Conf. Proc. Computer Vision, ICCV 2001, Vol. 2, pp. 108–115, IEEE, Piscataway, New Jersey (2001).

P. Sturm and S. Ramalingam, “A generic calibration concept: theory and algorithms,” Research Report 5058, INRIA, France (2003).

P. Sturm and S. Ramalingam, “A generic concept for camera calibration,” in Computer Vision—ECCV 2004, pp. 1–13, Springer, Berlin, Heidelberg (2004).

S. Ramalingam, P. Sturm, and S. K. Lodha, “Towards complete generic camera calibration,” in IEEE Computer Society Conf. Computer Vision and Pattern Recognition, CVPR 2005, Vol. 1, pp. 1093–1098, IEEE, Piscataway, New Jersey (2005).

J. Weng, P. Cohen, and M. Herniou, “Camera calibration with distortion models and accuracy evaluation,” IEEE Trans. Pattern Anal. Mach. Intell. 14(10), 965–980 (1992). http://dx.doi.org/10.1109/34.159901

T. A. Clarke, X. Wang, and J. G. Fryer, “The principal point and CCD cameras,” Photogramm. Rec. 16(92), 293–312 (1998). http://dx.doi.org/10.1111/phor.1998.16.issue-92

T. M. Lehmann, C. Gönner, and K. Spitzer, “Survey: interpolation methods in medical image processing,” IEEE Trans. Med. Imag. 18(11), 1049–1075 (1999). http://dx.doi.org/10.1109/42.816070

H. Lei, Z. Wei, and G. Zhang, “A simple global calibration method based on 1D target for multi-binocular vision sensor,” in Int. Symp. Computer Science and Computational Technology, ISCSCT’08, Vol. 2, pp. 290–294, IEEE, Piscataway, New Jersey (2008).

R. I. Hartley, “In defense of the eight-point algorithm,” IEEE Trans. Pattern Anal. Mach. Intell. 19(6), 580–593 (1997). http://dx.doi.org/10.1109/34.601246

J. M. Lavest, M. Viala, and M. Dhome, “Do we really need an accurate calibration pattern to achieve a reliable camera calibration?,” in Computer Vision—ECCV’98, pp. 158–174, Springer, Berlin, Heidelberg (1998).

Z. Jian, Study on the Precision Amelioration of Optical Coordinate Measuring, Tianjin University, Tianjin (2009).

Z. Guangjun, Machine Vision, Science Press, Beijing (2008).

J. Ares and J. Arines, “Influence of thresholding on centroid statistics: full analytical description,” Appl. Opt. 43(31), 5796–5805 (2004). http://dx.doi.org/10.1364/AO.43.005796

## Biography

**Wei Wang** is a PhD candidate in precision measuring technology and instruments at Tianjin University, where he received his MS degree in precision measuring technology and instruments in 2011. His research interests are photoelectric precision measuring and photography measurement.

**Ji-Gui Zhu** received his BS and MS degrees from the National University of Defense Science and Technology of China in 1991 and 1994, and his PhD degree in 1997 from Tianjin University, China. He is now a professor at the State Key Laboratory of Precision Measurement Technology and Instruments, Tianjin University. His research interests are focused on laser and photoelectric measuring technology, such as industrial online measurement, and large-scale precision metrology.