Because of its large field of view (FOV), the fish-eye camera has been widely used in applications such as robot navigation, visual surveillance, and three-dimensional (3-D) reconstruction.1 However, fish-eye lenses often introduce a large amount of radial distortion, which causes straight lines in the scene to map to conics in the image. As a result, a calibration procedure is often required for distortion correction.2
The calibration procedure typically consists of two components: model selection and parameter estimation. In recent decades, a number of models have been developed for fish-eye lenses, such as the polynomial model,3 the division model,4 and the rational function model.5 In order to estimate the model parameters, various methods using points, lines, and conics have also been proposed in Refs. 6 and 7.
Ideal fish-eye lenses are constructed with the aim of complying with the equidistant projection function.8,9 It has been demonstrated that under this model, parallel lines are projected onto circular arcs and all the circles intersect at two vanishing points, so the centers of the circles are collinear.10 This property is of great importance because it allows a fish-eye camera to be calibrated using a single image of two sets of parallel lines in the scene, which is very convenient for practical applications. As a result, the fitting of center collinear circles lies at the core of fish-eye camera calibration.
Based on the center collinearity property, Geyer et al.11 proposed a two-step fitting approach. They first fitted the circles separately and then refined the centers of the circles using the collinearity constraint. However, its accuracy is limited because the estimated center collinear circles do not necessarily intersect at two points. Recently, Hughes et al.10 proposed an iterative approach to solve the fitting problem. Instead of fitting the circles separately, they utilized all the available data points to fit the circles simultaneously. On each iteration, they fixed one vanishing point, updated the other, and enforced all the circles to intersect at the two vanishing points. This approach is more robust against noise and thus greatly improves the accuracy.
In this paper, we propose a direct approach for fitting center collinear circles. First, we build a coordinate system for the circles and give the objective function for nonlinear optimization. Then we use the Levenberg-Marquardt (LM) algorithm12 to solve the optimization problem. The main advantage of our approach is that the two vanishing points are updated simultaneously. Experimental results show that the accuracy of our approach matches that reported earlier,10 while it is much faster.
The remainder of this paper is organized as follows. Section 2 briefly describes the equidistant fish-eye projection. Section 3 presents three fitting approaches, including two existing approaches and the proposed direct approach. Section 4 gives our experimental results on both synthetic and real data, and Sec. 5 offers our conclusion.
Equidistant Fish-Eye Projection
Ideal fish-eye cameras are manufactured to follow the equidistance mapping function, such that the distance between a projected point and the optical center of the image is proportional to the incident angle of the projected ray, scaled only by the equidistance parameter f, as described by the projection equation r = f·θ, where θ is the incident angle of the ray and r is the radial distance of the projected point from the optical center.13–15 Figure 1 illustrates the equidistant projection for a simple model of an equidistant camera system.
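As a sketch of this mapping (using f for the equidistance parameter and θ for the incident angle, which is our notation for this illustration), the equidistant projection can be contrasted with the perspective one:

```cpp
#include <cmath>

// Equidistant projection: the radial distance r of a projected point from
// the optical center is proportional to the incident angle theta of the
// projected ray, r = f * theta, where f is the equidistance parameter.
double equidistantRadius(double f, double theta) {
    return f * theta;
}

// For comparison, a perspective (pinhole) camera maps r = f * tan(theta),
// which diverges as theta approaches 90 deg; the equidistant model stays
// finite, which is what allows fields of view near (or beyond) 180 deg.
double perspectiveRadius(double f, double theta) {
    return f * std::tan(theta);
}
```

Under the equidistant model, the image radius grows linearly with the incident angle, which is why straight scene lines bend into conics in the image.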
Due to the radial distortion introduced by fish-eye lenses, straight lines in the scene map to conics in the image. It has been proved that under this model, parallel lines are projected onto circular arcs and all the circles intersect at two vanishing points,10 as shown in Fig. 2. It has also been demonstrated that if the parameters of two sets of circles are known, all the intrinsic parameters required for fish-eye camera calibration can be determined.10 As a result, estimating the parameters of the circles lies at the core of equidistant fish-eye camera calibration. In the next section, we present three fitting approaches for parameter estimation, including two existing approaches and the proposed direct approach.
Fitting of Center Collinear Circles
Fitting a single circle to a set of data points is a well-investigated nonlinear least-squares problem.16 However, little attention has been paid to the fitting of multiple circles. In this paper, we focus on the fitting of center collinear circles, i.e., a set of circles whose centers are collinear, as illustrated in Fig. 3. The most important property of center collinear circles is that all of them intersect at two common points. Thus the fitting of the circles is always coupled with the estimation of the positions of these two points. In this section, we first briefly describe two existing methods in the literature and then introduce the proposed fitting approach.
Geyer et al.11 proposed a two-step approach to estimate the parameters of center collinear circles for catadioptric camera calibration. The approach is briefly described as follows.
1. Separating the data points into several sets of points. Fitting one circle to each set of points.
2. Fitting a line to the centers of all the circles. Updating the centers by projecting them onto this line.
This approach first estimates the parameters of each circle separately, and then enforces the centers of the circles to be collinear. Note that it does not update the radii of the circles, and therefore, it cannot guarantee that all the circles intersect at two points.
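The single-circle fits in step 1 can be done with any standard fitter; the paper does not specify which one is used, but as an illustration, a minimal algebraic (Kåsa-style) fit, which reduces circle fitting to a 3 × 3 linear least-squares problem, might look like:

```cpp
#include <vector>
#include <cmath>
#include <array>

struct Circle { double cx, cy, r; };

// Algebraic (Kasa) circle fit: minimize the sum over all points of
// (x^2 + y^2 + D*x + E*y + F)^2, which is linear least squares in D, E, F.
// The fitted center is (-D/2, -E/2) and the radius sqrt(D^2/4 + E^2/4 - F).
Circle fitCircleKasa(const std::vector<std::array<double,2>>& pts) {
    // Accumulate the 3x3 normal equations A * [D E F]^T = b.
    double A[3][3] = {{0,0,0},{0,0,0},{0,0,0}};
    double b[3] = {0,0,0};
    for (const auto& p : pts) {
        double x = p[0], y = p[1], z = x*x + y*y;
        double row[3] = {x, y, 1.0};
        for (int i = 0; i < 3; ++i) {
            for (int j = 0; j < 3; ++j) A[i][j] += row[i]*row[j];
            b[i] += -z * row[i];
        }
    }
    // Solve by Cramer's rule (acceptable for a 3x3 system).
    auto det3 = [](double m[3][3]) {
        return m[0][0]*(m[1][1]*m[2][2]-m[1][2]*m[2][1])
             - m[0][1]*(m[1][0]*m[2][2]-m[1][2]*m[2][0])
             + m[0][2]*(m[1][0]*m[2][1]-m[1][1]*m[2][0]);
    };
    double d = det3(A), sol[3];
    for (int k = 0; k < 3; ++k) {
        double M[3][3];
        for (int i = 0; i < 3; ++i)
            for (int j = 0; j < 3; ++j) M[i][j] = (j == k) ? b[i] : A[i][j];
        sol[k] = det3(M) / d;
    }
    double D = sol[0], E = sol[1], F = sol[2];
    return { -D/2.0, -E/2.0, std::sqrt(D*D/4.0 + E*E/4.0 - F) };
}
```

The algebraic fit minimizes an algebraic rather than geometric distance, which is one source of the bias on short, noisy arcs discussed later.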
Recently, Hughes et al. proposed an iterative approach that alternately updates one point while fixing the other.10 This approach consists of the following steps.
1. Separating the data points into several sets of points. Fitting one circle to each set of points.
2. Calculating the intersection points of the circles and determining the initial positions of the two points.
3. Fixing one point and updating the other using LM optimization.
4. Repeating step (3) until convergence is observed or the maximum number of iterations is reached.
When the algorithm terminates, we obtain the positions of the two points as well as the parameters of the circles. Note that this approach enforces the circles to intersect at two points, so it is expected to be more accurate than the two-step approach. However, since the approach updates only one point at a time, it may take many iterations to converge.
In this subsection, we present a direct approach for fitting of center collinear circles. We first build a coordinate system to facilitate derivations, and then give the equations of the center collinear circles using the vanishing points constraint. Next we propose to solve the fitting problem using LM optimization, and give the objective function as well as its Jacobian. Finally we discuss some issues related to its implementation.
Assume the center collinear circles intersect at two vanishing points P1 and P2. After a translation t and a rotation by angle θ, we define the midpoint of line segment P1P2 as the origin and the line determined by P1 and P2 as the X-axis, as shown in Fig. 3. Since all the circles pass through both P1 and P2, their centers lie on the perpendicular bisector of P1P2, i.e., the Y-axis. In this coordinate system, the center of the i'th circle is (0, b_i), and the coordinates of P1 and P2 are (−a, 0) and (a, 0), respectively. The equation of the i'th circle can be expressed as x^2 + (y − b_i)^2 = a^2 + b_i^2.
In order to fit N circles, we should estimate N + 4 parameters, i.e., the translation components t_x and t_y, the rotation angle θ, the half-distance a between the vanishing points, and b_1, …, b_N.
After translation and rotation, a data point is transformed from (x_j, y_j) to (x'_j, y'_j), which in the noise-free case satisfies the circle equation above. The objective function for nonlinear optimization is the sum of the squared residuals of this equation over all data points.
The LM algorithm12 is very efficient for the above optimization problem. In order to apply the LM algorithm, we should derive the first partial derivatives of the objective function, i.e., the Jacobian.17 After some mathematical manipulation, the Jacobian can be obtained in closed form.
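Under the parameterization sketched above, the per-point residual minimized by LM can be written as follows; the symbol names (tx, ty, theta, a, b_i) and the algebraic form of the residual are our assumptions for this illustration, not the paper's exact formulation:

```cpp
#include <vector>
#include <cmath>

// Parameters of the center collinear circle family (assumed notation):
// translation (tx, ty), rotation angle theta, half-distance a between the
// vanishing points, and per-circle parameter b[i] (Y-coordinate of the
// i'th center in the canonical coordinate system).
struct Params {
    double tx, ty, theta, a;
    std::vector<double> b;  // one entry per circle
};

// Algebraic residual of one data point (x, y) assigned to circle i.
// The point is first translated and rotated into the canonical frame,
// then tested against x'^2 + (y' - b_i)^2 - (a^2 + b_i^2) = 0.
double residual(const Params& p, int i, double x, double y) {
    double c = std::cos(p.theta), s = std::sin(p.theta);
    double xt =  c * (x - p.tx) + s * (y - p.ty);
    double yt = -s * (x - p.tx) + c * (y - p.ty);
    double bi = p.b[i];
    return xt*xt + (yt - bi)*(yt - bi) - (p.a*p.a + bi*bi);
}
```

An LM solver then stacks these residuals over all points and circles; the Jacobian follows by differentiating each residual with respect to the parameters above (either in closed form or numerically).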
Setting the initial estimate properly is crucial for LM optimization. Since the fitting error increases with the radius of a circle, we propose to first fit each circle separately and then use the intersection points of the two smallest circles as the initial estimates of the vanishing points. The initial values for LM optimization can then be determined from the two vanishing points and the parameters of the circles. After the LM algorithm converges, we can compute the parameters of the circles. The complete approach is as follows:
1. Separating the data points into several sets of points. Fitting one circle to each set of points.
2. Finding the two smallest circles and calculating their intersection points.
3. Calculating initial estimates of all the parameters from the intersection points and the separately fitted circles.
4. Updating the parameters using LM optimization until convergence.
5. Calculating the parameters of the circles using the optimization result.
Note that our method is also iterative because the LM algorithm is used for parameter estimation, and it typically requires a number of iterations. However, in the iterative approach presented in Sec. 3.2 the vanishing points are estimated sequentially, while in our approach they are estimated simultaneously. Thus our approach is expected to be faster than the iterative approach. In addition, our approach also enforces the circles to intersect at two points, so it is expected to be more accurate than the two-step approach.
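Step 2 of the approach requires the intersection points of two circles; a minimal sketch of that computation (our illustration, not the authors' code) is:

```cpp
#include <cmath>
#include <array>

struct Circle2 { double cx, cy, r; };

// Intersection of two circles, as used to initialize the vanishing points.
// Returns false if the circles do not intersect at two distinct points.
bool intersectCircles(const Circle2& c1, const Circle2& c2,
                      std::array<double,2>& p1, std::array<double,2>& p2) {
    double dx = c2.cx - c1.cx, dy = c2.cy - c1.cy;
    double d = std::hypot(dx, dy);
    // Reject concentric, separated, or contained circle pairs.
    if (d < 1e-12 || d > c1.r + c2.r || d < std::fabs(c1.r - c2.r))
        return false;
    // Distance from c1's center to the radical line along the center line.
    double l = (c1.r*c1.r - c2.r*c2.r + d*d) / (2.0 * d);
    double h2 = c1.r*c1.r - l*l;
    if (h2 < 0.0) return false;
    double h = std::sqrt(h2);
    double mx = c1.cx + l * dx / d, my = c1.cy + l * dy / d;
    // The two intersection points lie symmetrically about the center line.
    p1 = { mx + h * dy / d, my - h * dx / d };
    p2 = { mx - h * dy / d, my + h * dx / d };
    return true;
}
```

In practice the two smallest circles are used here because their arcs subtend the largest angles and therefore give the most reliable intersection estimate.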
We have evaluated our method on both synthetic and real data and compared it with the approaches proposed in Refs. 10 and 11. We use speed and accuracy as performance indicators. The platform is a PC with an Intel i5-2400 3.10-GHz CPU and 8 GB of RAM. The software environment is Windows 7 Ultimate and Visual Studio 2005. All the testing programs are written in C++.
In order to simulate fish-eye camera calibration, we generate eight center collinear circles and use only arcs of them for parameter estimation, as shown in Fig. 4. The resolution of the synthetic image is 640 × 480, and the parameters of the circles are listed in Table 1.
Parameters of the synthetic circles.
(Cx, Cy): (31.55, 0), (107.61, 0), (240, 0), (600, 0), (−462, 0), (−194.44, 0), (−79.80, 0), (−10.16, 0)
Note: (Cx,Cy) is the center of the circle. The center of the image (320, 240) is regarded as the origin. r is the radius of the circle.
On each arc we randomly choose 100 points. Gaussian noise with zero mean and standard deviation σ is added to the points. The noise levels are σ = 0, 1, 2, 3, 4, 5, respectively. For each noise level, we perform 100 independent trials, and the mean values and standard deviations of the recovered parameters are computed over all runs. The errors are defined as the absolute differences between the estimated parameters and the ground truth. The fitting results are listed in Tables 2 to 4, where the subscripts distinguish the three fitting approaches described in Sec. 3: the two-step, the iterative, and the direct approach, respectively.
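The synthetic-data generation described above can be sketched as follows; the arc's angular range, the sampling scheme, and the random seed are assumptions for illustration:

```cpp
#include <vector>
#include <random>
#include <cmath>
#include <array>

// Sample n points on an arc of a circle centered at (cx, cy) with radius r,
// over the angle interval [angle0, angle1], and perturb each coordinate with
// zero-mean Gaussian noise of standard deviation sigma.
std::vector<std::array<double,2>> noisyArc(double cx, double cy, double r,
                                           double angle0, double angle1,
                                           int n, double sigma,
                                           unsigned seed = 42) {
    std::mt19937 rng(seed);
    std::uniform_real_distribution<double> ang(angle0, angle1);
    std::normal_distribution<double> noise(0.0, sigma > 0.0 ? sigma : 1.0);
    std::vector<std::array<double,2>> pts;
    pts.reserve(n);
    for (int k = 0; k < n; ++k) {
        double t = ang(rng);
        // Guard sigma == 0: std::normal_distribution requires stddev > 0.
        double nx = (sigma > 0.0) ? noise(rng) : 0.0;
        double ny = (sigma > 0.0) ? noise(rng) : 0.0;
        pts.push_back({ cx + r * std::cos(t) + nx,
                        cy + r * std::sin(t) + ny });
    }
    return pts;
}
```

One such call per circle, with the centers and radii of Table 1, reproduces the kind of input used in the 100 trials per noise level.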
Fitting results of Cx when σ=3.
Fitting results of Cy when σ=3.
Fitting results of r when σ=3.
From Tables 2 to 4, we can see that the fitting results are almost the same for the iterative and the direct approaches, and both are much more robust and accurate than the two-step approach. The error and standard deviation of the two-step approach are around 5 to 10 times larger than those of the other two approaches. Thus we can conclude that in terms of fitting accuracy, both the iterative and the direct approaches are superior to the two-step approach.
Another observation from Tables 2 to 4 is that for all three approaches, the error and standard deviation increase with the radius of the circle, which is consistent with previous studies.18,19 The reason is that for a larger circle, the arc in the image corresponds to a smaller angle. As a result, the accuracy decreases due to the larger amount of occlusion of the circle. On the other hand, because the iterative and the direct approaches use all the data points for parameter estimation, the adverse effect of noise and occlusion can be greatly reduced by the vanishing points constraint. Hence these two approaches are more robust and accurate than the two-step approach.
The time requirements of the three approaches are reported in Table 5. It can be observed that the two-step approach is the fastest, requiring less than 16 ms on average. The other two approaches are much slower: the time required by the iterative and the direct approaches is about 700 and 30 times that of the two-step approach, respectively. This is reasonable because the direct approach adopts nonlinear optimization using all the data points and the center collinear constraint, which costs much more time than estimating the parameters of each circle separately. The iterative approach often requires several tens of iterations for convergence (as listed in Table 6), and there is also a nonlinear optimization step on each iteration. As a result, the iterative approach is the most time-consuming of all. Our method is more than 20 times faster than the iterative approach, and its average running time is less than half a second, which makes it very promising for practical calibration procedures.
The average time (ms) required by the three approaches.
The average iterations required by the iterative approach for convergence.
In this experiment, we take an image using a fish-eye camera with an FOV of about 170 deg. A chess board is placed in front of the camera for calibration, as depicted in Fig. 5. Curve extraction is performed by a software package developed by our lab. In both the horizontal and vertical directions, only the six arcs with the largest lengths are selected for parameter estimation. After fitting the center collinear circles, we can obtain the positions of the two horizontal vanishing points and the two vertical vanishing points. The optical center can then be determined as the intersection point of the line joining the horizontal vanishing points and the line joining the vertical vanishing points, and the focal lengths in the horizontal and vertical directions follow from the projection equation for distortion correction. The details of distortion correction are available in Ref. 15.
Due to the lack of ground-truth parameters of the fish-eye camera, we inspect the undistorted image to evaluate the calibration results and the proposed approach. The undistorted image is shown in Fig. 6. As can be seen in this figure, straight lines in the scene now map to straight lines in the image. We then extract the corners of the chess board20 and compute the reconstruction error. The average difference between the position of each corner and its true position is 1.631 pixels, which is small relative to the resolution of each block in the undistorted image. Thus we conclude that our method is effective for fish-eye camera calibration.
In this paper, we propose a novel method for fitting center collinear circles for fish-eye camera calibration. We formulate the fitting problem as a nonlinear least-squares problem and solve it using LM optimization. Experimental results on synthetic data show that the proposed method is much more accurate than the two-step approach, and it requires much less time than the iterative approach while maintaining the same fitting accuracy. Results on real data demonstrate its effectiveness for fish-eye camera calibration. Hence we can conclude that the proposed method is very promising for practical applications.
This work is partially supported by Beijing Postdoctoral Research Foundation (2011ZZ-57), Postdoctoral Science Foundation of China (2011M500013), and Overseas Talents Attracting Plan of BJAST (OTP-2012-012).
X. H. Ying, Z. Y. Hu, and H. B. Zha, “Fisheye lenses calibration using straight-line spherical perspective projection constraint,” in Proc. 8th Asian Conf. Comput. Vision (ACCV), pp. 61–70, Springer, Berlin, Germany (2006).
C. Burchardt and K. Voss, “A new algorithm to correct fish-eye and strong wide-angle-lens-distortion from single images,” in Proc. Int. Conf. Image Process. (ICIP), pp. 225–228, IEEE CS Press, Los Alamitos, California (2001).
S. Shah and J. K. Aggarwal, “Intrinsic parameter calibration procedure for a (high-distortion) fish-eye lens camera with distortion model and accuracy estimation,” Pattern Recognit. 29(11), 1775–1788 (1996). http://dx.doi.org/10.1016/0031-3203(96)00038-6
A. W. Fitzgibbon, “Simultaneous linear estimation of multiple view geometry and lens distortion,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 125–132, IEEE CS Press, Los Alamitos, California (2001).
D. Claus and A. W. Fitzgibbon, “A rational function lens distortion model for general cameras,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 213–219, IEEE CS Press, Los Alamitos, California (2005).
J. Kannala and S. Brandt, “A generic camera model and calibration method for conventional, wide-angle, and fish-eye lenses,” IEEE Trans. Pattern Anal. Mach. Intell. 28(8), 1335–1340 (2006). http://dx.doi.org/10.1109/TPAMI.2006.153
J. P. Barreto and H. Araujo, “Fitting conics to paracatadioptric projections of lines,” Comput. Vis. Image Und. 101(3), 151–165 (2006). http://dx.doi.org/10.1016/j.cviu.2005.07.002
C. Hughes et al., “Equidistant (fθ) fish-eye perspective with application in distortion centre estimation,” Image Vis. Comput. 28(3), 538–551 (2010). http://dx.doi.org/10.1016/j.imavis.2009.09.001
C. Geyer and K. Daniilidis, “Catadioptric camera calibration,” in Proc. 7th Int. Conf. Comput. Vis. (ICCV), pp. 398–404, IEEE CS Press, Los Alamitos, California (1999).
C. C. Slama, Manual of Photogrammetry, American Society for Photogrammetry and Remote Sensing, Bethesda, Maryland (1980).
C. Hughes et al., “Equidistant fish-eye calibration and rectification by vanishing point extraction,” IEEE Trans. Pattern Anal. Mach. Intell. 32(12), 2289–2296 (2010). http://dx.doi.org/10.1109/TPAMI.2010.159
Z. Y. Zhang, “Parameter estimation techniques: a tutorial with application to conic fitting,” Image Vis. Comput. 15(1), 59–76 (1997). http://dx.doi.org/10.1016/S0262-8856(96)01112-2
K. Madsen, H. B. Nielsen, and O. Tingleff, Methods for Non-Linear Least Squares Problems, 2nd ed., Technical University of Denmark (2004).
X. Ying and Z. Hu, “Catadioptric camera calibration using geometric invariants,” IEEE Trans. Pattern Anal. Mach. Intell. 26(10), 1260–1271 (2004). http://dx.doi.org/10.1109/TPAMI.2004.79
A. Fitzgibbon and R. Fisher, “A buyer’s guide to conic fitting,” in Proc. 6th British Machine Vision Conf., pp. 513–522, BMVA Press, Manchester, United Kingdom (1995).
C. Harris and M. Stephens, “A combined corner and edge detector,” in Alvey Vision Conference, pp. 147–151, BMVA Press, Manchester, United Kingdom (1988).
Feng Yue received his BSc and PhD degrees in computer science from the Harbin Institute of Technology (HIT), in 2003 and 2010, respectively. He is now a postdoctoral researcher in the Pattern Recognition Research Center, Beijing Institute of New Technology Applications. His current research interests include computer vision and pattern recognition.
Bin Li received his MSc and PhD degrees in computer science from the Harbin Institute of Technology (HIT), Harbin, China, in 2000 and 2006, respectively. From 2006 to 2008, he worked in the School of Computer Science and Technology, HIT, as a lecturer. Since 2008, he has been working in the Key Laboratory of Pattern Recognition of the Beijing Academy of Science and Technology as a director, and in the Beijing Institute of New Technology Applications as an associate research fellow and deputy director. He has published over 15 papers and two books. His research interests include signal processing, pattern recognition, and biometrics.
Ming Yu received his BS degree from the Beijing University of Posts and Telecommunications (BUPT), Beijing, China, in 1986, his MS degree from the Hebei University of Technology (HUT), Tianjin, China, in 1989, and his PhD degree in communication and information systems from the Beijing Institute of Technology (BIT), Beijing, China, in 1999. Since 1989, he has been with the Department of Computer Science and Engineering, Hebei University of Technology, where he is currently a professor. His current research interests include image/video understanding, intelligent media processing, and pattern recognition.