This paper proposes a self-calibration method for a monocular vision system based on planar points. With the
proposed method, initial values for the three-dimensional (3D) coordinates of the feature points in the scene are
easily obtained, although these initial values differ from the true 3D coordinates by a nonzero scale factor. Images
are then captured from different viewpoints, and the initial external parameters of each image are calculated.
Finally, through an overall optimization, all of the parameters are obtained, including the internal parameters, the
distortion parameters, the external parameters of each image, and the 3D coordinates of the feature points.
According to the experimental results, in a field of view of about 100 mm × 200 mm, the mean error and the
variance of the 3D coordinates of the feature points are less than 10 μm.
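The up-to-scale recovery of planar coordinates mentioned above can be illustrated with a generic sketch (a standard DLT homography estimator on hypothetical numbers, not the paper's exact procedure): the homography relating a plane to its image is itself only defined up to a nonzero factor, and once it is fixed by a normalization, image points back-project to planar coordinates.

```python
import numpy as np

def estimate_homography(src, dst):
    """DLT estimate of the 3x3 homography mapping src -> dst (needs >= 4 points).

    The SVD null vector fixes H only up to a nonzero factor; we normalize
    by H[2, 2] to pick one representative.
    """
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

# Hypothetical planar feature points (plane coordinates, Z = 0) and their
# images under a known projective map.
plane_pts = np.array([[0, 0], [1, 0], [1, 1], [0, 1], [0.5, 0.5]], float)
H_true = np.array([[1.2, 0.1, 5.0],
                   [0.0, 0.9, 3.0],
                   [1e-3, 2e-3, 1.0]])
img = (H_true @ np.c_[plane_pts, np.ones(5)].T).T
img = img[:, :2] / img[:, 2:]

# Back-project the image points through the estimated homography to recover
# the planar coordinates (an overall plane scale remains unobservable).
H = estimate_homography(plane_pts, img)
back = (np.linalg.inv(H) @ np.c_[img, np.ones(5)].T).T
back = back[:, :2] / back[:, 2:]
print(np.allclose(back, plane_pts, atol=1e-6))
```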
Nowadays, computer vision is widely used in daily life. To obtain reliable information, camera calibration
cannot be neglected. Traditional camera calibration is often impractical in real scenes because accurate
coordinate information for the reference control points is unavailable. In this article, we present a
camera calibration algorithm that determines the intrinsic parameters as well as the extrinsic parameters.
The algorithm is based on parallel lines, which are commonly found in real-life photos; that is, both the
intrinsic and the extrinsic parameters can be recovered from information extracted from ordinary photographs.
In more detail, we use two pairs of parallel lines to compute the vanishing points. In particular, when the
two pairs of parallel lines are perpendicular to each other, their vanishing points are conjugate with respect
to the image of the absolute conic (IAC), and several views (at least 5) suffice to determine the IAC. The
intrinsic parameters are then easily obtained by Cholesky factorization of the IAC matrix. As is well known,
the line connecting a vanishing point with the camera's optical center is parallel to the original lines in
the scene plane. From this, the extrinsic parameters R and T can be obtained. Both the simulation and the
experimental results meet our expectations.
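The Cholesky step can be sketched numerically. Assuming a hypothetical intrinsic matrix K, the IAC is ω = (K Kᵀ)⁻¹; factoring ω = UᵀU with U upper triangular gives K back as U⁻¹ (normalized so K[2,2] = 1). The snippet also verifies the conjugacy constraint v₁ᵀ ω v₂ = 0 that supplies the linear equations on ω:

```python
import numpy as np

# Hypothetical intrinsic matrix (focal lengths and principal point are assumptions).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 780.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Image of the absolute conic: omega = (K K^T)^{-1}.
omega = np.linalg.inv(K @ K.T)

# Vanishing points of two orthogonal scene directions are conjugate w.r.t. omega:
# v1^T omega v2 = 0 (one linear constraint on omega per such pair).
Q, _ = np.linalg.qr(np.random.default_rng(0).normal(size=(3, 3)))  # orthonormal columns
v1, v2 = K @ Q[:, 0], K @ Q[:, 1]
print(abs(v1 @ omega @ v2) < 1e-6)

# Recover K from omega: omega = U^T U (Cholesky), with U = K^{-1} upper triangular.
L = np.linalg.cholesky(omega)   # lower triangular, omega = L @ L.T
K_rec = np.linalg.inv(L.T)
K_rec /= K_rec[2, 2]
print(np.allclose(K_rec, K, atol=1e-6))
```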
To determine the relative pose between an object and a single camera from correspondences between 3D feature points of the object and their 2D projections in the image, this paper proposes a pose measurement algorithm for a target based on the bundle adjustment method. This iterative algorithm consists of two steps: first, a reliable initial pose is computed using only three non-collinear points; second, the optimal rotation and translation of the target are estimated by the bundle adjustment method. Experiments on simulated and real data show that this method achieves high precision, with rotation-angle errors within 20 arc-seconds and translation errors within 20 microns under ideal conditions.
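A minimal sketch of the second (refinement) step, assuming known intrinsics and synthetic data: the paper's three-point initialization is not reproduced here, so a perturbed ground-truth pose stands in for the coarse initial pose, and a Levenberg-Marquardt fit of the reprojection residuals stands in for the bundle-adjustment optimizer. The intrinsic matrix and point set below are hypothetical.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

K = np.array([[1000.0, 0.0, 640.0],   # assumed (hypothetical) intrinsics
              [0.0, 1000.0, 480.0],
              [0.0,    0.0,   1.0]])

def project(rvec, t, X):
    """Project 3D points X (N, 3) under pose (rotation vector rvec, translation t)."""
    Xc = Rotation.from_rotvec(rvec).apply(X) + t
    uv = (K @ Xc.T).T
    return uv[:, :2] / uv[:, 2:]

def refine_pose(rvec0, t0, X, uv_obs):
    """Refine a coarse pose by minimizing reprojection error (Levenberg-Marquardt)."""
    def resid(p):
        return (project(p[:3], p[3:], X) - uv_obs).ravel()
    sol = least_squares(resid, np.r_[rvec0, t0], method="lm")
    return sol.x[:3], sol.x[3:]

# Synthetic target: 8 feature points, a ground-truth pose, exact image observations.
rng = np.random.default_rng(1)
X = rng.uniform(-0.1, 0.1, size=(8, 3))
rvec_true, t_true = np.array([0.1, -0.2, 0.05]), np.array([0.02, -0.01, 1.0])
uv = project(rvec_true, t_true, X)

# A perturbed pose stands in for the three-point initialization.
rvec, t = refine_pose(rvec_true + 0.05, t_true + 0.05, X, uv)
print(np.allclose(rvec, rvec_true, atol=1e-6), np.allclose(t, t_true, atol=1e-6))
```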
With the camera's internal parameters known, calculating the external parameters amounts to solving a set of highly nonlinear over-determined equations. In this paper, an improved hybrid genetic algorithm is adopted to obtain the external parameters. It combines the advantages of the genetic algorithm and Newton's method, making it possible to obtain results with high accuracy and faster convergence.
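A generic sketch of such a hybrid scheme (not the paper's actual algorithm): the over-determined projection equations are replaced here by the Rosenbrock test function, a small real-coded genetic algorithm locates the basin of the minimum, and a Newton-type local optimizer (Newton-CG) finishes with high accuracy.

```python
import numpy as np
from scipy.optimize import minimize, rosen, rosen_der

rng = np.random.default_rng(0)

def genetic_search(f, dim, pop=60, gens=80, bounds=(-2.0, 2.0)):
    """Tiny real-coded GA: truncation selection, blend crossover, Gaussian mutation."""
    lo, hi = bounds
    P = rng.uniform(lo, hi, size=(pop, dim))
    for _ in range(gens):
        fit = np.apply_along_axis(f, 1, P)
        elite = P[np.argsort(fit)[: pop // 4]]                    # best quarter survives
        parents = elite[rng.integers(0, len(elite), size=(pop, 2))]
        alpha = rng.uniform(size=(pop, 1))
        P = alpha * parents[:, 0] + (1 - alpha) * parents[:, 1]   # blend crossover
        P += rng.normal(scale=0.05, size=P.shape)                 # Gaussian mutation
        P[: len(elite)] = elite                                   # keep elites intact
    fit = np.apply_along_axis(f, 1, P)
    return P[np.argmin(fit)]

# The GA supplies a coarse global solution; a Newton-type step polishes it.
x0 = genetic_search(rosen, 2)
res = minimize(rosen, x0, jac=rosen_der, method="Newton-CG")
print(np.allclose(res.x, [1.0, 1.0], atol=1e-3))
```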
This paper proposes a large-field-of-view (FOV) measurement method in which the camera is calibrated and the
measurement is performed simultaneously. In the measurement, the whole FOV was divided into several smaller ones
with overlapping areas between them. Each overlapping area should contain at least 4 noncollinear feature points
for computing the external parameters, and one of them should contain at least 4 noncollinear control points to
start the calculation. To measure the whole large FOV, two or more images of each small field of view needed to be
taken from different angles. In the calculation, the theoretical values of the camera were used as the initial
values of the internal parameters, and the initial values of the external parameters were obtained from a new
solution to the P4P problem. The internal parameters of the camera, the external parameters of each image, and the
3D coordinates of the feature points in the large field of view could then be acquired by an adjustment method. In
our experiment, the large field of view was 500 mm × 500 mm, the smaller one corresponding to each image was
200 mm × 200 mm, and the ultimate measurement accuracy was 12 μm.
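The stitching role of the overlapping points can be sketched generically (coordinates hypothetical; the Kabsch least-squares rigid fit below stands in for the paper's adjustment): measuring the same four or more noncollinear points in two adjacent sub-fields determines the rigid motion between their coordinate frames, so all sub-field measurements can be merged into one frame.

```python
import numpy as np

def rigid_transform(P, Q):
    """Least-squares rigid motion (Kabsch) mapping points P onto Q: Q ~= R @ p + t."""
    cP, cQ = P.mean(0), Q.mean(0)
    H = (P - cP).T @ (Q - cQ)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # exclude reflections
    R = Vt.T @ D @ U.T
    return R, cQ - R @ cP

# The same 4 noncollinear overlap points, measured in sub-field B's frame and
# in sub-field A's frame (related by a hypothetical rigid motion).
rng = np.random.default_rng(2)
pts_B = rng.uniform(0, 200, size=(4, 3))
R_true = np.array([[0.0, -1.0, 0.0],
                   [1.0,  0.0, 0.0],
                   [0.0,  0.0, 1.0]])
t_true = np.array([150.0, 20.0, 0.0])
pts_A = (R_true @ pts_B.T).T + t_true

R, t = rigid_transform(pts_B, pts_A)
# Any further point measured in B can now be expressed in the common (A) frame.
new_B = np.array([10.0, 30.0, 0.0])
print(R @ new_B + t)   # approximately [120, 30, 0]
```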