To some extent, the mapping relationship between the camera and projector images determines the precision of surface
reconstruction in digital close-range photogrammetry. In this paper, a new method is presented to achieve sub-pixel-level
mapping between the camera and projector images. Instead of mapping the stripe from the camera to the projector, which
is pixel-precision-based, a set of pixels that share the same decoded number were picked out on the camera images and
their barycenter was calculated to mapped onto the pixel on the projector images. In most cases, the calculation of the
barycenter is able to achieve sub-pixel precision. Compared with existing approaches based on the direct mapping of the
stripe on the camera image to the projector image, the proposed method is characterized by higher accuracy in mapping
the points and thus the surface reconstruction performance. The experimental results are presented to show the
effectiveness of the proposed method in the improvement of the accuracy of shape reconstruction.
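The barycenter step described above can be sketched as follows; the grid values and helper name are illustrative, not from the paper:

```python
import numpy as np

def decoded_barycenters(decoded, valid):
    """For each decoded stripe number, compute the barycenter (centroid)
    of all camera pixels sharing that code -- a sub-pixel estimate of the
    stripe position that can then be mapped to the projector pixel."""
    centers = {}
    rows, cols = np.nonzero(valid)
    codes = decoded[rows, cols]
    for code in np.unique(codes):
        mask = codes == code
        # The mean of integer pixel coordinates yields sub-pixel precision.
        centers[int(code)] = (rows[mask].mean(), cols[mask].mean())
    return centers

# Example: three pixels share the decoded stripe number 7.
decoded = np.zeros((4, 4), dtype=int)
valid = np.zeros((4, 4), dtype=bool)
for r, c in [(1, 1), (1, 2), (2, 1)]:
    decoded[r, c] = 7
    valid[r, c] = True
print(decoded_barycenters(decoded, valid))  # ≈ {7: (1.33, 1.33)}
```

The barycenter (1.33, 1.33) lies between pixel centers, which is exactly the sub-pixel effect the abstract describes.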
To perform 3D measurement and camera attitude estimation simultaneously, an efficient and robust method based on the trifocal tensor is proposed in this paper, which employs only the intrinsic parameters and positions of three cameras. The initial trifocal tensor is obtained using the heteroscedastic errors-in-variables (HEIV) estimator, and the initial relative poses of the three cameras are acquired by decomposing the tensor. The initial attitudes of the cameras are then obtained with knowledge of the three cameras' positions. Next, the camera attitudes and the image positions of the points of interest are optimized according to the trifocal tensor constraint with the HEIV method. Finally, the spatial positions of the points are obtained using the intersection measurement method. Both simulation and real-image experiments suggest that the proposed method achieves the same precision as the bundle adjustment (BA) method while being more efficient.
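The final intersection step can be illustrated with a standard linear (DLT) triangulation; the camera poses and the 3D point below are hypothetical, chosen only to demonstrate the computation:

```python
import numpy as np

def triangulate(projections, image_points):
    """Linear (DLT) triangulation: recover a 3D point from its image
    positions in several calibrated cameras, as in an intersection
    measurement.  Each view contributes the two homogeneous constraints
    x*(P[2] @ X) - P[0] @ X = 0 and y*(P[2] @ X) - P[1] @ X = 0."""
    rows = []
    for P, (x, y) in zip(projections, image_points):
        rows.append(x * P[2] - P[0])
        rows.append(y * P[2] - P[1])
    _, _, vt = np.linalg.svd(np.asarray(rows))
    X = vt[-1]                     # null-space solution
    return X[:3] / X[3]            # dehomogenise

# Three hypothetical cameras (identity intrinsics) observing (1, 2, 5).
K = np.eye(3)
Ps, pts = [], []
for t in ([0, 0, 0], [1, 0, 0], [0, 1, 0]):
    P = K @ np.hstack([np.eye(3), -np.array(t, float).reshape(3, 1)])
    Ps.append(P)
    p = P @ np.array([1.0, 2.0, 5.0, 1.0])
    pts.append((p[0] / p[2], p[1] / p[2]))
print(triangulate(Ps, pts))   # ≈ [1. 2. 5.]
```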
To enable navigation using a vision sensor, a detection and tracking technique based on a 3D edge model was developed. First, we proposed a target detection strategy over a sequence of several images from the 3D model to initialize the tracking. The overall purpose of this approach is to robustly match each image with the model views of the target. To this end, we designed a line segment detection and matching method based on multi-scale space technology. Experiments on real images showed that our method is highly robust under various image changes. Second, we proposed a method based on a 3D particle filter (PF) coupled with M-estimation to track and estimate the pose of the target efficiently. In the proposed approach, a similarity observation model was designed according to a new distance function for line segments. Then, based on the tracking results of the PF, the pose was optimized using M-estimation. Experiments indicated that the proposed method can effectively track and accurately estimate the pose of a freely moving target in an unconstrained environment.
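The abstract does not give the new line-segment distance function itself; as a hedged illustration of the general shape of such a measure, a generic distance combining midpoint separation and orientation difference might look like this (weights and form are hypothetical):

```python
import math

def segment_distance(a, b, w_angle=1.0):
    """Illustrative distance between two 2D line segments, combining
    midpoint distance and orientation difference.  This is a generic
    stand-in, not the paper's actual distance function."""
    (ax1, ay1), (ax2, ay2) = a
    (bx1, by1), (bx2, by2) = b
    ma = ((ax1 + ax2) / 2, (ay1 + ay2) / 2)
    mb = ((bx1 + bx2) / 2, (by1 + by2) / 2)
    d_mid = math.hypot(ma[0] - mb[0], ma[1] - mb[1])
    ta = math.atan2(ay2 - ay1, ax2 - ax1)
    tb = math.atan2(by2 - by1, bx2 - bx1)
    # Fold the angle difference into [0, pi/2]: segments are undirected.
    d_ang = abs(ta - tb) % math.pi
    d_ang = min(d_ang, math.pi - d_ang)
    return d_mid + w_angle * d_ang

seg1 = ((0, 0), (2, 0))
seg2 = ((0, 1), (2, 1))
print(segment_distance(seg1, seg2))  # 1.0 (parallel, midpoints 1 apart)
```

A similarity observation model for the particle filter can then score each particle by the sum of such distances between projected model edges and detected image segments.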
This paper designs an autocollimator based on multiple reflectors and proposes a direct linear solution for three-dimensional (3D) angle measurement using the observation vectors of the lights reflected from the reflectors. In the measuring apparatus, the multiple reflectors are fixed to the object to be measured and the reflected lights are received by a CCD camera; the light spots in the image are then extracted to obtain the vectors of the reflected lights in space. Any rotation of the object induces a change in the observation vectors of the reflected lights, which is used to solve for the rotation matrix of the object by finding a linear solution of the Wahba problem with the quaternion method; the 3D angle is then obtained by decomposing the rotation matrix. This measuring apparatus can be implemented easily, as the light path is simple, and the computation of the 3D angle from observation vectors is efficient, as no iteration is needed. The proposed 3D angle measurement method is verified by a set of simulation experiments.
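The non-iterative quaternion solution of the Wahba problem can be sketched with Davenport's q-method, a standard linear-algebraic formulation; the demonstration vectors below are illustrative:

```python
import numpy as np

def wahba_quaternion(obs, ref, weights=None):
    """Davenport q-method: the optimal attitude quaternion for the Wahba
    problem is the eigenvector of the 4x4 Davenport matrix K belonging
    to its largest eigenvalue -- no iteration required."""
    obs, ref = np.asarray(obs, float), np.asarray(ref, float)
    w = np.ones(len(obs)) if weights is None else np.asarray(weights, float)
    B = sum(wi * np.outer(o, r) for wi, o, r in zip(w, obs, ref))
    S, sigma = B + B.T, np.trace(B)
    z = np.array([B[1, 2] - B[2, 1], B[2, 0] - B[0, 2], B[0, 1] - B[1, 0]])
    K = np.empty((4, 4))
    K[0, 0], K[0, 1:], K[1:, 0], K[1:, 1:] = sigma, z, z, S - sigma * np.eye(3)
    vals, vecs = np.linalg.eigh(K)
    return vecs[:, np.argmax(vals)]          # scalar-first quaternion

def attitude_matrix(q):
    """Attitude matrix A(q) with obs ≈ A(q) @ ref (scalar-first q)."""
    q0, q1, q2, q3 = q
    return np.array([
        [q0*q0 + q1*q1 - q2*q2 - q3*q3, 2*(q1*q2 + q0*q3), 2*(q1*q3 - q0*q2)],
        [2*(q1*q2 - q0*q3), q0*q0 - q1*q1 + q2*q2 - q3*q3, 2*(q2*q3 + q0*q1)],
        [2*(q1*q3 + q0*q2), 2*(q2*q3 - q0*q1), q0*q0 - q1*q1 - q2*q2 + q3*q3]])

# Demo: recover a 90-degree rotation about z from three observation vectors.
A_true = np.array([[0.0, 1.0, 0.0], [-1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
ref = np.eye(3)                       # reference directions
obs = (A_true @ ref.T).T              # observed (reflected-light) directions
q = wahba_quaternion(obs, ref)
print(np.round(attitude_matrix(q), 6))   # ≈ A_true
```

The 3D angles then follow by decomposing the recovered rotation matrix into the desired Euler sequence.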
The grid algorithm is a classical star identification algorithm based on star patterns. A three-dimensional grid algorithm for all-sky autonomous star identification is proposed, which incorporates star view magnitude information. In contrast with the traditional grid algorithm, which constructs grid cells on a two-dimensional plane (e.g. the x-y coordinate plane), the proposed approach uses the star view magnitudes of the neighboring stars as the third dimension (e.g. the z-axis). A pattern is generated for each star: its three-dimensional grid cells that contain a neighboring star are set to 1, and those without are set to 0. The process of star identification is to determine which pattern in the database is associated with the particular sensor pattern. Simulation shows that this method achieves an identification rate of 98.0% when the standard deviations of the star position error and star view magnitude are 1 pixel and 0.3 Mv, respectively. Compared with the traditional grid algorithm, the identification rate is higher and the average runtime is 50 percent shorter.
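A minimal sketch of building one star's three-dimensional binary pattern, assuming an illustrative grid size and magnitude bin edges (the paper's exact parameters are not given here):

```python
import numpy as np

def grid_pattern_3d(neighbors, mags, radius, g=8, mag_bins=(4.0, 5.0, 6.0, 7.0)):
    """Build the binary pattern of one star: quantise each neighboring
    star's (x, y) offset into a g x g planar grid and its view magnitude
    into a bin along the third axis.  Cells holding at least one
    neighbor become 1, all others stay 0.  Grid size and magnitude bin
    edges here are illustrative choices."""
    pattern = np.zeros((g, g, len(mag_bins) + 1), dtype=np.uint8)
    for (x, y), m in zip(neighbors, mags):
        i = min(int((x + radius) / (2 * radius) * g), g - 1)
        j = min(int((y + radius) / (2 * radius) * g), g - 1)
        k = int(np.searchsorted(mag_bins, m))     # magnitude layer (z-axis)
        pattern[i, j, k] = 1
    return pattern

# Two hypothetical neighbors inside a radius-100-pixel neighborhood.
pat = grid_pattern_3d([(10.0, -30.0), (-55.0, 20.0)], [4.5, 6.2], radius=100.0)
print(pat.sum())  # 2
```

Identification then reduces to finding the database pattern that best matches the sensor pattern, e.g. by counting coinciding 1-cells.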
We have developed a calibration approach for a star tracker camera. A modified version of the least-squares iteration algorithm combined with a Kalman filter is put forward, which allows autonomous on-orbit calibration of the star tracker camera even in the presence of nonlinear camera distortions. In the calibration approach, the optimal principal point and focal length are first obtained via the modified algorithm, and the high-order focal-plane distortions are then estimated using the solution of the first step. To validate the proposed calibration approach, a real star catalog and synthetic attitude data are adopted to test its performance. The test results demonstrate that the proposed approach performs well in terms of accuracy and robustness, and it can satisfy the requirements of autonomous on-orbit calibration of the star tracker camera.
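The first step, estimating the principal point and focal length, can be sketched for an undistorted pinhole model, where both are linear unknowns given star directions in the camera frame; the directions and true parameters below are synthetic, and the paper's full method additionally iterates and incorporates a Kalman filter and distortion terms:

```python
import numpy as np

# Pinhole model:  u = x0 + f*X/Z,  v = y0 + f*Y/Z  for a star with
# camera-frame direction (X, Y, Z).  With known directions, (f, x0, y0)
# are linear and solvable by least squares.  All values are synthetic.
dirs = np.array([[0.10, 0.00, 1.0], [0.00, -0.20, 1.0],
                 [0.15, 0.10, 1.0], [-0.05, 0.12, 1.0]])
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
f_true, x0_true, y0_true = 3000.0, 512.0, 510.0
tan_x = dirs[:, 0] / dirs[:, 2]
tan_y = dirs[:, 1] / dirs[:, 2]
u = x0_true + f_true * tan_x          # simulated measured centroids
v = y0_true + f_true * tan_y

# Stack the linear system  [X/Z 1 0; Y/Z 0 1] [f, x0, y0]^T = [u; v].
n = len(dirs)
A = np.block([[tan_x.reshape(-1, 1), np.ones((n, 1)), np.zeros((n, 1))],
              [tan_y.reshape(-1, 1), np.zeros((n, 1)), np.ones((n, 1))]])
b = np.concatenate([u, v])
f_est, x0_est, y0_est = np.linalg.lstsq(A, b, rcond=None)[0]
print(np.round([f_est, x0_est, y0_est], 3))   # ≈ [3000. 512. 510.]
```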
To provide a useful expression of lens distortion for star trackers, the numerical grid distortions of four typical star tracker lens systems are investigated, and the data are then fitted with combined polynomials containing different numbers of terms and powers of the radial radius. The results indicate that an expression for the relative distortion containing the first, second, third, and fourth powers of the radial radius is beneficial in terms of accuracy, and that higher-order fitting functions do not provide a more accurate expansion. To further validate this distortion model, a star tracker camera calibration approach is simulated with data obtained by ray tracing via the Non-Sequential Components of ZEMAX, with imperfect alignment and assembly taken into account. The simulation results also indicate that the fourth-order radial distortion model is beneficial.
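The polynomial fitting of relative distortion against powers of the radial radius can be sketched as follows, using synthetic coefficients rather than the paper's ray-traced data:

```python
import numpy as np

# Hypothetical sampled grid distortion: relative distortion as a
# function of radial radius r.  The coefficients are synthetic, for
# illustration only; the paper fits ray-traced data from ZEMAX.
r = np.linspace(0.5, 8.0, 40)                      # radius on the focal plane
delta = 2e-4 * r + 5e-5 * r**2 - 1e-5 * r**3 + 2e-6 * r**4

# Fit with the first through fourth powers of r (no constant term),
# the form the abstract finds sufficient.
A = np.column_stack([r, r**2, r**3, r**4])
coeffs, *_ = np.linalg.lstsq(A, delta, rcond=None)
residual = np.abs(A @ coeffs - delta).max()
print(coeffs)        # ≈ [2e-4, 5e-5, -1e-5, 2e-6]
```

Since the synthetic data were generated from the same fourth-order model, the fit recovers the coefficients with a residual at the level of floating-point noise; on real ray-traced data the residual measures the adequacy of the fourth-order expansion.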
An autonomous star tracker is an opto-electronic instrument used to provide the absolute three-axis attitude of a spacecraft utilizing star observations. The precise calibration of the measurement model is crucial, as the performance of the star tracker is highly dependent on the star camera parameters. We focus on proposing a simple and practical calibration approach for a star tracker with a wide field of view. The star tracker measurement model is described, and a novel approach for laboratory calibration is put forward. This approach is based on a collimator, a two-dimensional adjustable plane mirror, and other ordinary instruments. The calibration procedure consists of two steps: (1) the principal point is estimated using autocollimation adjustment; and (2) the other camera parameters, mainly the principal distance and distortions, are estimated via least-squares iteration, taking into account the extrinsic parameters. To validate the proposed calibration method, simulations with synthetic data are used to quantify its performance considering the errors of the distortion model and calibration data. The theoretical analysis and simulation results indicate that the uncertainties of the measured star direction vectors are less than 4.0×10⁻⁵ rad after calibration, and this can be further improved.
Due to the residual chromatic aberration of the lens in a star tracker, the position accuracy of the star image decreases with the increase of the field of view (FOV). The spectral distribution characteristics of a guide star catalog containing about 4600 stars are analyzed statistically, and a function model of the stellar spectra is established in this paper. The centroid position of each guide star image is a function of its color type and its radial distance from the center of the FOV. The centroid error is calibrated by constructing a weighted polynomial and using a least-squares fitting approach to obtain the best values of the position-error compensation parameters for star images across a wide FOV and with different color temperatures. As an example, at a 2.5° FOV the star position errors for spectral types F, G, and K are 10.80 μm, 6.5174 μm, and 4.3479 μm, respectively. The star position RMS error is reduced from 1.06 pixels to 0.13 pixels after implementing the spectral compensation scheme for the lens system of a