A squared gray weighted centroid algorithm based on bicubic interpolation is proposed to solve the problem of precisely locating optical features in vision measurement. The method makes full use of the gray information of the feature image and uses the squared gray value as a weight to increase the contribution of pixels with higher gray values. At the same time, bicubic interpolation is used to increase the number of available pixels and thereby improve the location accuracy. Experimental results show that the location repeatability was better than 0.011 pixels and the location errors were less than 0.01 pixels. The centroid location method therefore offers better location accuracy and can be used for the precise location of optical features in vision measurement.
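The weighted-centroid step described above can be sketched in a few lines. The function below is an illustrative implementation, not the authors' code; the bicubic upsampling that supplies the additional pixels could be applied to the patch beforehand (e.g. `scipy.ndimage.zoom(patch, k, order=3)`).

```python
import numpy as np

def squared_gray_centroid(patch):
    """Squared-gray weighted centroid of an image patch (sketch).

    Weighting each pixel by the square of its gray value emphasizes
    the bright pixels near the spot center, as the abstract describes.
    Returns (row, col) of the centroid in patch coordinates.
    """
    w = patch.astype(float) ** 2
    ys, xs = np.indices(patch.shape)
    total = w.sum()
    return (w * ys).sum() / total, (w * xs).sum() / total

# A symmetric Gaussian-like spot: the centroid should land at its center.
y, x = np.mgrid[0:9, 0:9]
spot = np.exp(-((y - 4.0) ** 2 + (x - 4.0) ** 2) / 4.0)
cy, cx = squared_gray_centroid(spot)
```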
To address the problem that the imaging quality of feature points affects the measurement precision of a vision system, a method is proposed that automatically controls the imaging gray value of optical feature points as they move. The method eliminates the effect of changes in the relative position of camera and feature point on the imaging gray value by automatically modulating both the drive current of the feature point and the camera exposure time. To control the luminance of the feature point automatically, a program-controllable constant current source is constructed from a digitally controlled potentiometer, a transistor, an operational amplifier, and other components. Experiments show that the system has good linearity, sensitivity, and response, and that the control method keeps the imaging gray value of the feature point within the ideal range at all times.
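As a toy illustration of the feedback idea only (not the authors' hardware loop), a multiplicative update that drives the mean feature gray value toward a set-point might look like this; the function name, set-point, and limits are all illustrative assumptions:

```python
def exposure_step(mean_gray, exposure, target=180.0, e_min=0.1, e_max=50.0):
    """One iteration of a simple multiplicative brightness controller.

    Assumes the sensor response is roughly proportional to exposure time
    (or, equivalently, LED drive current).  All parameter values are
    illustrative, not taken from the paper.
    """
    if mean_gray <= 0:
        return e_max  # no signal detected: open up fully
    return min(e_max, max(e_min, exposure * target / mean_gray))

# Simulated linear sensor: mean gray value = 12.5 counts per ms of exposure.
exposure = 1.0
for _ in range(5):
    exposure = exposure_step(12.5 * exposure, exposure)
```

For a linear response this update converges in one step; a real loop would also split the correction between current and exposure time as the abstract describes.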
A new method for calibrating a stereo photogrammetric system is presented. The relative position of the two cameras is determined from the epipolar constraint and computed with the linear normalized eight-point algorithm followed by an M-estimator. Calibration is carried out by moving a scale bar carrying six small infrared LED marks; the distances between these marks determine the scale factor, and the cross-ratio invariant of the two distances among three marks on the rigid reference bar is used to verify the matching quality. Because infrared LEDs serve as feature points and their light intensity is automatically controlled according to the distance between the cameras and the reference bar, the imaged feature points have a uniform intensity profile and high contrast with the background, which improves the calibration accuracy. Simulations and experiments show that the calibration accuracy is comparable to that of complex off-line calibration.
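The linear step of this calibration, estimating the epipolar geometry from point correspondences with the normalized eight-point algorithm, can be sketched as follows. This is a generic textbook implementation (Hartley normalization plus a rank-2 constraint), with the M-estimator refinement omitted:

```python
import numpy as np

def normalized_eight_point(x1, x2):
    """Linear normalized eight-point estimate of the fundamental matrix.

    x1, x2: (N, 2) arrays of corresponding image points, N >= 8.
    Normalizing each point set (zero mean, mean distance sqrt(2))
    conditions the linear system; rank 2 is enforced by zeroing the
    smallest singular value.  Illustrative sketch, not the paper's code.
    """
    def normalize(pts):
        c = pts.mean(axis=0)
        s = np.sqrt(2) / np.mean(np.linalg.norm(pts - c, axis=1))
        T = np.array([[s, 0.0, -s * c[0]],
                      [0.0, s, -s * c[1]],
                      [0.0, 0.0, 1.0]])
        ph = np.column_stack([pts, np.ones(len(pts))]) @ T.T
        return ph, T

    p1, T1 = normalize(x1)
    p2, T2 = normalize(x2)
    # Each correspondence contributes one row of the system A f = 0,
    # where f is the row-major flattening of F (epipolar constraint).
    A = np.column_stack([p2[:, 0:1] * p1, p2[:, 1:2] * p1, p1])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    U, S, Vt = np.linalg.svd(F)
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt  # enforce rank 2
    F = T2.T @ F @ T1                        # undo the normalization
    return F / np.linalg.norm(F)
```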
A 3-D coordinate measurement technique with a novel concept and experimental arrangement for large spaces and multiple viewpoints is proposed. The measurement system consists of a stereovision sensor and a separate LCD stripe projector. The method provides a high number of object points, a measurement volume of up to 5 x 5 x 5 m, and rapid data acquisition. As long as adjacent partial views share an overlap area, the system can automatically transform the measurement data of each partial view into a common world coordinate system. A wider area, or the full 360-degree (whole-body) shape, can be measured by altering the position of the stripe projector. Furthermore, it is unnecessary to attach any markers to the object surface, and no subsequent matching of the partial views is required to obtain a whole-body measurement.
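One standard way to realize the transformation of partial views into a common world frame is to fit a rigid transform to corresponding 3D points in the overlap area. The SVD-based (Kabsch) sketch below illustrates that step under this assumption; it is not necessarily the authors' exact procedure:

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) such that dst ≈ R @ src + t.

    src, dst: (N, 3) corresponding 3D points, e.g. from the overlap
    region of two partial views.  Solved with the SVD-based Kabsch
    method; the determinant check guards against a reflection solution.
    """
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)          # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cd - R @ cs
```

With `R` and `t` in hand, every point of the second partial view is mapped into the first view's coordinate system by `p @ R.T + t`.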
An optical-probe-imaging-based, fully automatic, and rather flexible stereo vision system for on-line 3D coordinate measurement is presented and analyzed. In this system, the relative position of the two cameras can easily be calibrated by observing an optical reference bar in different locations and orientations throughout the measurement volume, using the epipolar constraint and the certified distances between the features on the reference bar. For measurement, the system uses an optical probe carrying one reference mark and five optical feature points as the imaging target, and uses the measured space coordinates of these feature points to calculate the coordinates of the object point in contact with the probe tip. To improve the calibration and measurement accuracy, the system uses infrared LEDs as optical feature points, optimizes the signal-to-noise ratio by automatically controlling the LED light intensity, and locates the imaged feature points with a bilinear centroid sub-pixel algorithm. The effectiveness of the proposed system has been tested by experiments.
Proc. SPIE. 4875, Second International Conference on Image and Graphics
KEYWORDS: Detection and tracking algorithms, Light emitting diodes, Cameras, 3D metrology, Signal to noise ratio, Target detection, 3D image processing, 3D acquisition, Algorithm development, Binary data
In close-range digital photogrammetric three-dimensional coordinate measurement, a circular target is often used as the imaging feature and mounted on the measured object or on the probe for 3D coordinate detection. The accuracy with which circular targets are located determines the effectiveness of the measurement, and subpixel techniques are one way to improve target location accuracy. Many methods based on subpixel edge or centroid detection have been developed and analyzed for target location, but little research has focused on locating circular optical targets. In this research, a new algorithm named the bilinear interpolation centroid algorithm was developed for subpixel location of circular optical targets. In this technique, the accuracy of the squared gray weighted centroid algorithm is improved by increasing the number of available pixels through bilinear interpolation. The intensity profile of the imaged points and the signal-to-noise ratio, both of which affect the subpixel location accuracy, are optimized by automatic exposure control. Experiments show that the accuracy of this algorithm is better than that of the traditional centroid algorithm, with an absolute error of less than 0.01 pixels obtained on images of a rigid reference bar.
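The interpolation step that supplies the extra pixels can be illustrated with a small numpy sketch: align-corners bilinear upsampling of a patch by an integer factor. This is a generic implementation for illustration, not the paper's code:

```python
import numpy as np

def bilinear_upsample(img, k):
    """Bilinear upsampling of a 2D array by integer factor k (sketch).

    Output sample positions are mapped linearly onto the input grid
    (align-corners convention), and each output value blends the four
    nearest input pixels.  The denser grid gives the centroid step
    more pixels to work with, as the abstract describes.
    """
    h, w = img.shape
    H, W = h * k, w * k
    ys = np.linspace(0.0, h - 1, H)
    xs = np.linspace(0.0, w - 1, W)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    fy = (ys - y0)[:, None]; fx = (xs - x0)[None, :]
    Y0, Y1 = y0[:, None], y1[:, None]
    X0, X1 = x0[None, :], x1[None, :]
    top = img[Y0, X0] * (1 - fx) + img[Y0, X1] * fx
    bot = img[Y1, X0] * (1 - fx) + img[Y1, X1] * fx
    return top * (1 - fy) + bot * fy
```

Because the interpolant is bilinear, a linear intensity ramp is reproduced exactly; real spot images are only approximated, which is why the paper's bicubic variant can do better on smooth Gaussian-like profiles.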