A systematic videometrics method for cooperative-target pose measurement in rendezvous and docking (RVD) is proposed. Initial values of the pose parameters are computed independently from the two binocular images and then refined by bundle adjustment. The analysis shows that even when a deliberate systematic disturbance is added to some exterior parameters of one camera, the correct result can still be recovered, and this is verified in our experiments. The solution converges quickly and stably, and accurate pose results are obtained even when the initial values contain substantial errors or a deliberate disturbance has been added, demonstrating the robustness of the combined binocular and bundle-adjustment approach.
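The refinement step described above can be sketched as a small bundle adjustment: minimize the reprojection error of known target points across both cameras over the six pose parameters. This is a minimal illustration, not the paper's implementation; the marker layout, intrinsics, extrinsics, and perturbation magnitude are all assumed.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

# Hypothetical cooperative-target marker points (target frame).
pts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0],
                [0, 0, 1], [1, 1, 0], [1, 0, 1]], float)

# Shared pinhole intrinsics and two camera extrinsics (world -> camera).
K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1.0]])
cams = [(Rotation.identity(), np.array([0.0, 0.0, 5.0])),
        (Rotation.from_euler('y', 10, degrees=True), np.array([-1.0, 0.0, 5.0]))]

def project(pose6, R_c, t_c):
    """Apply target pose (rotvec + translation), then camera pose, then K."""
    p_world = Rotation.from_rotvec(pose6[:3]).apply(pts) + pose6[3:]
    p_cam = R_c.apply(p_world) + t_c
    uv = (K @ p_cam.T).T
    return uv[:, :2] / uv[:, 2:3]

true_pose = np.array([0.1, -0.05, 0.02, 0.3, -0.2, 0.5])
obs = [project(true_pose, R, t) for R, t in cams]   # synthetic "measurements"

def residuals(pose6):
    # Stack reprojection errors from both cameras into one residual vector.
    return np.concatenate([(project(pose6, R, t) - o).ravel()
                           for (R, t), o in zip(cams, obs)])

# Perturbed initial value (standing in for the per-camera initial solution),
# refined jointly over both views.
sol = least_squares(residuals, true_pose + 0.05)
```

Because the residual vector mixes both cameras' observations, a disturbance confined to one camera's parameters is averaged out rather than propagated directly into the pose, which is the effect the abstract describes.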
To enable navigation with a vision sensor alone, a detection and tracking technique based on a 3D edge model was developed. First, we propose a target-detection strategy that initializes the tracking from a sequence of images and the 3D model; its purpose is to robustly match each image against model views of the target. To this end, we design a line-segment detection and matching method based on multi-scale space. Experiments on real images show that the method is highly robust to a variety of image changes. Second, we propose a method coupling a 3D particle filter (PF) with M-estimation to track the target and estimate its pose efficiently. A similarity observation model is designed from a new distance function between line segments, and the pose obtained from the PF tracking result is then optimized by M-estimation. Experiments indicate that the proposed method can effectively track, and accurately estimate the pose of, a freely moving target in unconstrained environments.
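The PF-plus-M-estimation pipeline can be illustrated on a toy problem: a particle filter produces a coarse state estimate, which a Huber M-estimator then refines robustly. A static 2D position stands in for the pose, and the particle count, noise levels, and the injected outlier are all assumed for illustration.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)

# Toy stand-in for the pose state: a static 2D position.
true_pos = np.array([2.0, -1.0])
meas = true_pos + rng.normal(0, 0.05, size=(20, 2))
meas[0] = [10.0, 10.0]                      # one gross outlier

# --- Particle filter: sequential update over the inlier measurements
# (outlier rejection is left to the robust refinement step below).
n, sigma, jitter = 500, 0.2, 0.05
particles = rng.normal(0, 2.0, size=(n, 2))          # diffuse prior
for z in meas[1:]:
    # Importance weights from a Gaussian observation model.
    w = np.exp(-np.sum((particles - z) ** 2, axis=1) / (2 * sigma ** 2))
    w = (w + 1e-300) / (w + 1e-300).sum()
    particles = particles[rng.choice(n, n, p=w)]      # resample
    particles += rng.normal(0, jitter, size=(n, 2))   # motion jitter
pf_est = particles.mean(axis=0)

# --- M-estimation: Huber refinement over ALL measurements, outlier included,
# initialized from the particle-filter estimate.
sol = least_squares(lambda x: (meas - x).ravel(), pf_est,
                    loss='huber', f_scale=0.1)
```

The Huber loss bounds the outlier's influence on the refined estimate, whereas a plain least-squares fit (equivalently, the sample mean here) would be dragged noticeably toward it.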
Automatic detection of visually salient information in abundant video imagery is crucial, as it plays an important role in surveillance and reconnaissance tasks of Unmanned Aerial Vehicles (UAVs). We propose a real-time approach for detecting salient objects on roads, e.g. stationary and moving vehicles or people, based on region segmentation and saliency detection within related domains. Traditional methods typically depend on additional scene information and auxiliary thermal or infrared sensing for secondary confirmation, whereas the proposed approach detects objects of interest directly from video captured by an optical camera fixed on a small low-altitude UAV platform. To validate the approach, 25 Hz video data from our low-speed small UAV were tested; the results demonstrate excellent performance in isolated rural environments.
The high portability of small Unmanned Aerial Vehicles (UAVs) gives them an important role in surveillance and reconnaissance tasks, so military and civilian demand for UAVs is constantly growing. We have recently developed a real-time video exploitation system for our small UAV, which is mainly used in forest-patrol tasks. The system consists of six key modules: image contrast enhancement, video stabilization, mosaicing, salient target indication, moving target indication, and display of the footprint and flight path on a map. Extensive testing of the system shows that it performs well.
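Of the six modules, the first is the simplest to make concrete: a minimal global histogram-equalization pass for contrast enhancement, assuming 8-bit grayscale frames (the actual system's enhancement method is not specified in the abstract).

```python
import numpy as np

def equalize(frame):
    """Global histogram equalization for an 8-bit grayscale frame:
    remap intensities through the normalized cumulative histogram so the
    output spreads over the full [0, 255] range."""
    hist = np.bincount(frame.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]               # first occupied intensity level
    lut = np.clip(np.round((cdf - cdf_min) / (frame.size - cdf_min) * 255),
                  0, 255).astype(np.uint8)
    return lut[frame]

# Low-contrast frame: intensities squeezed into roughly [100, 140].
frame = np.linspace(100, 140, 64 * 64).reshape(64, 64).astype(np.uint8)
out = equalize(frame)
```

After equalization the darkest occupied level maps to 0 and the brightest to 255, stretching the frame to full dynamic range before the downstream stabilization and target-indication modules.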