19 February 2018 Monocular visual odometry-based 3D-2D motion estimation
Proceedings Volume 10608, MIPPR 2017: Automatic Target Recognition and Navigation; 106080K (2018) https://doi.org/10.1117/12.2286251
Event: Tenth International Symposium on Multispectral Image Processing and Pattern Recognition (MIPPR2017), 2017, Xiangyang, China
Abstract
In this paper, two main algorithms for monocular visual odometry based on 3D-2D motion estimation are introduced. A 3D-2D motion estimation method needs to maintain a consistent and accurate set of triangulated 3D features and to create 3D-2D feature matches. Therefore, a keyframe selection strategy is proposed to construct precise 3D point sets. Based on this strategy, an algorithm is designed to select more suitable keyframes by restricting the number of feature points and taking the amount of translation into account. This keyframe selection strategy discards inferior frames and yields more precise 3D point sets. We also design a method that filters 3D-2D feature matches in two different ways, which contributes to more accurate camera pose estimation. The effectiveness and feasibility of the proposed algorithms were verified on both the KITTI outdoor dataset and a real indoor environment. The experimental results show that our algorithms recover the motion trajectory of the camera accurately and meet the real-time and accuracy requirements of monocular visual odometry.
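To make the pipeline concrete, the following is a minimal sketch of the 3D-2D motion estimation step described above, using OpenCV's PnP-with-RANSAC solver and a simple keyframe gate. It is not the authors' implementation: the intrinsics, the thresholds MIN_TRACKED and MIN_TRANSLATION, and the RANSAC parameters are illustrative assumptions, and RANSAC inlier selection stands in here for the paper's two-stage match filtering.

```python
# Sketch of 3D-2D motion estimation with a keyframe gate (assumed parameters).
import numpy as np
import cv2

# Example pinhole intrinsics (assumed, KITTI-like); replace with calibrated values.
K = np.array([[718.856, 0.0, 607.1928],
              [0.0, 718.856, 185.2157],
              [0.0, 0.0, 1.0]])

MIN_TRACKED = 100      # assumed: minimum number of tracked feature points for a keyframe
MIN_TRANSLATION = 0.1  # assumed: minimum camera translation (scene units) for a keyframe


def is_keyframe(num_tracked, translation_norm):
    """Keyframe gate: require enough feature points and enough translation,
    so newly triangulated 3D points are well conditioned."""
    return num_tracked >= MIN_TRACKED and translation_norm >= MIN_TRANSLATION


def estimate_pose_3d2d(points_3d, points_2d):
    """Estimate camera pose from 3D-2D matches with PnP + RANSAC.
    The returned inlier set acts as one match-filtering stage."""
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        points_3d.astype(np.float64), points_2d.astype(np.float64), K, None,
        reprojectionError=3.0, iterationsCount=100, flags=cv2.SOLVEPNP_ITERATIVE)
    if not ok or inliers is None:
        return None
    R, _ = cv2.Rodrigues(rvec)  # rotation vector -> rotation matrix
    return R, tvec, inliers.ravel()


if __name__ == "__main__":
    # Synthetic check: project random 3D points with a known pose, then recover it.
    rng = np.random.default_rng(0)
    pts3d = rng.uniform([-2, -2, 4], [2, 2, 10], size=(200, 3))
    R_true, _ = cv2.Rodrigues(np.array([0.05, -0.02, 0.01]))
    t_true = np.array([[0.3], [0.0], [0.1]])
    proj = (K @ (R_true @ pts3d.T + t_true)).T
    pts2d = proj[:, :2] / proj[:, 2:3]
    result = estimate_pose_3d2d(pts3d, pts2d)
    if result is not None:
        R, t, inliers = result
        print("recovered translation:", t.ravel(), "inliers:", len(inliers))
        print("keyframe accepted:", is_keyframe(len(inliers), np.linalg.norm(t)))
```

In a full odometry loop, frames that fail the keyframe gate would be discarded rather than triangulated, and only the RANSAC inliers would be kept when updating the 3D point set.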
© (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
Yongyuan Jiang, Tongwei Lu, Yao Zhang, and Shihui Ai, "Monocular visual odometry-based 3D-2D motion estimation", Proc. SPIE 10608, MIPPR 2017: Automatic Target Recognition and Navigation, 106080K (19 February 2018); https://doi.org/10.1117/12.2286251