A motion correction method for indoor robot based on lidar feature extraction and matching
Abstract
For robots that perform indoor environment detection, positioning, and navigation with a Light Detection and Ranging (Lidar) system, the accuracy of map building, positioning, and navigation is largely limited by the accuracy of the robot's motion. Because of manufacturing and transmission errors in the mechanical structure, sensors that are easily disturbed by the environment, and other factors, cumulative motion error is unavoidable. This paper presents a set of methods to address these problems: a point set partition and feature extraction method for processing Lidar scan points, and a feature matching method for correcting the motion process, offering lower computational cost, more reasonable and rigorous thresholds, a wider scope of application, and higher efficiency and accuracy. While extracting environment features and building the indoor map, these methods analyze and correct the robot's motion error, improving the accuracy of both the movement and the map without any additional hardware. Experiments show that the rotation error and translation error of the robot platform used in the experiments can be reduced by 50% and 70%, respectively. The methods markedly improve motion accuracy and demonstrate strong effectiveness and practicality.
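The abstract does not spell out the exact partition, extraction, and matching algorithms, so the following is only a minimal illustrative sketch of the general idea: partitioning an ordered 2D Lidar scan into segments with a split-and-merge style test, fitting a line direction to each segment, and estimating a rotation correction from matched line directions between consecutive scans. All function names, thresholds, and the choice of split-and-merge matching are assumptions for illustration, not the authors' implementation.

```python
"""Illustrative sketch (assumptions, not the paper's algorithm):
split-and-merge partition of a 2D lidar scan, line-direction features,
and a rotation correction from matched features between two scans."""
import numpy as np


def polar_to_xy(ranges, angles):
    """Convert lidar range/bearing samples to Cartesian points (N x 2)."""
    return np.column_stack((ranges * np.cos(angles), ranges * np.sin(angles)))


def split_points(points, dist_thresh=0.05):
    """Recursively partition an ordered point set wherever the farthest point
    from the chord joining the endpoints exceeds dist_thresh (split step of
    split-and-merge). Threshold in meters, chosen arbitrarily here."""
    if len(points) < 3:
        return [points]
    p0, p1 = points[0], points[-1]
    chord = p1 - p0
    norm = np.linalg.norm(chord)
    if norm < 1e-9:
        return [points]
    # Perpendicular distance of every point to the endpoint chord.
    d = np.abs(np.cross(chord, points - p0)) / norm
    i = int(np.argmax(d))
    if d[i] > dist_thresh:
        return (split_points(points[: i + 1], dist_thresh)
                + split_points(points[i:], dist_thresh))
    return [points]


def line_direction(segment):
    """Principal direction (angle in radians) of a point segment via SVD."""
    centered = segment - segment.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    dx, dy = vt[0]
    return np.arctan2(dy, dx)


def rotation_correction(prev_scan, curr_scan, match_thresh=np.deg2rad(10)):
    """Estimate the rotation error between two scans as the mean angular
    offset of nearest-direction matched line features."""
    prev_dirs = [line_direction(s) for s in split_points(prev_scan) if len(s) >= 5]
    curr_dirs = [line_direction(s) for s in split_points(curr_scan) if len(s) >= 5]
    offsets = []
    for a in curr_dirs:
        # Wrapped angular differences to every previous-scan feature.
        diffs = [np.arctan2(np.sin(a - b), np.cos(a - b)) for b in prev_dirs]
        if diffs:
            best = min(diffs, key=abs)
            if abs(best) < match_thresh:
                offsets.append(best)
    return float(np.mean(offsets)) if offsets else 0.0
```

A translation correction could be estimated analogously from the offsets of matched line segments along their normals; the paper reports that such feature-based corrections reduce rotation and translation error by roughly 50% and 70% on its experimental platform.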
© 2018 Society of Photo-Optical Instrumentation Engineers (SPIE).
Jiansong Gou, Yu Guo, Yang Wei, Zheng Li, Yeming Zhao, Lirong Wang, Xiaohe Chen, "A motion correction method for indoor robot based on lidar feature extraction and matching," Proc. SPIE 10621, 2017 International Conference on Optical Instruments and Technology: Optoelectronic Measurement Technology and Systems, 106210X (12 January 2018); https://doi.org/10.1117/12.2288067