3D indoor scene reconstruction and change detection for robotic sensing and navigation
10 May 2017
A new methodology for 3D change detection that can support effective robot sensing and navigation in a reconstructed indoor environment is presented in this paper. We register RGB-D images acquired with an untracked camera into a globally consistent and accurate point-cloud model. The paper introduces a robust system that estimates the camera pose across multiple RGB video frames by combining photometric error with a feature-based method, and it uses the iterative closest point (ICP) algorithm to establish geometric constraints between point clouds as they are aligned. For change detection, a bag-of-words (DBoW) model matches the current frame against previous key frames using Oriented FAST and Rotated BRIEF (ORB) features extracted from the RGB images. The key-frame transformation and ICP are then combined to align the current point cloud with the reconstructed 3D scene and localize the robot, while the estimated camera position and orientation aid robot navigation. After preprocessing the data, we build an OctoMap model to measure scene changes. Experimental evaluations of the algorithm show that the robot's location and orientation are determined accurately, and the change-detection results are promising, indicating all object changes with a very low false-alarm rate.
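The geometric-alignment step described above can be illustrated with a minimal point-to-point ICP sketch in NumPy. This is not the paper's implementation (which operates on dense RGB-D point clouds with key-frame initialization); it is a simplified illustration that uses brute-force nearest-neighbor correspondences and a Kabsch/SVD solve for the rigid transform. All function names here are illustrative.

```python
import numpy as np

def best_fit_transform(A, B):
    """Rigid transform (R, t) minimizing ||R*A + t - B|| for matched points (Kabsch/SVD)."""
    cA, cB = A.mean(axis=0), B.mean(axis=0)
    H = (A - cA).T @ (B - cB)           # cross-covariance of centered clouds
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cB - R @ cA
    return R, t

def icp(src, dst, iters=20):
    """Align src (N x 3) to dst (M x 3); returns cumulative rotation R and translation t."""
    R_total, t_total = np.eye(3), np.zeros(3)
    cur = src.copy()
    for _ in range(iters):
        # Brute-force nearest neighbors (a k-d tree would be used at scale)
        dists = np.linalg.norm(cur[:, None, :] - dst[None, :, :], axis=2)
        matched = dst[dists.argmin(axis=1)]
        R, t = best_fit_transform(cur, matched)
        cur = cur @ R.T + t
        R_total = R @ R_total           # compose incremental transforms
        t_total = R @ t_total + t
    return R_total, t_total
```

In the paper's pipeline, the DBoW/ORB key-frame match provides a coarse initial pose, which keeps an ICP refinement like this one inside its basin of convergence; without that initialization, nearest-neighbor ICP can lock onto a wrong local minimum.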
© (2017) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Ruixu Liu and Vijayan K. Asari, "3D indoor scene reconstruction and change detection for robotic sensing and navigation", Proc. SPIE 10221, Mobile Multimedia/Image Processing, Security, and Applications 2017, 102210D (10 May 2017); doi: 10.1117/12.2262831; https://doi.org/10.1117/12.2262831

