Three-dimensional scene reconstruction from a two-dimensional image (1 May 2017)
Abstract
We propose and simulate a method for reconstructing a three-dimensional scene from a two-dimensional image, intended for developing and augmenting world models for autonomous navigation. The method extends the Perspective-n-Point (PnP) approach: it uses a sampling of pairings between 3D scene points and 2D image points, together with Random Sample Consensus (RANSAC), to infer object pose and produce a 3D mesh of the original scene. Using object recognition and segmentation, we simulate the method on a scene of 3D objects with an eye toward implementation on embedded hardware. The final solution will be deployed on the NVIDIA Tegra platform.
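The core idea above, estimating a camera/object pose from 3D-2D point pairings while rejecting bad pairings with RANSAC, can be sketched in a few lines of NumPy. This is a minimal illustration on synthetic data, not the paper's implementation: it uses a Direct Linear Transform (DLT) as the minimal pose solver inside the RANSAC loop, and all scene geometry, intrinsics, and thresholds below are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic scene: 60 3D points in front of the camera (hypothetical data).
X = rng.uniform([-1, -1, 4], [1, 1, 8], size=(60, 3))

# Assumed ground-truth camera: intrinsics K, a small rotation R, translation t.
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
th = 0.1
R = np.array([[np.cos(th), -np.sin(th), 0],
              [np.sin(th),  np.cos(th), 0],
              [0, 0, 1]])
t = np.array([0.2, -0.1, 0.5])

def project(P, X):
    """Project 3D points through a 3x4 projection matrix."""
    Xh = np.hstack([X, np.ones((len(X), 1))])
    x = (P @ Xh.T).T
    return x[:, :2] / x[:, 2:3]

P_true = K @ np.hstack([R, t[:, None]])
x = project(P_true, X)

# Corrupt 20% of the 2D points to simulate bad 3D-2D pairings.
n_out = len(x) // 5
x[:n_out] += rng.uniform(50, 100, size=(n_out, 2))

def dlt_pose(X, x):
    """DLT: estimate the 3x4 projection matrix from >= 6 pairings."""
    A = []
    for Xw, xi in zip(X, x):
        Xh = np.append(Xw, 1.0)
        A.append(np.concatenate([Xh, np.zeros(4), -xi[0] * Xh]))
        A.append(np.concatenate([np.zeros(4), Xh, -xi[1] * Xh]))
    # The solution is the right singular vector of the smallest singular value.
    _, _, Vt = np.linalg.svd(np.asarray(A))
    return Vt[-1].reshape(3, 4)

def ransac_pnp(X, x, iters=200, thresh=2.0):
    """RANSAC: fit minimal samples, keep the largest reprojection-inlier set."""
    best = np.zeros(len(X), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(X), 6, replace=False)
        P = dlt_pose(X[idx], x[idx])
        err = np.linalg.norm(project(P, X) - x, axis=1)
        inliers = err < thresh
        if inliers.sum() > best.sum():
            best = inliers
    # Refit on all inliers for the final pose estimate.
    return dlt_pose(X[best], x[best]), best

P_est, inliers = ransac_pnp(X, x)
print(inliers.sum(), "inliers of", len(X))
```

In practice one would use a calibrated minimal solver such as P3P rather than the uncalibrated DLT shown here; the RANSAC structure (sample, fit, score by reprojection error, refit on the consensus set) is the same.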
Conference Presentation
© 2017 Society of Photo-Optical Instrumentation Engineers (SPIE).
Franz Parkins and Eddie Jacobs, "Three-dimensional scene reconstruction from a two-dimensional image", Proc. SPIE 10199, Geospatial Informatics, Fusion, and Motion Video Analytics VII, 1019909 (1 May 2017); https://doi.org/10.1117/12.2266411
PROCEEDINGS: 7 pages + presentation
