iGRaND: an invariant frame for RGBD sensor feature detection and descriptor extraction with applications (1 June 2016)
This article describes a new 3D RGBD image feature, referred to as iGRaND, for use in real-time systems that employ RGBD sensors for tracking, motion capture, or robotic vision. iGRaND features use a novel local reference frame derived from the image gradient and depth normal (hence iGRaND) that is invariant to scale and viewpoint for Lambertian surfaces. Using this reference frame, Euclidean-invariant feature components are computed at keypoints, fusing local geometric shape information with surface appearance information. The performance of the feature for real-time odometry is analyzed, and its computational complexity and accuracy are compared with those of leading alternative 3D features.
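As an illustration of the general idea behind a gradient-and-normal local reference frame, the sketch below builds an orthonormal frame at a keypoint from a depth-derived surface normal and a 3D-lifted image gradient. This is a hedged, minimal example assuming the standard Gram-Schmidt/cross-product construction; the function name and inputs are hypothetical and it is not the authors' exact iGRaND construction.

```python
import numpy as np

def local_reference_frame(normal, image_gradient):
    """Sketch of a gradient/normal local reference frame at a keypoint.

    Assumptions (not from the paper): the depth normal fixes one axis,
    the image gradient projected into the tangent plane fixes a second,
    and a cross product completes a right-handed orthonormal frame.
    """
    z = normal / np.linalg.norm(normal)                         # axis from the depth normal
    g_tan = image_gradient - np.dot(image_gradient, z) * z      # gradient component in tangent plane
    x = g_tan / np.linalg.norm(g_tan)                           # axis from the image gradient
    y = np.cross(z, x)                                          # complete right-handed frame
    return np.column_stack((x, y, z))                           # 3x3 rotation; columns are the axes

# Example: keypoint on a tilted surface with a roughly horizontal intensity gradient
R = local_reference_frame(np.array([0.1, 0.2, 0.97]), np.array([1.0, 0.0, 0.0]))
print(R)
```

Because both inputs rotate with the surface (and the gradient direction is stable for Lambertian surfaces), descriptor components expressed in such a frame can be made invariant to viewpoint and scale.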
Andrew R. Willis and Kevin M. Brink
"iGRaND: an invariant frame for RGBD sensor feature detection and descriptor extraction with applications", Proc. SPIE 9867, Three-Dimensional Imaging, Visualization, and Display 2016, 98670P (1 June 2016); https://doi.org/10.1117/12.2225540