29 August 2016 Using triplet loss to generate better descriptors for 3D object retrieval
Proceedings Volume 10033, Eighth International Conference on Digital Image Processing (ICDIP 2016); 100335P (2016) https://doi.org/10.1117/12.2243781
Event: Eighth International Conference on Digital Image Processing (ICDIP 2016), 2016, Chengdu, China
Abstract
This paper investigates the 3D object retrieval problem by adapting a convolutional network and introducing a triplet loss into its training process. The 3D objects are converted to voxelized volumetric grids and then fed into the network. The outputs of the first fully connected layer are taken as the 3D object descriptors. The triplet loss is designed to make the learned descriptors more suitable for retrieval. Experiments demonstrate that our descriptors are distinctive for objects from different categories and similar among those from the same category, performing much better than traditional handcrafted features such as SPH and LFD. The superiority over another deep-network-based method, ShapeNets, validates the effectiveness of the triplet loss in driving same-class descriptors to cluster and different-class ones to disperse.
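The abstract describes training descriptors with a triplet loss so that same-class descriptors cluster and different-class ones disperse. A minimal sketch of that loss in plain Python, assuming the standard hinge formulation; the function names and the margin value are illustrative assumptions, not taken from the paper:

```python
def sq_dist(u, v):
    """Squared Euclidean distance between two descriptor vectors."""
    return sum((a - b) ** 2 for a, b in zip(u, v))

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Hinge-style triplet loss: pulls the same-class (positive) descriptor
    toward the anchor and pushes the different-class (negative) descriptor
    away by at least `margin`. Zero loss once the negative is far enough."""
    return max(0.0, sq_dist(anchor, positive) - sq_dist(anchor, negative) + margin)

# Example: a well-separated triplet incurs zero loss.
a = [0.0, 1.0]
p = [0.1, 0.9]   # same category: close to the anchor
n = [5.0, 5.0]   # different category: far from the anchor
print(triplet_loss(a, p, n))  # 0.0
```

In training, such a loss would be summed over mined triplets of descriptors (here, the outputs of the first fully connected layer) and minimized by gradient descent through the network.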
© (2016) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Haowen Deng, Lei Luo, Mei Wen, and Chunyuan Zhang, "Using triplet loss to generate better descriptors for 3D object retrieval", Proc. SPIE 10033, Eighth International Conference on Digital Image Processing (ICDIP 2016), 100335P (29 August 2016); https://doi.org/10.1117/12.2243781
PROCEEDINGS, 5 PAGES