31 January 2018
Multispectral embedding-based deep neural network for three-dimensional human pose recovery
Jialin Yu, Jifeng Sun
Abstract
Monocular image-based three-dimensional (3-D) human pose recovery aims to retrieve 3-D poses from the corresponding two-dimensional image features, so recovery performance depends heavily on the image representation. We propose a multispectral embedding-based deep neural network (MSEDNN) that automatically obtains the most discriminative features from multiple deep convolutional neural networks and embeds their penultimate fully connected layers into a low-dimensional manifold. This compact manifold exploits not only the optimal outputs of the individual deep networks but also their complementary properties. Furthermore, the distribution of each hierarchical discriminative manifold is sufficiently smooth that our MSEDNN can be trained effectively with only a small amount of labeled data. The proposed network contains a body-joint detector and a human pose regressor that are trained jointly. Extensive experiments on four databases show that MSEDNN achieves the best recovery performance compared with state-of-the-art methods.
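As a rough illustration of the pipeline described above (several deep CNNs whose penultimate fully connected features are embedded into a shared low-dimensional manifold that feeds a jointly trained body-joint detector and 3-D pose regressor), the following PyTorch snippet is a minimal sketch. The backbone layers, feature dimensions, joint count, embedding size, and loss formulation are illustrative assumptions, not the configuration reported in the paper.

# Minimal sketch of the multispectral-embedding idea, assuming small stand-in CNN
# backbones; the real MSEDNN uses deep pretrained networks and its own dimensions.
import torch
import torch.nn as nn

NUM_JOINTS = 16   # assumed number of body joints
EMBED_DIM = 64    # assumed size of the low-dimensional manifold


def small_cnn(out_dim):
    """Stand-in for one deep CNN; its last linear layer plays the role of the
    penultimate fully connected layer whose output is embedded."""
    return nn.Sequential(
        nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(64, out_dim), nn.ReLU(),
    )


class MSEDNNSketch(nn.Module):
    def __init__(self, num_nets=3, feat_dim=256):
        super().__init__()
        # several deep networks whose penultimate features are fused
        self.backbones = nn.ModuleList([small_cnn(feat_dim) for _ in range(num_nets)])
        # embedding of the concatenated features into a low-dimensional manifold
        self.embed = nn.Linear(num_nets * feat_dim, EMBED_DIM)
        # body-joint detector (2-D locations) and 3-D pose regressor share the embedding
        self.joint_detector = nn.Linear(EMBED_DIM, NUM_JOINTS * 2)
        self.pose_regressor = nn.Linear(EMBED_DIM, NUM_JOINTS * 3)

    def forward(self, x):
        feats = torch.cat([net(x) for net in self.backbones], dim=1)
        z = torch.tanh(self.embed(feats))
        joints_2d = self.joint_detector(z).view(-1, NUM_JOINTS, 2)
        pose_3d = self.pose_regressor(z).view(-1, NUM_JOINTS, 3)
        return joints_2d, pose_3d


if __name__ == "__main__":
    model = MSEDNNSketch()
    images = torch.randn(4, 3, 224, 224)
    j2d, p3d = model(images)
    # joint training: sum of 2-D detection and 3-D regression losses (weights assumed)
    gt_2d, gt_3d = torch.randn_like(j2d), torch.randn_like(p3d)
    loss = nn.functional.mse_loss(j2d, gt_2d) + nn.functional.mse_loss(p3d, gt_3d)
    loss.backward()

The two heads share the embedding and are optimized with a single summed loss, which mirrors the joint training of the detector and regressor mentioned in the abstract.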
© 2018 Society of Photo-Optical Instrumentation Engineers (SPIE). 0091-3286/2018/$25.00.
Jialin Yu and Jifeng Sun "Multispectral embedding-based deep neural network for three-dimensional human pose recovery," Optical Engineering 57(1), 013107 (31 January 2018). https://doi.org/10.1117/1.OE.57.1.013107
Received: 1 November 2017; Accepted: 9 January 2018; Published: 31 January 2018
CITATIONS
Cited by 2 scholarly publications.
KEYWORDS
3D modeling, Data modeling, Neural networks, 3D image processing, Sun, Motion models, Optical engineering