Coupled multiview autoencoders with locality sensitivity for three-dimensional human pose estimation
Abstract
Estimating three-dimensional (3D) human poses from a single camera is usually implemented by searching for pose candidates with image descriptors. Existing methods commonly assume that the mapping from feature space to pose space is linear, but in fact this mapping is highly nonlinear, which heavily degrades the performance of 3D pose estimation. We propose a method to recover 3D pose from a silhouette image based on multiview feature embedding (MFE) and locality-sensitive autoencoders (LSAEs). First, we formulate a manifold-regularized sparse low-rank approximation for MFE, so that the input image is characterized by a fused feature descriptor. Then, the fused feature and its corresponding 3D pose are separately encoded by LSAEs, and a two-layer back-propagation neural network, trained with parameter fine-tuning, maps the encoded 2D features to the encoded 3D poses. The LSAE preserves the local topology of the data points. Experimental results demonstrate the effectiveness of the proposed method.
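To illustrate the coupled-autoencoder pipeline summarized above, the following is a minimal sketch, assuming PyTorch; all dimensions, layer sizes, and the locality_penalty surrogate are illustrative assumptions and stand in for the paper's MFE and LSAE formulations, not the authors' implementation.

# Minimal sketch (not the authors' code): coupled autoencoders for 2D features
# and 3D poses, a two-layer mapping network, and a crude locality-sensitivity term.
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self, in_dim, code_dim):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, code_dim), nn.Sigmoid())
        self.decoder = nn.Sequential(nn.Linear(code_dim, in_dim), nn.Sigmoid())

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

def locality_penalty(codes, inputs, k=5):
    # Hypothetical surrogate for locality sensitivity: pull the codes of
    # nearest-neighbour inputs together (the paper defines its own weighting).
    dists = torch.cdist(inputs, inputs)
    _, idx = dists.topk(k + 1, largest=False)      # self plus k nearest neighbours
    neigh = codes[idx[:, 1:]]                      # codes of the k neighbours
    return ((codes.unsqueeze(1) - neigh) ** 2).mean()

# Toy data: fused 2D image features (assumed 100-D) and 3D poses (assumed 51-D).
feats = torch.rand(256, 100)
poses = torch.rand(256, 51)

ae_feat, ae_pose = Autoencoder(100, 32), Autoencoder(51, 32)
mapper = nn.Sequential(nn.Linear(32, 64), nn.Sigmoid(), nn.Linear(64, 32))  # two-layer BP net

params = list(ae_feat.parameters()) + list(ae_pose.parameters()) + list(mapper.parameters())
opt = torch.optim.Adam(params, lr=1e-3)
mse = nn.MSELoss()

for epoch in range(200):
    opt.zero_grad()
    rec_f, z_f = ae_feat(feats)
    rec_p, z_p = ae_pose(poses)
    loss = (mse(rec_f, feats) + mse(rec_p, poses)   # reconstruction of both views
            + 0.1 * locality_penalty(z_f, feats)    # locality-sensitivity surrogate
            + mse(mapper(z_f), z_p))                # coupled feature-to-pose mapping
    loss.backward()
    opt.step()                                      # joint fine-tuning of all parameters

# Inference: encode the 2D feature, map it to a pose code, decode to a 3D pose.
with torch.no_grad():
    pred_pose = ae_pose.decoder(mapper(ae_feat.encoder(feats[:1])))

In this sketch the mapping network is trained jointly with both autoencoders; the paper's two-stage training (pretraining followed by fine-tuning) would split this loop accordingly.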
© 2017 SPIE and IS&T
Jialin Yu, Jifeng Sun, Shasha Luo, Bichao Duan, "Coupled multiview autoencoders with locality sensitivity for three-dimensional human pose estimation," Journal of Electronic Imaging 26(5), 053026 (27 October 2017). https://doi.org/10.1117/1.JEI.26.5.053026
Received: 17 June 2017; Accepted: 3 October 2017; Published: 27 October 2017
Journal article, 13 pages.

