Simultaneous 3-D human-motion tracking and voxel reconstruction
Abstract
When no dynamic prior is given, the main obstacle to practical human-motion tracking is the high dimensionality of a 3-D articulated full-body model. We present a method for 3-D human-motion tracking in which no training data are available in advance and no motion-pattern prior is assumed. Built on the annealed-particle-filter framework, the proposed algorithm incrementally learns an eigenspace as a compact representation of motion patterns and efficiently adapts to pose changes. The model is updated by principal-component analysis, with a forgetting factor introduced to avoid overfitting. In addition, the likelihood measure is modeled by minimizing a cost function on a 3-D Markov random field (MRF) that integrates information from the visual hull and shape priors; a dynamic graph cut speeds up the minimization. As a result, the proposed approach obtains the pose in parallel with the voxel data. Experimental results suggest that our method performs online tracking robustly and, from sparse camera views, generates reconstructions that surpass the shape-from-silhouette method.
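To illustrate the incremental eigenspace learning described above, the sketch below shows one possible update step: the retained subspace is summarized by its scaled eigenvectors, stacked with newly observed pose vectors, and re-decomposed, with a forgetting factor down-weighting old data so the model adapts to recent pose changes rather than overfitting to history. This is a minimal assumption-laden sketch, not the paper's exact algorithm; all function names, parameters, and dimensions here are illustrative.

```python
import numpy as np

def update_eigenspace(mean, eigvecs, eigvals, n_seen, new_samples,
                      forget=0.95, k=10):
    """One incremental eigenspace update with a forgetting factor (sketch)."""
    X_new = np.atleast_2d(new_samples)                 # (m, d) new pose vectors
    m, d = X_new.shape

    # Discount old statistics by the forgetting factor, then fold in new data.
    n_eff = forget * n_seen
    new_mean = (n_eff * mean + X_new.sum(axis=0)) / (n_eff + m)

    # Encode the retained subspace energy as scaled basis vectors.
    if eigvecs is None:
        old_part = np.empty((0, d))
    else:
        old_part = np.sqrt(forget * eigvals)[:, None] * eigvecs

    # Re-estimate the eigenspace from old energy plus centered new samples.
    stacked = np.vstack([old_part, X_new - new_mean])
    _, s, Vt = np.linalg.svd(stacked, full_matrices=False)

    return new_mean, Vt[:k], s[:k] ** 2, n_eff + m


# Usage: start from an empty model and feed pose vectors as they arrive.
d = 30                                  # assumed joint-angle vector length
mean, vecs, vals, n = np.zeros(d), None, None, 0.0
for t in range(5):
    batch = np.random.randn(4, d)       # stand-in for tracked pose samples
    mean, vecs, vals, n = update_eigenspace(mean, vecs, vals, n, batch)
```

With `forget < 1`, older observations decay geometrically in both the mean and the retained eigen-energy, which is one simple way to realize the adaptation-without-overfitting behavior the abstract attributes to the forgetting factor.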
©(2010) Society of Photo-Optical Instrumentation Engineers (SPIE)
Junchi Yan, Jian Song, and Yuncai Liu "Simultaneous 3-D human-motion tracking and voxel reconstruction," Optical Engineering 49(9), 097201 (1 September 2010). https://doi.org/10.1117/1.3488040
Published: 1 September 2010


