Simultaneous 3-D human-motion tracking and voxel reconstruction (1 September 2010)
Optical Engineering, 49(9), 097201 (2010). doi:10.1117/1.3488040
Under the assumption that no dynamic prior is given, the main obstacle to practical human-motion tracking is the high number of dimensions associated with a 3-D articulated full-body model. We present a method for 3-D human-motion tracking when training data are unavailable in advance and no motion-pattern prior is assumed. Based on the framework of the annealed particle filter, the proposed algorithm incrementally learns an eigenspace as a compact representation of motion patterns and efficiently adapts to pose changes. The model is updated using principal-component analysis, and a forgetting factor is introduced to avoid overfitting. In addition, the likelihood measure is modeled by minimizing a cost function on a 3-D Markov random field (MRF), which integrates information from the visual hull and shape priors. A dynamic graph cut is performed to speed up the minimization. As a result, the proposed approach obtains the pose in parallel with the voxel data. Experimental results suggest that our method performs online tracking robustly and generates reconstructions that improve on the shape-from-silhouette method from sparse camera views.
Junchi Yan, Jian Song, Yuncai Liu, "Simultaneous 3-D human-motion tracking and voxel reconstruction," Optical Engineering 49(9), 097201 (1 September 2010). https://doi.org/10.1117/1.3488040
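The two computational cores described in the abstract are the incremental eigenspace update with a forgetting factor and the graph-cut minimization of the voxel MRF. The sketches below are illustrative only and are not the authors' implementation: the function names, parameter values, and the use of NumPy and the third-party PyMaxflow package are assumptions made for the example.

A minimal sketch of an incremental-PCA eigenspace update with a forgetting factor, assuming pose vectors arrive in small batches:

```python
# Illustrative sketch only (not the paper's code): merge a batch of new pose
# vectors into an existing eigenspace, down-weighting older data with a
# forgetting factor so the model adapts to recent motion.
import numpy as np

def update_eigenspace(mean, basis, sing_vals, n_seen, new_poses, forget=0.95, k=10):
    """mean: (d,), basis: (d, k), sing_vals: (k,), new_poses: (m, d)."""
    m, d = new_poses.shape
    n_eff = forget * n_seen                      # effective weight of history
    batch_mean = new_poses.mean(axis=0)
    new_mean = (n_eff * mean + m * batch_mean) / (n_eff + m)

    # Augment the (down-weighted) old basis with the centered new data and a
    # mean-shift correction column, then re-factorize with an SVD.
    centered = (new_poses - batch_mean).T        # (d, m)
    shift = np.sqrt(n_eff * m / (n_eff + m)) * (batch_mean - mean)
    aug = np.hstack([forget * basis * sing_vals, centered, shift[:, None]])
    U, S, _ = np.linalg.svd(aug, full_matrices=False)
    return new_mean, U[:, :k], S[:k], n_eff + m
```

A minimal sketch of the voxel-labeling step, assuming per-voxel occupancy evidence from the visual hull and from a pose-dependent shape prior. Here the binary MRF energy is minimized with an ordinary graph cut via PyMaxflow; the paper's dynamic graph cut additionally reuses flow between frames, which is omitted:

```python
# Illustrative sketch only: unary costs combine visual-hull evidence with a
# shape prior predicted from the current pose hypothesis; a Potts term
# enforces smoothness between neighbouring voxels.
import numpy as np
import maxflow

def segment_voxels(hull_prob, shape_prior, smooth=1.0, w_hull=1.0, w_prior=0.5):
    """hull_prob, shape_prior: 3-D arrays in [0, 1]; returns a boolean
    occupancy grid (True = occupied)."""
    eps = 1e-6
    cost_occupied = -(w_hull * np.log(hull_prob + eps)
                      + w_prior * np.log(shape_prior + eps))
    cost_empty = -(w_hull * np.log(1.0 - hull_prob + eps)
                   + w_prior * np.log(1.0 - shape_prior + eps))

    g = maxflow.Graph[float]()
    nodes = g.add_grid_nodes(hull_prob.shape)
    g.add_grid_edges(nodes, smooth)                    # pairwise smoothness
    g.add_grid_tedges(nodes, cost_occupied, cost_empty)  # unary data costs
    g.maxflow()
    return g.get_grid_segments(nodes)                  # True where occupied
```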

Keywords: 3D modeling; motion models; detection and tracking algorithms; optical tracking
