A new point process model for trajectory-based events annotation
2 February 2012
Abstract
Human action annotation in videos has received increasing attention from the scientific community in recent years, mainly due to its importance in many computer vision applications. The current leading paradigm for human action annotation is based on local features: features robust to geometric transformations and occlusion are extracted from a video and aggregated to obtain a global video signature. However, current aggregation schemes such as Bag-of-Words or spatio-temporal grids retain little or no information about the spatio-temporal localization of local features in videos, even though such localization has been shown to be helpful for detecting a concept or an action. In this work we improve on the aggregation step by embedding the spatio-temporal information of local features in the final video representation through a point process model. We propose an event recognition system involving two main steps: (1) local feature extraction based on robust point trajectories, and (2) a global action representation capturing spatio-temporal context information through an innovative point process clustering. A point process indeed provides a well-defined formalism to characterize the localization of local features along with their interactions. Results are evaluated on the Hollywood Human Actions (HOHA) dataset, showing an improvement over the state of the art.
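To illustrate the aggregation contrast the abstract describes, below is a minimal Python sketch comparing a plain Bag-of-Words histogram, which discards localization, with a variant that appends crude per-word location statistics. All array shapes, the random inputs, and the mean/spread statistics are illustrative assumptions; this is a stand-in for the idea, not the paper's actual point process clustering.

import numpy as np

# Hypothetical inputs: one descriptor per point trajectory plus its
# normalized (x, y, t) location; shapes are illustrative only.
rng = np.random.default_rng(0)
descriptors = rng.normal(size=(500, 64))   # 500 local trajectory descriptors
locations = rng.uniform(size=(500, 3))     # (x, y, t) per trajectory
codebook = rng.normal(size=(100, 64))      # visual vocabulary (e.g. from k-means)

# Assign each descriptor to its nearest visual word.
dists = np.linalg.norm(descriptors[:, None, :] - codebook[None, :, :], axis=2)
words = dists.argmin(axis=1)

# Plain Bag-of-Words: a normalized word histogram with no localization.
bow = np.bincount(words, minlength=len(codebook)).astype(float)
bow /= bow.sum()

# Localization-aware variant: per-word mean location and spread, a crude
# proxy for the localization/interaction statistics a point process
# model would characterize formally.
context = np.zeros((len(codebook), 6))
for w in range(len(codebook)):
    pts = locations[words == w]
    if len(pts):
        context[w, :3] = pts.mean(axis=0)
        context[w, 3:] = pts.std(axis=0)

# Final video signature: histogram plus spatio-temporal context.
signature = np.concatenate([bow, context.ravel()])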
© (2012) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Nicolas Ballas, Bertrand Delezoide, and Françoise Prêteux "A new point process model for trajectory-based events annotation", Proc. SPIE 8300, Image Processing: Machine Vision Applications V, 83000B (2 February 2012); https://doi.org/10.1117/12.912088
Proceedings paper, 12 pages

