Spatial and temporal segmented dense trajectories for gesture recognition
14 May 2017
Proceedings Volume 10338, Thirteenth International Conference on Quality Control by Artificial Vision 2017; 103380F (2017) https://doi.org/10.1117/12.2266859
Event: The International Conference on Quality Control by Artificial Vision 2017, 2017, Tokyo, Japan
Abstract
Recently, dense trajectories [1] have been shown to be a successful video representation for action recognition, and have demonstrated state-of-the-art results on a variety of datasets. However, when these trajectories are applied to gesture recognition, they struggle to distinguish similar and fine-grained motions. In this paper, we propose a new method in which dense trajectories are calculated in segmented regions around detected human body parts. Spatial segmentation is achieved by body part detection [2]. Temporal segmentation is performed for a fixed number of video frames. The proposed method removes background video noise and can recognize similar and fine-grained motions. Only a few video datasets are available for gesture classification; therefore, we have constructed a new gesture dataset and evaluated the proposed method using this dataset. The experimental results show that the proposed method outperforms the original dense trajectories.
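The spatio-temporal segmentation described in the abstract can be illustrated with a small sketch. This is not the authors' implementation; it is a hypothetical Python/NumPy illustration that assumes a body-part detector has already produced per-frame joint coordinates and that dense trajectories have already been extracted elsewhere (e.g. with optical flow). The helper names, the circular `radius` around each joint, and the `seg_len` of 15 frames are all illustrative assumptions, not values from the paper.

```python
import numpy as np

def spatial_masks(joints, frame_shape, radius=20):
    """Build one binary mask per frame that keeps only circular regions
    around detected body parts (spatial segmentation).
    joints: (T, J, 2) array of (x, y) positions from a hypothetical detector."""
    num_frames = joints.shape[0]
    h, w = frame_shape
    masks = np.zeros((num_frames, h, w), dtype=bool)
    ys, xs = np.mgrid[0:h, 0:w]
    for t in range(num_frames):
        for (x, y) in joints[t]:
            masks[t] |= (xs - x) ** 2 + (ys - y) ** 2 <= radius ** 2
    return masks

def temporal_segments(num_frames, seg_len=15):
    """Split frame indices into fixed-length temporal segments."""
    return [list(range(s, min(s + seg_len, num_frames)))
            for s in range(0, num_frames, seg_len)]

def filter_trajectories(trajectories, masks, segments):
    """Keep only trajectories whose points all fall inside the body-part
    masks, grouped by the temporal segment in which each one starts.
    trajectories: list of (start_frame, points), points being (x, y) per frame."""
    seg_of = {f: i for i, seg in enumerate(segments) for f in seg}
    grouped = {i: [] for i in range(len(segments))}
    for start, pts in trajectories:
        inside = all(masks[start + k, int(y), int(x)]
                     for k, (x, y) in enumerate(pts))
        if inside:  # background trajectories are discarded as noise
            grouped[seg_of[start]].append((start, pts))
    return grouped
```

In this sketch, descriptors would then be computed per (body part, temporal segment) group rather than over the whole frame, which is one plausible way to realise the "segmented dense trajectories" idea the abstract describes.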
© (2017) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Kaho Yamada, Takeshi Yoshida, Kazuhiko Sumi, Hitoshi Habe, and Ikuhisa Mitsugami, "Spatial and temporal segmented dense trajectories for gesture recognition", Proc. SPIE 10338, Thirteenth International Conference on Quality Control by Artificial Vision 2017, 103380F (14 May 2017); https://doi.org/10.1117/12.2266859
Proceedings paper, 8 pages.