Motion lecture annotation system to learn Naginata performances
Abstract
This paper describes a learning assistant system that uses motion-capture data and annotations to teach the performance of “Naginata-jutsu” (the art of the Japanese halberd). Video annotation tools such as YouTube's exist, but these video-based tools offer only a single viewing angle. Our approach, which uses motion-captured data, allows a performance to be viewed from any angle, and a lecturer can attach annotations to specific parts of the body. We compared the effectiveness of YouTube's annotation tool with that of the proposed system. The experimental results showed that our system elicited more annotations than YouTube's annotation tool.
© (2014) Society of Photo-Optical Instrumentation Engineers (SPIE).
Daisuke Kobayashi, Ryota Sakamoto, and Yoshihiko Nomura, "Motion lecture annotation system to learn Naginata performances", Proc. SPIE 9025, Intelligent Robots and Computer Vision XXXI: Algorithms and Techniques, 90250F (3 February 2014); https://doi.org/10.1117/12.2041630
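
The abstract does not detail the paper's data model, but its key idea (annotations bound to body parts over frame ranges of motion-capture data, independent of the camera angle chosen at playback) can be sketched. The following is a minimal, hypothetical Python sketch; the joint names, class names, and methods are illustrative assumptions, not the authors' implementation.

```python
from dataclasses import dataclass, field

# Hypothetical joint set; a real motion-capture skeleton (e.g. from BVH data)
# would define its own joint hierarchy.
JOINTS = {"head", "hips", "right_wrist", "left_wrist", "right_elbow"}

@dataclass
class Annotation:
    """A lecturer's comment tied to one body part over a span of capture frames."""
    joint: str          # skeleton joint the comment refers to
    start_frame: int    # first frame of the annotated motion
    end_frame: int      # last frame (inclusive)
    text: str           # the lecturer's comment

@dataclass
class MotionLecture:
    """A motion-capture clip plus its annotations (sketch, not the paper's API)."""
    frame_rate: float
    annotations: list[Annotation] = field(default_factory=list)

    def annotate(self, joint: str, start: int, end: int, text: str) -> None:
        """Attach a comment to a body part for a range of frames."""
        if joint not in JOINTS:
            raise ValueError(f"unknown joint: {joint}")
        self.annotations.append(Annotation(joint, start, end, text))

    def notes_at(self, frame: int) -> list[Annotation]:
        """Annotations active at a playback frame; because they are keyed to
        joints rather than to a rendered video, they remain valid from any
        viewing angle."""
        return [a for a in self.annotations if a.start_frame <= frame <= a.end_frame]

# Example usage with made-up values.
lecture = MotionLecture(frame_rate=120.0)
lecture.annotate("right_wrist", 240, 360, "Keep the wrist relaxed through the strike.")
for note in lecture.notes_at(300):
    print(f"[{note.joint}] {note.text}")
```

The point of the sketch is the design choice the abstract implies: keying annotations to skeleton joints and frame ranges, rather than to pixels of a fixed-viewpoint video, is what lets the same annotation be displayed from any camera angle.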