Paper
3 February 2014
Motion lecture annotation system to learn Naginata performances
Daisuke Kobayashi, Ryota Sakamoto, Yoshihiko Nomura
Proceedings Volume 9025, Intelligent Robots and Computer Vision XXXI: Algorithms and Techniques; 90250F (2014) https://doi.org/10.1117/12.2041630
Event: IS&T/SPIE Electronic Imaging, 2014, San Francisco, California, United States
Abstract
This paper describes a learning assistant system that uses motion capture data and annotations to teach “Naginata-jutsu” (the art of the Japanese halberd). Existing video annotation tools, such as YouTube's, offer only a single angle of view. Our approach, which uses motion-captured data, allows a performance to be viewed from any angle, and a lecturer can attach annotations to specific parts of the body. We compared the effectiveness of YouTube's annotation tool with that of the proposed system; the experimental results showed that our system elicited more annotations than YouTube's tool.
© (2014) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Daisuke Kobayashi, Ryota Sakamoto, and Yoshihiko Nomura "Motion lecture annotation system to learn Naginata performances", Proc. SPIE 9025, Intelligent Robots and Computer Vision XXXI: Algorithms and Techniques, 90250F (3 February 2014); https://doi.org/10.1117/12.2041630
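The abstract's core idea can be pictured as a data model in which each annotation is anchored to a body part and a span of motion-capture frames, rather than to a frame of a single-view video. The Python sketch below is a minimal illustration under that assumption; the names (MotionAnnotation, MotionLecture, notes_at) and the .bvh file are hypothetical, not the authors' implementation.

from dataclasses import dataclass, field

@dataclass
class MotionAnnotation:
    # Hypothetical record: a lecturer's note tied to a body part and a
    # frame span of the capture, so it stays valid from any viewing angle.
    start_frame: int   # first motion-capture frame the note applies to
    end_frame: int     # last frame it applies to
    body_part: str     # e.g. "right_wrist", "left_knee"
    text: str          # the lecturer's comment

@dataclass
class MotionLecture:
    capture_file: str  # path to the motion-capture take (assumed format)
    annotations: list[MotionAnnotation] = field(default_factory=list)

    def annotate(self, start: int, end: int, part: str, text: str) -> None:
        self.annotations.append(MotionAnnotation(start, end, part, text))

    def notes_at(self, frame: int) -> list[MotionAnnotation]:
        # Annotations active at a given frame, for overlay while the
        # learner orbits the 3D skeleton to any viewpoint.
        return [a for a in self.annotations
                if a.start_frame <= frame <= a.end_frame]

lecture = MotionLecture("naginata_take01.bvh")
lecture.annotate(120, 180, "right_wrist",
                 "Keep the wrist relaxed on the downswing.")
print(lecture.notes_at(150)[0].text)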
KEYWORDS: Video, Motion measurement, 3D modeling, Motion models, Human-machine interfaces, Image segmentation, Sensors