Weighted fusion of depth and inertial data to improve view invariance for real-time human action recognition
1 May 2017
Abstract
This paper presents an extension of our previously developed fusion framework [10], which combines a depth camera and an inertial sensor, in order to improve its view invariance for real-time human action recognition. A computationally efficient view estimation based on skeleton joints is used to select the most relevant depth training data when recognizing test samples. Two collaborative representation classifiers, one for depth features and one for inertial features, are appropriately weighted to generate a decision-making probability. Experimental results on a multi-view human action dataset show that this weighted extension improves recognition performance by about 5% over the equally weighted fusion used in our previous framework.
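The abstract does not include an implementation; the Python sketch below only illustrates the two ingredients it names: a collaborative representation classifier (CRC), which codes a test sample over the full training dictionary with l2-regularized least squares and scores each class by its reconstruction residual, and a weighted sum of the two modality-specific probability vectors. The function names, the residual-to-probability mapping, and the example weights are illustrative assumptions; how the paper actually sets the weights (and the skeleton-based view estimation that informs them) is not reproduced here.

```python
import numpy as np

def crc_class_probabilities(D, labels, y, lam=1e-3):
    """CRC sketch: code the test sample y over the full training
    dictionary D (one column per training sample) via l2-regularized
    least squares, then score each class by the reconstruction
    residual of its own columns. Smaller residual = more likely class."""
    # Ridge coding: alpha = (D^T D + lam I)^{-1} D^T y
    alpha = np.linalg.solve(D.T @ D + lam * np.eye(D.shape[1]), D.T @ y)
    classes = np.unique(labels)
    residuals = np.array(
        [np.linalg.norm(y - D[:, labels == c] @ alpha[labels == c])
         for c in classes])
    # Illustrative residual-to-probability mapping (inverse residual, normalized)
    scores = 1.0 / (residuals + 1e-12)
    return classes, scores / scores.sum()

def weighted_fusion(p_depth, p_inertial, w_depth=0.6, w_inertial=0.4):
    """Weighted sum of the two classifiers' probability vectors.
    The weights here are placeholders, not the paper's values."""
    return w_depth * p_depth + w_inertial * p_inertial

# Hypothetical usage with synthetic features: 2 actions, 10 training samples each
rng = np.random.default_rng(0)
labels = np.repeat([0, 1], 10)
D_depth = rng.standard_normal((50, 20))     # depth feature dictionary (50-dim)
D_inertial = rng.standard_normal((30, 20))  # inertial feature dictionary (30-dim)
y_depth = rng.standard_normal(50)
y_inertial = rng.standard_normal(30)

classes, p_d = crc_class_probabilities(D_depth, labels, y_depth)
_, p_i = crc_class_probabilities(D_inertial, labels, y_inertial)
action = classes[np.argmax(weighted_fusion(p_d, p_i))]
```

Setting w_depth = w_inertial = 0.5 recovers the equally weighted fusion of the earlier framework [10]; the extension described here amounts to choosing unequal weights per test sample.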
© (2017) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Chen Chen, Huiyan Hao, Roozbeh Jafari, and Nasser Kehtarnavaz, "Weighted fusion of depth and inertial data to improve view invariance for real-time human action recognition," Proc. SPIE 10223, Real-Time Image and Video Processing 2017, 1022307 (1 May 2017); https://doi.org/10.1117/12.2261823