From Event: SPIE Commercial + Scientific Sensing and Imaging, 2017
This paper presents an extension to our previously developed fusion framework involving a depth camera and an inertial sensor in order to improve its view invariance for real-time human action recognition applications. A computationally efficient view estimation based on skeleton joints is employed to select the most relevant depth training data when recognizing test samples. Two collaborative representation classifiers, one for depth features and one for inertial features, are appropriately weighted to generate a decision-making probability. Experimental results on a multi-view human action dataset show that this weighted extension improves the recognition performance by about 5% over the equally weighted fusion deployed in our previous fusion framework.
Chen Chen, Huiyan Hao, Roozbeh Jafari, and Nasser Kehtarnavaz, "Weighted fusion of depth and inertial data to improve view invariance for real-time human action recognition," Proc. SPIE 10223, Real-Time Image and Video Processing 2017, 1022307 (presented 10 April 2017; published 1 May 2017); https://doi.org/10.1117/12.2261823.
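As a rough illustration of the decision-level fusion described in the abstract, the sketch below combines class probabilities from two collaborative representation classifiers (CRC), one per modality, with a fixed weight. The function names, the ridge-regularized CRC coding, the softmax conversion of class residuals to probabilities, and the example weight value are all illustrative assumptions, not the authors' actual implementation, which additionally selects view-relevant depth training data via skeleton-based view estimation.

```python
import numpy as np

def crc_class_probabilities(D, labels, y, lam=0.01):
    """Collaborative representation classifier (CRC) sketch: code the test
    feature y over the training dictionary D (columns = training samples),
    then score each class by its reconstruction residual."""
    n = D.shape[1]
    # Ridge-regularized coding: x = (D^T D + lam*I)^{-1} D^T y
    x = np.linalg.solve(D.T @ D + lam * np.eye(n), D.T @ y)
    classes = np.unique(labels)
    residuals = np.array([np.linalg.norm(y - D[:, labels == c] @ x[labels == c])
                          for c in classes])
    # Smaller residual -> higher probability; a softmax over negated residuals
    # (shifted for numerical stability) is an assumed conversion.
    p = np.exp(-(residuals - residuals.min()))
    return classes, p / p.sum()

def weighted_fusion(p_depth, p_inertial, w_depth=0.6):
    """Weighted decision-level fusion; the 0.6 weight is a placeholder."""
    return w_depth * p_depth + (1.0 - w_depth) * p_inertial

# Hypothetical usage with random features, purely to show the data flow.
rng = np.random.default_rng(0)
labels = np.repeat(np.arange(3), 10)        # 3 actions, 10 samples each
D_depth = rng.standard_normal((50, 30))     # depth feature dictionary
D_inertial = rng.standard_normal((20, 30))  # inertial feature dictionary
classes, p_d = crc_class_probabilities(D_depth, labels, rng.standard_normal(50))
_, p_i = crc_class_probabilities(D_inertial, labels, rng.standard_normal(20))
print("predicted action:", classes[np.argmax(weighted_fusion(p_d, p_i))])
```

The fused vector remains a valid probability distribution because the weights sum to one; the paper's contribution is in choosing those weights rather than fixing them equally.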