Hand gesture recognition in confined spaces with partial observability and occultation constraints
17 May 2016
Abstract
Human activity detection and recognition capabilities have broad applications for military and homeland security. These tasks are complicated, however, especially when multiple persons perform concurrent activities in confined spaces that impose significant obstruction, occultation, and observability uncertainty. In this paper, our primary contribution is a dedicated taxonomy and kinematic ontology developed for in-vehicle group human activities (IVGA). Second, we describe a set of hand-observable patterns that represent certain IVGA examples. Third, we propose two classifiers for hand gesture recognition and compare their performance individually and jointly. Finally, we present a variant of the Hidden Markov Model (HMM) for Bayesian tracking, recognition, and annotation of hand motions, which enables spatiotemporal inference for human group activity perception and understanding. To validate our approach, we employ both synthetic video imagery (graphical data rendered in a virtual environment) and video imagery of real physical environments to verify the performance of the hand gesture classifiers, measuring their efficiency and effectiveness under the proposed HMM for tracking and interpreting dynamic spatiotemporal IVGA scenarios.
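The abstract describes decoding hand-motion sequences with a Hidden Markov Model. As an illustration only (the paper's actual model, states, and probabilities are not given in this abstract), the sketch below shows Viterbi decoding over hypothetical gesture states from hypothetical observation symbols; every state name and probability here is an invented example, not a value from the paper.

```python
# Illustrative sketch: minimal Viterbi decoding for a discrete HMM,
# in the spirit of HMM-based hand-motion recognition.
# All states, symbols, and probabilities are hypothetical examples.

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Return the most likely hidden-state sequence for the observations."""
    # V[t][s] = (probability of best path ending in state s at time t, predecessor)
    V = [{s: (start_p[s] * emit_p[s][obs[0]], None) for s in states}]
    for t in range(1, len(obs)):
        V.append({})
        for s in states:
            prob, prev = max(
                (V[t - 1][p][0] * trans_p[p][s] * emit_p[s][obs[t]], p)
                for p in states
            )
            V[t][s] = (prob, prev)
    # Backtrack from the most probable final state.
    last = max(states, key=lambda s: V[-1][s][0])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(V[t][path[-1]][1])
    return list(reversed(path))

# Hypothetical hand-gesture states and coarse motion observations.
states = ("reach", "grasp", "retract")
start_p = {"reach": 0.6, "grasp": 0.2, "retract": 0.2}
trans_p = {
    "reach":   {"reach": 0.5, "grasp": 0.4, "retract": 0.1},
    "grasp":   {"reach": 0.1, "grasp": 0.5, "retract": 0.4},
    "retract": {"reach": 0.3, "grasp": 0.1, "retract": 0.6},
}
emit_p = {
    "reach":   {"moving": 0.7, "still": 0.3},
    "grasp":   {"moving": 0.2, "still": 0.8},
    "retract": {"moving": 0.6, "still": 0.4},
}

print(viterbi(["moving", "still", "moving"], states, start_p, trans_p, emit_p))
# → ['reach', 'grasp', 'retract']
```

A full gesture-recognition pipeline would feed the HMM with observation symbols produced by the hand-gesture classifiers rather than hand-coded strings; this fragment only shows the decoding step.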
© (2016) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Amir Shirkhodaie, Alex Chan, and Shuowen Hu, "Hand gesture recognition in confined spaces with partial observability and occultation constraints", Proc. SPIE 9842, Signal Processing, Sensor/Information Fusion, and Target Recognition XXV, 984214 (17 May 2016); https://doi.org/10.1117/12.2226024
Proceedings paper, 12 pages.

