The Johns Hopkins University multimodal dataset for human action recognition
Thomas S. Murray, Daniel R. Mendat, Philippe O. Pouliquen, and Andreas G. Andreou
Abstract
The Johns Hopkins University MultiModal Action (JHUMMA) dataset contains twenty-one actions recorded with four sensor systems spanning three modalities. The data were collected with an acquisition system comprising three independent active sonar devices operating at three different frequencies and a Microsoft Kinect sensor that provides both RGB and depth data. We have developed algorithms for human action recognition from active acoustics and provide baseline benchmark recognition results.
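Because the capture rig combines three independently pinging sonars with a Kinect RGB and depth stream, a common first processing step is placing all streams on a shared timeline. Below is a minimal Python sketch of nearest-timestamp alignment; the stream names, rates, and layout are hypothetical stand-ins, not the dataset's actual release format.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic capture timestamps in seconds: Kinect at roughly 30 fps,
# each sonar pinging on its own (made-up) schedule.
kinect_t = np.arange(0.0, 10.0, 1.0 / 30.0)
sonar_t = {name: np.sort(rng.uniform(0.0, 10.0, size=200))
           for name in ("sonar_a", "sonar_b", "sonar_c")}

def nearest_indices(ref, query):
    """Index of the closest value in sorted array `ref` for each `query` time."""
    idx = np.clip(np.searchsorted(ref, query), 1, len(ref) - 1)
    left, right = ref[idx - 1], ref[idx]
    return np.where(query - left <= right - query, idx - 1, idx)

# For every Kinect frame, pick the nearest ping from each sonar stream,
# yielding per-frame multimodal tuples for downstream recognition.
aligned = {name: nearest_indices(t, kinect_t) for name, t in sonar_t.items()}
print({name: idx[:5] for name, idx in aligned.items()})

Nearest-timestamp matching is the simplest choice; interpolation or windowed pooling may be preferable when the ping rate and frame rate differ substantially.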
© 2015 Society of Photo-Optical Instrumentation Engineers (SPIE).
Thomas S. Murray, Daniel R. Mendat, Philippe O. Pouliquen, and Andreas G. Andreou, "The Johns Hopkins University multimodal dataset for human action recognition," Proc. SPIE 9461, Radar Sensor Technology XIX; and Active and Passive Signatures VI, 94611U (21 May 2015); https://doi.org/10.1117/12.2189349
Proceedings paper, 16 pages.

