Paper
27 March 2024

Comparative study of deep learning models for action recognition based on skeleton data
Zhanhao Liang, Kadyrkulova Kyial Kudayberdievna, Batyrkanov Jenish Isakunovich, Zhantu Liang
Proceedings Volume 13105, International Conference on Computer Graphics, Artificial Intelligence, and Data Processing (ICCAID 2023); 131052P (2024) https://doi.org/10.1117/12.3026322
Event: 3rd International Conference on Computer Graphics, Artificial Intelligence, and Data Processing (ICCAID 2023), 2023, Qingdao, China
Abstract
In this study, we conducted a comparative analysis of three deep learning models—CNNs (90% accuracy), LSTMs (92% accuracy), and RNNs (95% accuracy)—for skeleton-based action recognition. The research focused on evaluating each model's ability to accurately classify a variety of human actions using a Kaggle dataset. Our findings revealed CNNs' strength in spatial recognition, LSTMs' proficiency in temporal dynamics, and RNNs' overall superiority in sequential processing. The study emphasizes the importance of model selection in action recognition tasks and lays groundwork for future exploration into hybrid models.
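To illustrate the sequential processing that the abstract credits for the RNN's strong performance, the following is a minimal, hypothetical sketch (not the paper's code): a vanilla tanh RNN runs over a skeleton sequence frame by frame and classifies the action from its final hidden state. All shapes and weights here are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch only -- not the authors' model. Shapes are assumptions:
# T frames, each a flattened vector of J joints x 3 coordinates.
rng = np.random.default_rng(0)

T, J, C = 30, 25, 3          # frames, joints, (x, y, z) per joint
D = J * C                    # per-frame input dimension
H, K = 64, 10                # hidden size, number of action classes

# Randomly initialised parameters stand in for trained weights.
W_xh = rng.normal(0, 0.1, (D, H))   # input -> hidden
W_hh = rng.normal(0, 0.1, (H, H))   # hidden -> hidden (the recurrence)
W_hy = rng.normal(0, 0.1, (H, K))   # hidden -> class scores

def rnn_classify(sequence):
    """Run a tanh RNN over (T, D) skeleton frames; return class probabilities."""
    h = np.zeros(H)
    for x_t in sequence:                     # process frames sequentially
        h = np.tanh(x_t @ W_xh + h @ W_hh)   # carry temporal context forward
    logits = h @ W_hy                        # classify from the final state
    exp = np.exp(logits - logits.max())      # numerically stable softmax
    return exp / exp.sum()

probs = rnn_classify(rng.normal(size=(T, D)))
```

The per-frame recurrence is what distinguishes this family of models from a CNN, which would instead convolve over the joint layout of each frame to capture spatial structure.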
(2024) Published by SPIE. Downloading of the abstract is permitted for personal use only.
Zhanhao Liang, Kadyrkulova Kyial Kudayberdievna, Batyrkanov Jenish Isakunovich, and Zhantu Liang "Comparative study of deep learning models for action recognition based on skeleton data", Proc. SPIE 13105, International Conference on Computer Graphics, Artificial Intelligence, and Data Processing (ICCAID 2023), 131052P (27 March 2024); https://doi.org/10.1117/12.3026322
KEYWORDS: Data modeling, Action recognition, Deep learning, Performance modeling, Education and training, Motion models, Image processing