Recognizing human actions in video sequences is one of the key problems on the path toward developing and deploying computer vision systems in various spheres of life. Additional sources of information (such as depth sensors and thermal sensors) make it possible to extract more informative features and thus increase the reliability and stability of recognition. In this research, we focus on how to combine multi-level decomposition of depth and color information to improve on state-of-the-art action recognition methods. We present an algorithm that combines information from visible-light cameras and depth sensors, based on deep learning and the PLIP model (parameterized model of logarithmic image processing), which is close to human visual perception. Experimental results on the test dataset confirm the high efficiency of the proposed action recognition method compared to state-of-the-art methods that use only a single image modality (visible or depth).
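The paper itself gives the exact fusion formulation; as a minimal illustrative sketch only (not the authors' implementation: the weight w, the parameter gamma, and all function names here are assumptions), a PLIP-style weighted fusion of two aligned color and depth planes could look like this in NumPy. PLIP arithmetic keeps results within a bounded gray-tone range, which is what ties the model to human visual perception.

import numpy as np

def plip_add(g1, g2, gamma=256.0):
    # PLIP addition: result stays bounded by gamma for inputs in [0, gamma]
    return g1 + g2 - (g1 * g2) / gamma

def plip_scale(c, g, gamma=256.0):
    # PLIP scalar multiplication of a gray-tone image by factor c
    return gamma - gamma * (1.0 - g / gamma) ** c

def fuse_color_depth(color, depth, w=0.5, mu=255.0, gamma=256.0):
    # Map intensities to gray-tone functions (g = mu - f), fuse in the
    # PLIP domain with weights w and (1 - w), then map back to intensity.
    g_c, g_d = mu - color, mu - depth
    fused = plip_add(plip_scale(w, g_c, gamma),
                     plip_scale(1.0 - w, g_d, gamma),
                     gamma)
    return np.clip(mu - fused, 0.0, mu)

# Example: fuse a grayscale frame with an aligned depth map
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    color = rng.uniform(0, 255, (4, 4))
    depth = rng.uniform(0, 255, (4, 4))
    print(fuse_color_depth(color, depth, w=0.6))

In the method described by the abstract, such a fusion would be applied across multiple decomposition levels of the two modalities rather than to raw frames; the single-level version above is shown only to make the PLIP arithmetic concrete.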
A. Zelensky, V. Voronin, M. Zhdanova, N. Gapon, O. Tokareva, and E. Semenishchev, "Multi-level deep learning depth and color fusion for action recognition," Proc. SPIE 12138, Optics, Photonics and Digital Technologies for Imaging Applications VII, 121380Y (17 May 2022); https://doi.org/10.1117/12.2626000