13 May 2019 Human stress detection from the speech in danger situation
Abstract
Besides facial expressions and gestures, speech remains the main channel of communication in everyday human life. In addition to its linguistic content, the speech signal carries information about the speaker's state: gender, age, and emotional condition can all be extracted from spoken utterances. This research focuses on classifying the speaker's emotional state, stress in particular. To this end, we created a speech database of emergency phone calls containing recordings from the 112 emergency line of the Integrated Rescue System (IRS) of the Czech Republic, designed for detecting stress in the human voice. Because stress is detected relative to a neutral (resting) state, the database was divided into neutral speech and stressed speech. The neutral subgroup consists of voice recordings of IRS operators; the stress subgroup comprises recordings of people in danger. We deliberately selected events with strong stress stimuli, such as car accidents, domestic violence, and life-threatening situations. The speech signal is then pre-processed and analyzed for feature extraction, and the resulting feature vectors serve as the classifier input. Classical classification methods such as the Support Vector Machine (SVM) and k-Nearest Neighbors (k-NN), as well as newer artificial-intelligence methods such as Convolutional Neural Networks (CNN), are used to detect and recognize human stress. The applications of the achieved results are broad, ranging from phone services through Smart Health to security analysis.
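The feature-extraction-then-classification pipeline described in the abstract can be illustrated with a minimal k-NN sketch. The feature vectors, values, and labels below are purely hypothetical stand-ins (not taken from the paper's IRS database), assuming per-utterance features such as mean pitch and energy:

```python
from collections import Counter
import math

def euclidean(a, b):
    # Euclidean distance between two feature vectors
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def knn_classify(train, query, k=3):
    """Label a query feature vector by majority vote among its k nearest
    training vectors. train is a list of (feature_vector, label) pairs."""
    neighbors = sorted(train, key=lambda item: euclidean(item[0], query))[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

# Hypothetical per-utterance features (mean pitch in Hz, normalized energy);
# stressed callers are assumed to show raised pitch and energy.
train = [
    ((120.0, 0.30), "neutral"),
    ((115.0, 0.25), "neutral"),
    ((130.0, 0.35), "neutral"),
    ((210.0, 0.80), "stress"),
    ((220.0, 0.75), "stress"),
    ((200.0, 0.85), "stress"),
]

print(knn_classify(train, (205.0, 0.70)))  # → stress
print(knn_classify(train, (125.0, 0.28)))  # → neutral
```

In practice, an SVM or CNN would replace the vote-based decision rule, and the feature vectors would come from the pre-processing stage rather than hand-picked values.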
© (2019) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Pavol Partila, Jaromir Tovarek, Jan Rozhon, and Jakub Jalowiczor "Human stress detection from the speech in danger situation", Proc. SPIE 10993, Mobile Multimedia/Image Processing, Security, and Applications 2019, 109930U (13 May 2019); https://doi.org/10.1117/12.2521405
PROCEEDINGS
7 PAGES

