Emotion recognition model based on the Dempster–Shafer evidence theory
Qihua Xu, Chunyue Zhang, Bo Sun
Abstract

Automatic emotion recognition for video clips has become a popular area of research in recent years. Previous studies have explored emotion recognition through monomodal approaches, such as voice, text, facial expression, and physiological information. We focus on the complementarity of these modalities and construct an automatic emotion recognition model based on deep learning technology and a multimodal fusion strategy. In this model, visual, audio, and text features are extracted from the video clips. A decision-level fusion strategy, based on the theory of evidence, is proposed to fuse the multiple classification results. To address the problem of evidence conflict in evidence theory, we propose a compatibility algorithm that corrects conflicting evidence using a similarity matrix of the evidence. This approach is shown to improve the accuracy of emotion recognition.
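The decision-level fusion described in the abstract rests on Dempster's rule of combination, which merges the mass functions produced by the per-modality classifiers. As a minimal sketch (illustrative names only, not the authors' implementation, and omitting the paper's similarity-matrix correction for conflicting evidence), the basic rule can be written as:

```python
from itertools import product


def combine_dempster(m1, m2):
    """Combine two Dempster-Shafer mass functions.

    m1, m2: dicts mapping frozenset focal elements to masses (each sums to 1).
    Returns the combined mass function, renormalized by 1 - K,
    where K is the total conflicting mass.
    """
    combined = {}
    conflict = 0.0  # K: mass assigned to the empty intersection
    for (b, mb), (c, mc) in product(m1.items(), m2.items()):
        inter = b & c
        if inter:
            combined[inter] = combined.get(inter, 0.0) + mb * mc
        else:
            conflict += mb * mc
    if conflict >= 1.0:
        raise ValueError("Total conflict: evidence cannot be combined")
    return {a: m / (1.0 - conflict) for a, m in combined.items()}


# Example: fuse a (hypothetical) visual and an audio classifier's beliefs
# over the emotions {happy, sad}.
visual = {frozenset({"happy"}): 0.7, frozenset({"happy", "sad"}): 0.3}
audio = {frozenset({"happy"}): 0.6, frozenset({"sad"}): 0.4}
fused = combine_dempster(visual, audio)
```

High conflict (K close to 1) is exactly the failure mode the paper's compatibility algorithm targets: the renormalization by 1 − K then amplifies whatever little agreeing mass remains, which can produce counterintuitive fused beliefs.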

© 2020 SPIE and IS&T 1017-9909/2020/$28.00
Qihua Xu, Chunyue Zhang, and Bo Sun "Emotion recognition model based on the Dempster–Shafer evidence theory," Journal of Electronic Imaging 29(2), 023018 (2 April 2020). https://doi.org/10.1117/1.JEI.29.2.023018
Received: 26 November 2019; Accepted: 23 March 2020; Published: 2 April 2020
CITATIONS
Cited by 10 scholarly publications.
KEYWORDS
Feature extraction
Video
Databases
Sun
Convolution
Facial recognition systems
Neural networks
