Multimodal Emotion Recognition Using Non-Inertial Loss Function
-
- Orgil Jargalsaikhan
- Graduate School of Technology, Industrial and Social Sciences, Tokushima University
-
- Karungaru Stephen
- Graduate School of Technology, Industrial and Social Sciences, Tokushima University
-
- Terada Kenji
- Graduate School of Technology, Industrial and Social Sciences, Tokushima University
-
- Shagdar Ganbold
- Graduate School of Information and Communication Technology, Mongolian University of Science and Technology
Description
Automatic understanding of human emotion from audiovisual signals in the wild is extremely challenging. Latent continuous dimensions can be used to analyze the emotional states, behaviors, and reactions that people display in real-world settings, and Valence-Arousal combinations are a well-known and effective representation of emotion. In this paper, a new Non-inertial loss function is proposed for training emotion recognition deep learning models. It is evaluated in in-the-wild settings using four types of candidate networks with different pipelines and sequence lengths, and compared with the Concordance Correlation Coefficient (CCC) and Mean Squared Error (MSE) losses commonly used for training. To demonstrate its efficiency and stability on both continuous and non-continuous input data, experiments were performed on the Aff-Wild dataset, and encouraging results were obtained.
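The Non-inertial loss itself is defined in the paper and is not reproduced here. As a reference point only, the sketch below shows the two baseline losses the abstract names, MSE and the CCC-based loss (1 − CCC), as they are commonly computed for valence/arousal regression; the function names and sample values are illustrative, not taken from the paper.

```python
import numpy as np

def mse_loss(pred, target):
    """Mean squared error between predicted and ground-truth annotations."""
    pred, target = np.asarray(pred, dtype=float), np.asarray(target, dtype=float)
    return np.mean((pred - target) ** 2)

def ccc_loss(pred, target, eps=1e-8):
    """1 - Concordance Correlation Coefficient (CCC = 1 means perfect agreement),
    a common training loss for continuous valence/arousal prediction."""
    pred, target = np.asarray(pred, dtype=float), np.asarray(target, dtype=float)
    mu_p, mu_t = pred.mean(), target.mean()
    var_p, var_t = pred.var(), target.var()
    cov = np.mean((pred - mu_p) * (target - mu_t))
    ccc = 2.0 * cov / (var_p + var_t + (mu_p - mu_t) ** 2 + eps)
    return 1.0 - ccc

# Hypothetical per-sequence valence values, for illustration only.
valence_pred = [0.1, 0.3, 0.5, 0.4]
valence_true = [0.2, 0.35, 0.45, 0.5]
print(mse_loss(valence_pred, valence_true), ccc_loss(valence_pred, valence_true))
```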
Journal
-
- Journal of Signal Processing
-
Journal of Signal Processing 25 (2), 73-85, 2021-03-01
Research Institute of Signal Processing, Japan
Details
-
- CRID
- 1390850247496582016
-
- NII Article ID
- 130007993257
-
- ISSN
- 1880-1013
- 1342-6230
-
- Text Lang
- en
-
- Data Source
-
- JaLC
- Crossref
- CiNii Articles
- OpenAIRE
-
- Abstract License Flag
- Disallowed