Multimodal Emotion Recognition Using Non-Inertial Loss Function

  • Orgil Jargalsaikhan
    Graduate School of Technology, Industrial and Social Sciences, Tokushima University
  • Karungaru Stephen
    Graduate School of Technology, Industrial and Social Sciences, Tokushima University
  • Terada Kenji
    Graduate School of Technology, Industrial and Social Sciences, Tokushima University
  • Shagdar Ganbold
    Graduate School of Information and Communication Technology, Mongolian University of Science and Technology

Description

Automatic understanding of human emotion in the wild from audiovisual signals is extremely challenging. Latent continuous dimensions can be used to analyze the emotional states, behaviors, and reactions that people display in real-world settings; in particular, combinations of Valence and Arousal constitute a well-known and effective representation of emotion. In this paper, a new Non-inertial loss function is proposed for training deep learning models for emotion recognition. It is evaluated in the wild on four candidate networks with different pipelines and sequence lengths, and compared to the Concordance Correlation Coefficient (CCC) and Mean Squared Error (MSE) losses commonly used for training. To demonstrate its efficiency and stability on both continuous and non-continuous input data, experiments were performed on the Aff-Wild dataset, and encouraging results were obtained.
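
For reference, the CCC baseline mentioned above is typically turned into a training objective as 1 - CCC. The short PyTorch sketch below illustrates that baseline for a single valence or arousal sequence; it is a hypothetical helper (the name ccc_loss and the example tensors are assumptions), not the authors' code and not the proposed Non-inertial loss.

    import torch
    import torch.nn.functional as F

    def ccc_loss(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        """1 - Concordance Correlation Coefficient for one predicted
        valence or arousal sequence (hypothetical helper, for illustration)."""
        pred_mean, target_mean = pred.mean(), target.mean()
        pred_var = pred.var(unbiased=False)
        target_var = target.var(unbiased=False)
        covariance = ((pred - pred_mean) * (target - target_mean)).mean()
        ccc = 2.0 * covariance / (
            pred_var + target_var + (pred_mean - target_mean) ** 2
        )
        return 1.0 - ccc

    # Usage: compare the CCC-based loss with plain MSE on a random sequence.
    pred, target = torch.randn(128), torch.randn(128)
    print(ccc_loss(pred, target).item(), F.mse_loss(pred, target).item())
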

Published in

  • 信号処理 (Journal of Signal Processing), 25 (2), 73-85, 2021-03-01

    信号処理学会 (Research Institute of Signal Processing, Japan)
