Emotion Recognition using Mel-Frequency Cepstral Coefficients

Abstract

In this paper, we propose a new approach to emotion recognition. Most current emotion recognition algorithms rely on prosodic features, but their accuracy is not sufficient. We therefore focus on the phonetic features of speech and, in particular, describe the effectiveness of Mel-Frequency Cepstral Coefficients (MFCCs) as features for emotion recognition. We concentrate on the precise classification of MFCC feature vectors rather than on their dynamic behavior over an utterance. To realize this approach, the proposed algorithm employs multi-template emotion classification of the analysis frames. Experimental evaluations show that the proposed algorithm achieves 66.4% accuracy in speaker-independent recognition of four emotions. This accuracy is higher than that obtained with conventional prosody-based and MFCC-based emotion recognition algorithms, which confirms the potential of the proposed algorithm.
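
To make the frame-level, multi-template idea in the abstract concrete, the following Python sketch extracts MFCC vectors per analysis frame, clusters each emotion's training frames into a set of template vectors, and labels a test utterance by nearest-template voting over its frames. This is only an illustrative reconstruction under assumed tools (librosa, scikit-learn) and assumed parameters (n_mfcc, n_templates), not the authors' exact algorithm or settings.

    import numpy as np
    import librosa
    from sklearn.cluster import KMeans

    def extract_mfcc_frames(path, n_mfcc=12, sr=16000):
        """Return frame-level MFCC vectors for one utterance (frames x n_mfcc)."""
        y, sr = librosa.load(path, sr=sr)
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)  # (n_mfcc, frames)
        return mfcc.T

    def build_templates(training_frames, n_templates=8):
        """Cluster each emotion's pooled training frames into template vectors."""
        templates = {}
        for emotion, frame_list in training_frames.items():
            frames = np.vstack(frame_list)                      # all frames of this emotion
            km = KMeans(n_clusters=n_templates, n_init=10).fit(frames)
            templates[emotion] = km.cluster_centers_            # (n_templates, n_mfcc)
        return templates

    def classify_utterance(frames, templates):
        """Assign each frame to its nearest template's emotion, then majority-vote."""
        votes = {emotion: 0 for emotion in templates}
        for frame in frames:
            best_emotion, best_dist = None, np.inf
            for emotion, temps in templates.items():
                d = np.min(np.linalg.norm(temps - frame, axis=1))
                if d < best_dist:
                    best_emotion, best_dist = emotion, d
            votes[best_emotion] += 1
        return max(votes, key=votes.get)

Usage would be to call extract_mfcc_frames on each labeled training utterance, group the results by emotion into training_frames, build the templates once, and then classify new utterances frame by frame; the per-frame voting reflects the abstract's emphasis on classifying individual MFCC vectors rather than modeling their trajectory over the utterance.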

Published in

  • 自然言語処理 (Journal of Natural Language Processing)

    自然言語処理 14 (4), 83-96, 2007

    The Association for Natural Language Processing (一般社団法人 言語処理学会)
