Attention-Based Dense LSTM for Speech Emotion Recognition

  • XIE Yue
    School of Information Science and Engineering, Southeast University
  • LIANG Ruiyu
    School of Communication Engineering, Nanjing Institute of Technology
  • LIANG Zhenlin
    School of Information Science and Engineering, Southeast University
  • ZHAO Li
    School of Information Science and Engineering, Southeast University


Description

Despite the widespread use of deep learning for speech emotion recognition, such models are severely restricted by information loss in the higher layers of deep neural networks, as well as by the degradation problem. To utilize information efficiently and address degradation, an attention-based dense long short-term memory (LSTM) network is proposed for speech emotion recognition. LSTM networks, which are able to process time series such as speech, are constructed, and attention-based dense connections are introduced into them: weight coefficients are added to the skip-connections of each layer to distinguish the differences in emotional information between layers and to prevent redundant information from the bottom layers from interfering with the effective information in the top layers. Experiments demonstrate that the proposed method improves recognition performance by 12% and 7% on the eNTERFACE and IEMOCAP corpora, respectively.
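
The sketch below is a minimal illustration (not the authors' implementation) of the idea described in the abstract: a stack of LSTM layers in which each layer receives an attention-weighted sum of the outputs of all earlier layers, so that learned weight coefficients on the skip-connections can emphasize or suppress information coming from the lower layers. The layer count, feature dimension, pooling scheme, and softmax-normalized scalar weights are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn


class AttentionDenseLSTM(nn.Module):
    """Sketch of an LSTM stack with attention-weighted dense skip-connections."""

    def __init__(self, input_dim=40, hidden_dim=128, num_layers=3, num_emotions=4):
        super().__init__()
        self.layers = nn.ModuleList()
        for i in range(num_layers):
            in_dim = input_dim if i == 0 else hidden_dim
            self.layers.append(nn.LSTM(in_dim, hidden_dim, batch_first=True))
        # One learnable scalar per skip-connection: layer i > 0 attends over
        # the outputs of all i preceding layers (illustrative assumption).
        self.skip_weights = nn.ParameterList(
            [nn.Parameter(torch.zeros(i)) for i in range(1, num_layers)]
        )
        self.classifier = nn.Linear(hidden_dim, num_emotions)

    def forward(self, x):
        # x: (batch, time, input_dim) frame-level acoustic features
        outputs = []
        h, _ = self.layers[0](x)
        outputs.append(h)
        for i, lstm in enumerate(self.layers[1:]):
            # Attention-weighted dense connection: normalize the per-layer
            # coefficients and combine all earlier layers' outputs.
            alpha = torch.softmax(self.skip_weights[i], dim=0)
            dense_in = sum(a * o for a, o in zip(alpha, outputs))
            h, _ = lstm(dense_in)
            outputs.append(h)
        # Mean-pool over time and classify (pooling choice is an assumption).
        return self.classifier(outputs[-1].mean(dim=1))


# Usage: a batch of 8 utterances, 200 frames each, 40-dim features.
model = AttentionDenseLSTM()
logits = model(torch.randn(8, 200, 40))
print(logits.shape)  # torch.Size([8, 4])
```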
