Feature Acquisition and Analysis for Facial Expression Recognition Using Convolutional Neural Networks

  • Nishime Taiki
    Graduate School of Information Engineering, University of the Ryukyus
  • Endo Satoshi
    School of Information Engineering, University of the Ryukyus
  • Toma Naruaki
    School of Information Engineering, University of the Ryukyus
  • Yamada Koji
    School of Information Engineering, University of the Ryukyus
  • Akamine Yuhei
    School of Information Engineering, University of the Ryukyus

Bibliographic Information

Other Title
  • 畳み込みニューラルネットワークを用いた表情表現の獲得と顔特徴量の分析

Abstract

<p>Facial expressions play a role in communication as important as words. Facial expression recognition by humans is difficult to judge uniquely, because recognition varies with individual differences and subjective perception. It is therefore difficult to evaluate the reliability of a result from recognition accuracy alone, and analysis that explains the results and the features learned by a Convolutional Neural Network (CNN) becomes important. In this study, we carried out facial expression recognition from facial expression images using a CNN, and analysed the network to understand its learned features and prediction results. The emotions we focused on are “happiness”, “sadness”, “surprise”, “anger”, “disgust”, “fear” and “neutral”. Using 32,286 facial expression images, we obtained an overall emotion recognition score of about 57%; for two emotions (happiness, surprise) the recognition score exceeded 70%, but for anger and fear it was below 50%. In analysing the CNN, we focused on the learning process and on the input and intermediate layers. Analysis of the learning progress confirmed that, as training data increased, emotions became recognisable in the order “happiness”, “surprise”, “neutral”, “anger”, “disgust”, “sadness” and “fear”. From the analysis of the input and intermediate layers, we confirmed that features of the eyes and mouth strongly influence facial expression recognition, and that intermediate-layer neurons exhibit activation patterns corresponding to whole facial expressions rather than responding to partial facial features. From these results, we conclude that the CNN learns partial features of the eyes and mouth from the input and recognises facial expressions using hidden-layer units whose activation areas correspond to each facial expression.</p>
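The pipeline the abstract describes (convolutional feature extraction over a face image, followed by classification into seven emotion categories) can be sketched minimally as below. This is not the authors' code: the network shape, the 48×48 grayscale input, the single 3×3 filter, and all weights are illustrative assumptions, using only NumPy so the forward pass is explicit.

```python
import numpy as np

# The seven emotion classes studied in the paper.
EMOTIONS = ["happiness", "sadness", "surprise", "anger", "disgust", "fear", "neutral"]

def conv2d(image, kernel):
    """Valid 2-D cross-correlation of a grayscale image with one filter."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def softmax(z):
    """Numerically stable softmax over the class logits."""
    e = np.exp(z - z.max())
    return e / e.sum()

def forward(image, kernel, weights, bias):
    """Conv -> ReLU -> global average pool -> linear -> softmax."""
    fmap = np.maximum(conv2d(image, kernel), 0.0)  # ReLU feature map
    pooled = fmap.mean()                           # crude global pooling
    logits = weights * pooled + bias               # one logit per emotion
    return softmax(logits)

# Random stand-ins for a face crop and untrained parameters.
rng = np.random.default_rng(0)
img = rng.random((48, 48))            # assumed 48x48 grayscale face crop
kern = rng.standard_normal((3, 3))    # one assumed 3x3 learned filter
w = rng.standard_normal(7)
b = np.zeros(7)
probs = forward(img, kern, w, b)
print("predicted:", EMOTIONS[int(np.argmax(probs))])
```

A real model of this kind stacks many such filters and layers and trains them by backpropagation; the point here is only the data flow from pixels to a 7-way probability distribution, which is what the paper's intermediate-layer analysis inspects.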
