Automatic Musical Composition System Based on Emotion Recognition by Face Images

  • MAEDA Yoichiro
    College of Information Science and Engineering, Ritsumeikan University
  • FUJITA Hibiki
    Department of Sound Director and Visual Art Production, Institute of Sound Arts
  • KAMEI Katsuari
    College of Information Science and Engineering, Ritsumeikan University
  • COOPER Eric W.
    College of Information Science and Engineering, Ritsumeikan University

Bibliographic Information

Other Title
  • 顔画像による情動認識に基づくBGM自動作曲システム

Description

The effect of music on human emotion has been studied for a long time. Research on the emotions evoked by music, such as the feelings and impressions experienced while listening, has become established as a research field in its own right. However, while many studies have examined how music induces emotion, little research has addressed the reverse direction: generating music from an emotion.

In this study, we therefore focus on facial expressions as a representation of emotion and aim to generate music that matches the emotion recognized from a facial image. For example, the system automatically generates bright, pleasant music from a laughing face image and dark, sad music from a crying face image. Russell's circumplex model is used for emotion recognition, and Hevner's circular scale is used to generate music corresponding to the recognized emotion. With such a system it would become possible, for example, to create suitable background music (BGM) for a scene in film production using only the actor's face image. We constructed the system described above and confirmed its effectiveness through a Kansei evaluation experiment.
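
As an illustration of the mapping the abstract describes, the sketch below shows one way a recognized emotion, expressed as a valence-arousal point on Russell's circumplex model, could be assigned to one of Hevner's eight adjective clusters, which would then parameterize the generated music. The cluster angles and the helper function `hevner_cluster` are illustrative assumptions, not the authors' implementation; in the paper the emotion point itself would come from facial-expression recognition, a step omitted here.

```python
# Illustrative sketch only (assumed design, not the paper's implementation):
# map a (valence, arousal) point on Russell's circumplex model of affect to
# one of Hevner's eight adjective clusters. The chosen cluster could then
# drive musical parameters such as mode, tempo, and register.
import math

# Assumed angular placement of Hevner's clusters on the valence-arousal
# plane (degrees, counter-clockwise from the positive-valence axis).
HEVNER_CLUSTERS = {
    5: ("playful, graceful",     0.0),
    6: ("happy, bright",        45.0),
    7: ("exciting, agitated",   90.0),
    8: ("vigorous, majestic",  135.0),
    1: ("dignified, solemn",   180.0),
    2: ("sad, doleful",        225.0),
    3: ("dreamy, tender",      270.0),
    4: ("serene, calm",        315.0),
}


def hevner_cluster(valence: float, arousal: float) -> tuple[int, str]:
    """Return the Hevner cluster whose assumed angle lies closest to the
    direction of the (valence, arousal) point on the circumplex."""
    angle = math.degrees(math.atan2(arousal, valence)) % 360.0

    def circular_distance(target: float) -> float:
        diff = abs(angle - target)
        return min(diff, 360.0 - diff)

    number, (adjectives, _) = min(
        HEVNER_CLUSTERS.items(), key=lambda kv: circular_distance(kv[1][1])
    )
    return number, adjectives


if __name__ == "__main__":
    # A laughing face -> high valence, raised arousal -> a bright cluster.
    print(hevner_cluster(0.6, 0.6))    # (6, 'happy, bright')
    # A crying face -> low valence, lowered arousal -> a sad cluster.
    print(hevner_cluster(-0.7, -0.6))  # (2, 'sad, doleful')
```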
