Multimodal Evoked Emotion Prediction and its Application to ASMR Video Analysis

  • YANG Yu
    The University of Electro-Communications
  • HIEIDA Chie
    Nara Institute of Science and Technology
  • HORII Takato
    Osaka University / International Research Center for Neurointelligence, The University of Tokyo
  • NAGAI Takayuki
    The University of Electro-Communications / Osaka University

Bibliographic Information

Other Title
  • マルチモーダル感情喚起推定とASMR動画解析への応用 (Multimodal evoked emotion estimation and its application to ASMR video analysis)

Abstract

With the spread of digital terminals such as smartphones and tablets, the number of videos available for users to watch has grown enormously. In this context, applications such as the classification, retrieval, and distribution of personalized video content to meet consumer needs remain a challenge. In general, humans tend to choose movies and music based on their emotional characteristics, so analysis of evoked emotion may provide a guideline for these tasks. The emotions evoked by a video are related to both the audio and the visual modality. In this study, we propose a deep learning model that estimates movie-evoked emotion by integrating multimodal information. Experiments on a movie database verify how estimation performance changes when multimodal information is integrated, and show that accuracy improves over the conventional method. In addition, we analyze Autonomous Sensory Meridian Response (ASMR) videos, which have recently attracted attention, and examine the relationship between evoked emotion and viewer behavior such as the number of views and the like/dislike ratio.
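
As a rough illustration only: the abstract describes a deep learning model that integrates audio and visual information to estimate evoked emotion, and a minimal late-fusion sketch of that idea (in PyTorch) might look like the code below. The feature dimensions, layer sizes, and the two-dimensional valence/arousal output are assumptions made for illustration; they are not taken from the paper.

    # Minimal late-fusion sketch (not the authors' model): per-clip audio and
    # video feature vectors are encoded separately, concatenated, and mapped
    # to two continuous emotion dimensions (assumed here to be valence/arousal).
    import torch
    import torch.nn as nn

    class MultimodalEvokedEmotion(nn.Module):
        def __init__(self, video_dim=2048, audio_dim=128, hidden=256):
            super().__init__()
            # Separate encoders for each modality (dimensions are illustrative)
            self.video_enc = nn.Sequential(nn.Linear(video_dim, hidden), nn.ReLU())
            self.audio_enc = nn.Sequential(nn.Linear(audio_dim, hidden), nn.ReLU())
            # Fusion head over the concatenated modality embeddings
            self.head = nn.Sequential(
                nn.Linear(2 * hidden, hidden), nn.ReLU(), nn.Linear(hidden, 2)
            )

        def forward(self, video_feat, audio_feat):
            fused = torch.cat(
                [self.video_enc(video_feat), self.audio_enc(audio_feat)], dim=-1
            )
            return self.head(fused)

    # Example: a batch of 4 clips with precomputed features
    model = MultimodalEvokedEmotion()
    pred = model(torch.randn(4, 2048), torch.randn(4, 128))
    print(pred.shape)  # torch.Size([4, 2])

Late fusion of separately encoded modalities is only one possible integration strategy; the paper's comparison of integration methods would determine which variant is actually used.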
