Human Behavior Prediction for Cityscape Images Using Multimodal Deep Learning

Bibliographic Information

Alternative Titles
  • HUMAN BEHAVIOR PREDICTION FOR CITYSCAPE IMAGES USING MULTIMODAL DEEP LEARNING
  • - For prediction of gazing tendency and prediction of willingness to visit using multi-dimensional data with results -


Description

This study aimed to estimate human willingness to visit the places shown in cityscape images with artificial intelligence (AI) using multimodal deep learning. Gaze information was acquired through subject experiments with an eye-tracking measurement device. We added the gaze information recorded when subjects felt motivated to visit the cityscape in an image, and confirmed whether this additional input improved the estimation accuracy of the AI. We also created an AI model that generates gaze-view images, using pix2pix, and used its output for multimodal deep learning. Finally, we verified the accuracy of the proposed multimodal deep learning approach when the generated pseudo-gaze image was attached.

