Description
Despite recent advances in opinion mining for written reviews, few works have tackled the problem for other sources of reviews. In light of this issue, we propose a multi-modal approach for mining fine-grained opinions from video reviews that is able to determine both the aspects of the item under review being discussed and the sentiment orientation towards them. Our approach works at the sentence level without the need for time annotations and uses features derived from the audio, video, and language transcriptions of its contents. We evaluate our approach on two datasets and show that leveraging the video and audio modalities consistently provides increased performance over text-only baselines, providing evidence that these extra modalities are key to better understanding video reviews.
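The abstract describes combining sentence-level features from the text, audio, and video modalities. A minimal sketch of one common way to do this is late fusion by concatenation; the function name and feature dimensions below are hypothetical illustrations, not the authors' actual model.

```python
import numpy as np

def fuse_features(text_feat, audio_feat, video_feat):
    """Concatenate per-sentence modality features into a single vector.

    This is a generic late-fusion sketch: each modality contributes a
    fixed-length feature vector for the sentence, and the fused vector
    would then be fed to an aspect/sentiment classifier.
    """
    return np.concatenate([text_feat, audio_feat, video_feat])

# Toy dimensions (assumptions for illustration only):
# 300-d text embedding, 74-d acoustic features, 35-d visual features.
text_feat = np.zeros(300)
audio_feat = np.zeros(74)
video_feat = np.zeros(35)

fused = fuse_features(text_feat, audio_feat, video_feat)
print(fused.shape)  # (409,)
```

A text-only baseline, as compared against in the paper, would correspond to classifying on `text_feat` alone rather than the fused vector.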
Second Grand Challenge and Workshop on Multimodal Language ACL 2020
Journal
- Second Grand-Challenge and Workshop on Multimodal Language (Challenge-HML), pp. 8-18, 2020-01-01
- Association for Computational Linguistics (ACL)
Details
- CRID: 1870302167983779072
- Data Source: OpenAIRE