- Tomomi Takahashi (Kyoto Institute of Technology, Japan)
- Kazuaki Tanaka (Kyoto Institute of Technology, Japan)
- Kenichiro Kobayashi (TIS Inc., Japan)
- Natsuki Oka (Kyoto Institute of Technology, Japan)
Description
Many people, especially in Japan, are embarrassed to converse with agents such as virtual assistants, probably because of the agents' low social presence, that is, the degree to which one perceives the human-like properties of an agent. We hypothesized that poor emotional expression may impair an agent's human-likeness. In this study, we propose melodic emotional expression (MEE), a new auditory form of emotional expression for spoken dialog agents. We added background music (BGM) and sound effects to synthetic voices as MEE and conducted experiments to investigate their effects. First, adding MEE to a flat synthetic voice conveyed the intended emotions. Second, expressing positive emotions through MEE made the agent more human-like and easier to talk to. These effects also held when MEE was added to an emotional synthetic voice, and they were particularly pronounced for the BGM. Finally, we attempted automatic BGM generation, which is necessary for the practical application of MEE. Listeners accurately categorized the BGM generated by our prototype system into four emotions: joy, anger, sadness, and relaxation.
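As a rough illustration of how MEE might be realized, the sketch below maps each of the four emotions to hypothetical musical parameters (tempo and a major or minor triad, loosely following valence/arousal quadrants), renders a toy arpeggiated BGM loop, and mixes it under a voice waveform at reduced gain. The mapping, parameter values, and function names are our own assumptions for illustration; the abstract does not describe the prototype's actual design.

```python
# Minimal sketch of the MEE idea: emotion-dependent BGM mixed under a
# synthetic voice. All tempos, triads, and gains are hypothetical choices.
import wave
import numpy as np

SR = 22050  # sample rate (Hz)

# Hypothetical emotion -> (tempo in BPM, triad frequencies in Hz) mapping:
# joy (fast, major), anger (fast, minor), sadness (slow, minor),
# relaxation (slow, major).
EMOTIONS = {
    "joy":     (140, [261.63, 329.63, 392.00]),  # C major
    "anger":   (140, [261.63, 311.13, 392.00]),  # C minor
    "sadness": (70,  [261.63, 311.13, 392.00]),
    "relaxed": (70,  [261.63, 329.63, 392.00]),
}

def make_bgm(emotion: str, seconds: float) -> np.ndarray:
    """Render a toy arpeggiated BGM loop for the given emotion."""
    bpm, pitches = EMOTIONS[emotion]
    beat = 60.0 / bpm                      # seconds per note
    out = np.zeros(int(SR * seconds))
    for i in range(int(np.ceil(seconds / beat))):
        f = pitches[i % len(pitches)]      # cycle through the triad
        start = int(i * beat * SR)
        t = np.arange(min(int(beat * SR), len(out) - start)) / SR
        env = np.exp(-3.0 * t)             # simple decay envelope
        out[start:start + len(t)] += np.sin(2 * np.pi * f * t) * env
    return out / np.max(np.abs(out))

def mix_voice_and_bgm(voice: np.ndarray, emotion: str,
                      bgm_gain: float = 0.3) -> np.ndarray:
    """Duck the BGM under the voice so speech stays intelligible."""
    bgm = make_bgm(emotion, len(voice) / SR + 0.1)  # over-generate, then trim
    mixed = voice + bgm_gain * bgm[: len(voice)]
    return mixed / np.max(np.abs(mixed))

if __name__ == "__main__":
    # Stand-in for a TTS waveform: a 3-second 180 Hz buzz with vibrato.
    t = np.arange(3 * SR) / SR
    voice = 0.5 * np.sin(2 * np.pi * 180 * t + 2 * np.sin(2 * np.pi * 5 * t))
    mixed = mix_voice_and_bgm(voice, "joy")
    with wave.open("mee_demo.wav", "wb") as f:
        f.setnchannels(1)
        f.setsampwidth(2)
        f.setframerate(SR)
        f.writeframes((mixed * 32767).astype(np.int16).tobytes())
```

The low `bgm_gain` reflects the constraint implied by the abstract: the BGM must color the utterance emotionally without masking the speech itself.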
Published in
Proceedings of the 9th International Conference on Human-Agent Interaction, pp. 84-92, 2021-11-09
ACM
Details
- CRID: 1360294643866201344
- Material type: journal article
- Data sources: Crossref, KAKEN, OpenAIRE