Evaluation of co-speech gestures grounded in word-distributed representation
Bibliographic Information
- Publication date: 2024-04-25
- Resource type: journal article
- Rights: https://creativecommons.org/licenses/by/4.0/
- DOI: 10.3389/frobt.2024.1362463
- Publisher: Frontiers Media SA
Description
For artificial agents to possess perceivable intentions, they can be considered to have resolved a form of the symbol grounding problem. Here, symbol grounding is regarded as the achievement of a state in which the language used by the agent is endowed with quantitative meaning extracted from the physical world. To achieve this type of symbol grounding, we adopt a method that characterizes robot gestures with quantitative meaning calculated from word-distributed representations constructed from a large text corpus. In this method, a "size image" of a word is generated by defining an axis (index) that discriminates the "size" of the word in the word-distributed vector space. The generated size images are then converted into gestures performed by a physical artificial agent (robot). The robot's gesture can be set to reflect the size of the word either in the amount of movement or in its posture. To examine whether communicative intention is perceived in a robot performing the gestures generated in this way, the authors examine human ratings of "naturalness" obtained through an online survey; the results partially validate the proposed method. Based on these results, the authors argue for the possibility of developing advanced artifacts that achieve human-like symbol grounding.
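The "size axis" in the word-embedding space described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's actual implementation: the toy vectors, the anchor word pair "large"/"small", the projection, and the `gesture_amplitude` mapping are all hypothetical choices for demonstration.

```python
import numpy as np

# Toy 3-dimensional word embeddings (hypothetical values).
# In practice these would come from a model trained on a large text corpus.
embeddings = {
    "large":    np.array([ 0.9,  0.1, 0.2]),
    "small":    np.array([-0.8,  0.2, 0.1]),
    "elephant": np.array([ 0.7, -0.3, 0.5]),
    "ant":      np.array([-0.6,  0.4, 0.3]),
}

def size_axis(emb):
    """Define a 'size' axis as the normalized difference between the
    vectors of an antonym pair, here 'large' minus 'small'."""
    axis = emb["large"] - emb["small"]
    return axis / np.linalg.norm(axis)

def size_score(word, emb):
    """Project a word's vector onto the size axis; a larger score
    suggests a 'bigger' size image for the word."""
    return float(emb[word] @ size_axis(emb))

def gesture_amplitude(word, emb, lo=0.2, hi=1.0):
    """Map the size score into a bounded movement-amplitude range via a
    logistic squashing (an illustrative mapping, not the paper's)."""
    s = size_score(word, emb)
    return lo + (hi - lo) / (1.0 + np.exp(-s))

print(size_score("elephant", embeddings) > size_score("ant", embeddings))  # → True
```

With real embeddings, the score could then drive the robot's motion amplitude or posture, as the abstract describes.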
Journal
- Frontiers in Robotics and AI, Vol. 11, 2024-04-25 (Frontiers Media SA)
Details
- CRID: 1360021390775713408
- ISSN: 2296-9144
- Material type: journal article
- Data sources: Crossref, KAKEN, OpenAIRE
