Joint Attention Learning based on Early Detection of Self-Other Motion Equivalence with Population Codes

Bibliographic Information

Alternative Titles
  • ポピュレーション符号化を利用した自他の動き等価性の早期発見による共同注意の学習 (original Japanese title)

Abstract

This paper presents a robotic learning model for joint attention based on self-other motion equivalence. Joint attention is a type of imitation in which a robot looks at the object that another person is looking at by producing an eye-head movement equivalent to the person's. This implies that the ability can be acquired by detecting an equivalent relationship between the robot's movement and the person's. The model presented here enables a robot to detect the person's eye-head movement as optical flow in its vision and the movement of its own eyes and head as a motion vector in its somatic sense. Because both movements are represented with population codes, the robot can acquire the motion equivalence as simultaneous activations of homogeneous neurons that respond to the same motion direction in the two senses. Experimental results show that the model enables a robot to learn to establish joint attention based on the early detection of the self-other motion equivalence, and that the equivalence is acquired in a well-structured visuomotor map. The results moreover provide analogies with the development of human infants, which indicates that the model may help in understanding infant development.
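
To make the mechanism described in the abstract concrete, the following is a minimal sketch, not the authors' implementation, of how equivalence between two population codes might be picked up by co-activation. The population size N, the Gaussian tuning width sigma, the learning rate eta, and the simple Hebbian update are all assumptions chosen for illustration.

import numpy as np

N = 16                                                # neurons per population, one preferred direction each
preferred = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)

def population_code(direction, sigma=0.4):
    # Gaussian tuning over motion direction, using the wrapped angular distance.
    d = np.angle(np.exp(1j * (preferred - direction)))
    return np.exp(-d ** 2 / (2.0 * sigma ** 2))

# Association weights between the visual population (optical flow of the
# other's eye-head movement) and the somatic population (the robot's own
# eye-head motion vector); together they play the role of a visuomotor map.
W = np.zeros((N, N))
eta = 0.1                                             # assumed learning rate

def hebbian_update(flow_direction, motor_direction):
    # Strengthen weights wherever visual and somatic neurons fire together.
    v = population_code(flow_direction)               # visual activity
    m = population_code(motor_direction)              # somatic activity
    global W
    W += eta * np.outer(m, v)

# When the robot's movement is equivalent to the person's, the two
# directions coincide, so co-activation accumulates along the diagonal of W.
rng = np.random.default_rng(0)
for _ in range(200):
    theta = rng.uniform(0.0, 2.0 * np.pi)
    hebbian_update(theta, theta)

print(np.round(W.diagonal() / W.sum(axis=1), 2))      # diagonal dominance after learning

Under these assumptions, repeated co-activation of visual and somatic neurons tuned to the same direction concentrates weight on the diagonal of W, which is one way to read the "well-structured visuomotor map" reported in the abstract.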
