Imitation learning from unsegmented human motion based on N-gram statistics of linear prediction models

Bibliographic Information

Other Title
  • 複数予測モデル遷移のN-gram統計に基づく非分節運動系列からの模倣学習手法

Abstract

This paper presents an imitation learning method that enables an autonomous robot to extract a demonstrator's characteristic motions by observing unsegmented human motion. To imitate another person's motions through unsegmented interaction, the robot has to determine what it should learn from the continuous time series. The learning architecture is built mainly on a switching autoregressive model (SARM), a keyword extraction method based on the minimum description length (MDL) principle, and singular value decomposition (SVD), which reduces the dimensionality of high-dimensional human motion data. In most previous research on robotic imitation learning, the target motions given to robots were segmented into meaningful parts by the experimenters in advance. However, to imitate certain behaviors from the continuous motion of a person, the robot must find by itself the segments that should be learned. To achieve this, the learning architecture first reduces the dimensionality of the observed time series with SVD and then converts the continuous time series into a discrete sequence of letters with the SARM. After the conversion, the proposed method finds characteristic motions by applying n-gram statistics evaluated against description length. In our experiment, a demonstrator performed several unsegmented motions in front of a robot. The results show that the framework enabled the robot to extract the prepared characteristic human motions.
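The pipeline described in the abstract (SVD for dimensionality reduction, conversion of the continuous signal into a letter sequence, and n-gram scoring by description length) can be illustrated with a short, self-contained sketch. This is not the paper's implementation: it substitutes plain k-means clustering for the SARM's predictor-switching step, runs on randomly generated joint-angle data, and scores n-grams with a rough MDL-style heuristic; every name, constant, and formula below is an illustrative assumption.

```python
import numpy as np

# --- Step 1: SVD dimensionality reduction (hypothetical motion data) ---
rng = np.random.default_rng(0)
motion = rng.standard_normal((500, 30))            # 500 frames x 30 joint angles
U, S, Vt = np.linalg.svd(motion - motion.mean(0), full_matrices=False)
low_dim = U[:, :3] * S[:3]                         # keep the top 3 components

# --- Step 2: discretize frames into "letters" ---
# Stand-in for the SARM: assign each frame to the nearest of k cluster
# centers (plain k-means), yielding one symbol per frame.
k = 4
centers = low_dim[rng.choice(len(low_dim), size=k, replace=False)]
for _ in range(20):
    labels = np.argmin(((low_dim[:, None] - centers) ** 2).sum(-1), axis=1)
    centers = np.array([low_dim[labels == j].mean(0) if np.any(labels == j)
                        else centers[j] for j in range(k)])
letters = "".join("abcd"[j] for j in labels)       # discrete time series

# --- Step 3: n-gram statistics scored by description length ---
def dl_gain(s, gram):
    """Rough MDL heuristic: bits saved by rewriting every (non-overlapping)
    occurrence of `gram` as one new symbol plus one dictionary entry."""
    n = s.count(gram)
    if n < 2:
        return -np.inf
    bits = np.log2(len(set(s)) + 1)                # +1 for the new symbol
    before = len(s) * bits
    after = (len(s) - n * (len(gram) - 1) + len(gram)) * bits
    return before - after

grams = {letters[i:i + L] for L in range(2, 6)
         for i in range(len(letters) - L + 1)}
best = max(grams, key=lambda g: dl_gain(letters, g))
print("candidate characteristic motion:", best)
```

In the actual architecture, each frame's letter would instead be the index of the autoregressive predictor that best explains that frame, and the extraction criterion follows the MDL-based keyword extraction method the authors build on; the sketch only reproduces the shape of the data flow from continuous motion to candidate segments.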

