Bibliographic Information
- Alternate titles
  - An Analysis of Actor-Critic Algorithms Using Eligibility Traces : Reinforcement Learning with Imperfect Value Functions
  - Actor ニ テキセイド ノ リレキ オ モチイタ Actor Critic アルゴリズム フカンゼン ナ Value Function ノ モト デ ノ キョウカ ガクシュウ (katakana reading of the Japanese title)
Description
We present an analysis of actor-critic algorithms in which the actor updates its policy using eligibility traces of the policy parameters. Most theoretical results for eligibility traces have covered only the critic's value-iteration algorithms; this paper investigates what the actor's eligibility trace does. The results show that the algorithm is an extension of Williams' REINFORCE algorithms to infinite-horizon reinforcement tasks, and that the critic provides an appropriate reinforcement baseline for the actor. Thanks to its eligibility trace, the actor improves its policy using a gradient of the actual return rather than a gradient of the return estimated by the critic. This enables the agent to learn a fairly good policy even when the critic's approximated value function is too inaccurate for conventional actor-critic algorithms. Moreover, if the critic estimates an accurate value function, the actor's learning is dramatically accelerated in our test cases. The behavior of the algorithm is demonstrated through simulations of a linear quadratic control problem and a pole-balancing problem.
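The mechanism the abstract describes, a TD-error critic serving as a reinforcement baseline for an actor that accumulates discounted log-policy gradients in an eligibility trace, can be sketched as follows. This is a minimal tabular illustration under assumed settings (the toy two-state MDP, all parameter values, and the function names are my own), not the paper's exact algorithm:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def run(episodes=500, alpha_actor=0.1, alpha_critic=0.2,
        gamma=0.9, lam=0.9, seed=0):
    """Actor-critic with an actor-side eligibility trace on a toy
    two-state MDP: action 1 always yields reward +1, action 0 yields 0,
    and transitions are uniformly random."""
    rng = np.random.default_rng(seed)
    n_states, n_actions = 2, 2
    theta = np.zeros((n_states, n_actions))  # actor (softmax policy) parameters
    v = np.zeros(n_states)                   # critic's value estimates
    for _ in range(episodes):
        s = rng.integers(n_states)
        z = np.zeros_like(theta)             # actor's eligibility trace
        for _ in range(20):                  # fixed-length episode
            pi = softmax(theta[s])
            a = rng.choice(n_actions, p=pi)
            r = 1.0 if a == 1 else 0.0
            s_next = rng.integers(n_states)
            # TD error: the critic acts as a reinforcement baseline
            delta = r + gamma * v[s_next] - v[s]
            v[s] += alpha_critic * delta
            # accumulate the discounted log-policy gradient in the trace,
            # so the actor follows a gradient of the actual return
            grad = -pi.copy()
            grad[a] += 1.0
            z *= gamma * lam
            z[s] += grad
            theta += alpha_actor * delta * z
            s = s_next
    # probability of the rewarded action in state 0 after learning
    return softmax(theta[0])[1]
```

After training, the policy should strongly prefer the rewarded action; the trace lets return information from several steps back credit earlier action choices, which is what distinguishes this from a one-step actor-critic update.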
Journal
- 人工知能 (Artificial Intelligence)
  - 人工知能 15 (2), 267-275, 2000-03-01
  - The Japanese Society for Artificial Intelligence
Details
- CRID
  - 1390848647556017024
- NII Article ID
  - 110002808264
- NII Bibliographic ID
  - AN10067140
- ISSN
  - 09128085
  - 24358614
  - 21882266
- NDL Bibliographic ID
  - 5297968
- Text language code
  - ja
- Data source type
  - JaLC
  - NDL Search
  - CiNii Articles
- Abstract license flag
  - Not available