Swarm reinforcement learning methods for problems with continuous state-action space

Abstract

We recently proposed swarm reinforcement learning methods in which multiple agent-environment pairs are prepared and the agents learn not only by individually performing a usual reinforcement learning method but also by exchanging information among themselves. The Q-learning method has been used for individual learning in these methods, and they have been applied to a problem with a discrete state-action space. In the real world, however, many problems are formulated with a continuous state-action space. This paper proposes swarm reinforcement learning methods based on an actor-critic method in order to rapidly acquire optimal policies for problems with a continuous state-action space. The proposed methods are applied to a biped robot control problem, and their performance is examined through numerical experiments.
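The overall scheme can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a toy one-dimensional environment in place of the biped robot problem, a linear Gaussian-policy actor with a one-step TD critic as the actor-critic learner, and a simple "copy the best agent's parameters" rule as the information-exchange step (the paper's actual exchange schemes are not specified here).

```python
import random

class ToyEnv:
    """Toy continuous-control task: drive a scalar state toward zero.

    A stand-in for the paper's biped robot control problem; it only
    serves to illustrate the swarm scheme on *some* continuous
    state-action space.
    """
    def reset(self):
        self.x = random.uniform(-1.0, 1.0)
        return self.x

    def step(self, action):
        # Clip the state to keep the toy dynamics bounded.
        self.x = max(-2.0, min(2.0, self.x + action))
        return self.x, -self.x * self.x  # reward: negative squared error

class ActorCritic:
    """One-step TD actor-critic with a linear Gaussian policy.

    Actor:  action ~ N(w * x, sigma^2)   (mean linear in the state)
    Critic: V(x) = v * x^2               (quadratic value feature)
    """
    def __init__(self, alpha=0.05, beta=0.1, gamma=0.9, sigma=0.3):
        self.w, self.v = 0.0, 0.0
        self.alpha, self.beta, self.gamma, self.sigma = alpha, beta, gamma, sigma

    def act(self, x):
        return self.w * x + random.gauss(0.0, self.sigma)

    def update(self, x, a, r, x2):
        delta = r + self.gamma * self.v * x2 * x2 - self.v * x * x  # TD error
        self.v += self.beta * delta * x * x              # critic gradient step
        # Actor step: Gaussian-policy score function (a - mean) * x / sigma^2
        self.w += self.alpha * delta * (a - self.w * x) * x / self.sigma ** 2
        self.w = max(-3.0, min(3.0, self.w))             # keep parameters bounded

def train_swarm(n_agents=4, episodes=200, horizon=10, exchange_every=10, seed=0):
    """Multiple agent-environment pairs learn individually and
    periodically exchange information (here: copy the best performer)."""
    random.seed(seed)
    agents = [ActorCritic() for _ in range(n_agents)]
    envs = [ToyEnv() for _ in range(n_agents)]
    for ep in range(1, episodes + 1):
        returns = []
        for agent, env in zip(agents, envs):
            x, total = env.reset(), 0.0
            for _ in range(horizon):
                a = agent.act(x)
                x2, r = env.step(a)
                agent.update(x, a, r, x2)
                x, total = x2, total + r
            returns.append(total)
        if ep % exchange_every == 0:
            # Information exchange among the swarm: every agent adopts the
            # parameters of the agent with the best return this episode.
            best = agents[returns.index(max(returns))]
            for agent in agents:
                agent.w, agent.v = best.w, best.v
    return agents
```

In this toy problem the optimal actor weight is near `w = -1` (the action cancels the state), so after training each agent's `w` should have drifted negative; the periodic exchange step is what distinguishes the swarm variant from independently learning agents.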


Cited by (1)


References (11)
