Coordination of model-based and model-free reinforcement learning

  • UCHIBE Eiji
    Advanced Telecommunications Research Institute International

Bibliographic Information

Other Title
  • モデルベース・モデルフリー強化学習の調停について

Abstract

Reinforcement learning algorithms are categorized into model-based methods, which explicitly estimate an environmental model and a reward function, and model-free methods, which directly learn a policy from real or generated experiences. We have previously proposed an asynchronous parallel reinforcement learning algorithm for training multiple model-free and model-based reinforcement learners. The experimental results showed that a simple algorithm can contribute to the learning of more complex algorithms. However, because a learner was selected stochastically according to its value function, the coordination mechanism itself was not examined, and several signals such as state prediction errors and value prediction errors were not taken into account. In this study, we compare several adaptive coordination mechanisms: coordination based on the value functions, coordination based on state prediction and value prediction errors, weighted coordination, and learning of the weights. We then discuss learning efficiency, the ability to follow changes in the environment, and the perspective of neuroscience.
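To make the idea of weighted coordination concrete, the sketch below blends a model-free and a model-based value estimate with a weight adjusted from the two prediction-error signals mentioned in the abstract. This is a minimal hypothetical illustration, not the authors' algorithm; the function names, the softmax-style reliability comparison, and the parameters `beta` and `lr` are assumptions.

```python
import numpy as np

def coordinate(q_mf, q_mb, w):
    """Blend model-free (q_mf) and model-based (q_mb) value estimates
    with a scalar weight w in [0, 1] on the model-based estimate."""
    return w * q_mb + (1.0 - w) * q_mf

def update_weight(w, state_pred_error, value_pred_error, lr=0.1, beta=5.0):
    """Move the weight toward the learner with the smaller assumed error:
    a large state-prediction error penalizes the model-based estimate,
    a large value-prediction error penalizes the model-free estimate.
    (Illustrative update rule, not taken from the paper.)"""
    reliability_mb = np.exp(-beta * state_pred_error)
    reliability_mf = np.exp(-beta * value_pred_error)
    target = reliability_mb / (reliability_mb + reliability_mf)
    return w + lr * (target - w)

# Toy usage: the model predicts states poorly here, so the weight on the
# model-based estimate decreases before the two values are blended.
w = 0.5
w = update_weight(w, state_pred_error=0.8, value_pred_error=0.2)
q = coordinate(q_mf=1.0, q_mb=1.4, w=w)
print(f"weight on model-based estimate: {w:.2f}, blended value: {q:.2f}")
```

A stochastic selection of one learner (as in the earlier work cited in the abstract) could be obtained by sampling the learner with probability `w` instead of blending the values.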

