Coordination of model-based and model-free reinforcement learning
- UCHIBE Eiji
- Advanced Telecommunications Research Institute International
Bibliographic Information
- Other Title
- モデルベース・モデルフリー強化学習の調停について (On the coordination of model-based and model-free reinforcement learning)
Abstract
<p>Reinforcement learning algorithms fall into two categories: model-based methods, which explicitly estimate an environmental model and a reward function, and model-free methods, which learn a policy directly from real or generated experiences. We previously proposed an asynchronous parallel reinforcement learning algorithm that trains multiple model-free and model-based learners, and the experimental results showed that a simple algorithm can contribute to the learning of more complex ones. However, because a learner was selected stochastically according to its value function, the underlying coordination mechanisms were not examined, and components such as state prediction errors and value prediction errors were not taken into account. In this study, we compare several adaptive coordination mechanisms: coordination based on the value functions, coordination based on state prediction and value prediction errors, weighted coordination, and learning the weights themselves. We then discuss learning efficiency, the ability to follow changes in the environment, and the neuroscientific perspective.</p>
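The abstract notes that, in the earlier work, a learner was selected stochastically according to the value function. A minimal illustrative sketch of one such selection rule is a softmax (Boltzmann) distribution over the learners' value estimates; the function name, the inverse-temperature parameter, and the specific values below are assumptions for illustration, not details from the paper.

```python
import numpy as np

def select_learner(values, beta=1.0, rng=None):
    """Stochastically pick a learner index via a softmax over value
    estimates for the current state (illustrative sketch, not the
    paper's exact rule). Higher beta means greedier selection."""
    rng = rng or np.random.default_rng(0)
    v = np.asarray(values, dtype=float)
    v = v - v.max()                 # shift for numerical stability
    p = np.exp(beta * v)
    p /= p.sum()                    # normalize to a probability vector
    return rng.choice(len(v), p=p), p

# Hypothetical example: a model-free learner (index 0) and a
# model-based learner (index 1) with value estimates 0.2 and 1.0.
idx, probs = select_learner([0.2, 1.0], beta=2.0)
```

Under this rule the higher-valued learner is chosen more often but not always, which matches the stochastic selection described in the abstract.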
Journal
- Proceedings of the Annual Conference of JSAI
- JSAI2022 (0), 2M4OS19b03-2M4OS19b03, 2022
- The Japanese Society for Artificial Intelligence
Details
- CRID
- 1390855656055894784
- Text Lang
- ja
- Data Source
- JaLC
- Abstract License Flag
- Disallowed