動的なマルチエージェント環境におけるモデルメディエータを利用したモデルベース強化学習

DOI | Web Site | References (5) | Open Access

Bibliographic Information

Alternate Title
  • Model-Based Reinforcement Learning using Model Mediator in Dynamic Multi-Agent Environment

Abstract

<p>Centralised training and decentralised execution (CTDE) is one of the most effective approaches in multi-agent reinforcement learning (MARL). However, CTDE methods still require large amounts of interaction with the environment, even to reach the same performance as very simple heuristic-based algorithms. Although model-based RL is a prominent approach to improving sample efficiency, its adaptation to multi-agent settings in combination with existing CTDE methods has not been well studied in the literature. The few existing studies only consider settings with relaxed restrictions on the number of agents and the observable range. In this paper, we consider CTDE settings where some properties of each agent’s observations (e.g., each agent’s visible range, the number of agents) change dynamically. In such a setting, the fundamental challenge is how to train models that accurately generate each agent’s observations, which have complex transitions, in addition to the central state, and how to use them for sample-efficient policy learning. We propose a multi-agent model-based RL algorithm built on a novel model architecture consisting of global and local prediction models connected by a model mediator. We evaluate our model-based RL approach applied to an existing CTDE method on challenging StarCraft II micromanagement tasks and show that it can learn an effective policy with fewer interactions with the environment.</p>
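The abstract describes an architecture with a global prediction model (for the central state), per-agent local prediction models (for individual observations), and a mediator linking the two. As a rough illustration only, the following is a minimal sketch of how such components could fit together in an imagined rollout step; all class names, the linear "models", and the visibility-weighted blending rule are hypothetical placeholders (the paper's actual models would be learned neural networks, and its mediation rule may differ).

```python
import numpy as np

class GlobalModel:
    """Predicts the next central state from the current state and the
    joint action of all agents (linear placeholder for a learned model)."""
    def __init__(self, state_dim, joint_act_dim, rng):
        self.W = rng.normal(scale=0.1, size=(state_dim, state_dim + joint_act_dim))

    def predict(self, state, joint_action):
        return self.W @ np.concatenate([state, joint_action])

class LocalModel:
    """Predicts one agent's next observation from its own observation
    and action (again a linear placeholder)."""
    def __init__(self, obs_dim, act_dim, rng):
        self.W = rng.normal(scale=0.1, size=(obs_dim, obs_dim + act_dim))

    def predict(self, obs, action):
        return self.W @ np.concatenate([obs, action])

class ModelMediator:
    """Hypothetical mediator: blends an agent's local prediction with a
    view decoded from the global state prediction, weighted by how much
    of the scene the agent can currently see."""
    def __init__(self, obs_dim, state_dim, rng):
        # Placeholder state-to-observation decoder.
        self.proj = rng.normal(scale=0.1, size=(obs_dim, state_dim))

    def mediate(self, local_pred, global_state_pred, visibility):
        global_view = self.proj @ global_state_pred
        # visibility in [0, 1]: fully visible agents rely on local dynamics.
        return visibility * local_pred + (1.0 - visibility) * global_view

# One imagined transition for model-based policy training.
rng = np.random.default_rng(0)
state_dim, obs_dim, act_dim, n_agents = 8, 4, 2, 3
gm = GlobalModel(state_dim, act_dim * n_agents, rng)
lms = [LocalModel(obs_dim, act_dim, rng) for _ in range(n_agents)]
med = ModelMediator(obs_dim, state_dim, rng)

state = rng.normal(size=state_dim)
obs = [rng.normal(size=obs_dim) for _ in range(n_agents)]
acts = [rng.normal(size=act_dim) for _ in range(n_agents)]

next_state = gm.predict(state, np.concatenate(acts))
next_obs = [
    med.mediate(lms[i].predict(obs[i], acts[i]), next_state, visibility=0.5)
    for i in range(n_agents)
]
print(len(next_obs), next_obs[0].shape)  # prints: 3 (4,)
```

The point of the mediator in this sketch is that per-agent observation dynamics become unreliable when visibility or agent count changes, so predictions can fall back on the global model, which sees the full central state.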
