Cognitive Satisficing Exploration in Dueling Bandit Problems


Bibliographic Information

Other Title
  • 比較バンディット問題における認知的満足化探索


<p>Multi-armed bandit problems, the most fundamental tasks in reinforcement learning, have been widely applied to problems such as online advertisement delivery and game-tree search. Whereas these traditional bandit problems require rewards to be quantifiable in absolute terms, dueling bandit problems (DBP) handle relative rewards obtained through pairwise comparisons. One of the most effective solutions to DBP is Double Thompson Sampling (D-TS). However, because feedback comes only from pairwise comparisons, solving DBP requires many trials, which makes D-TS computationally expensive. In this paper, we focus on the fact that “satisficing” action selection quickly finds an action that meets a given target level, and we propose an algorithm based on the Risk-sensitive Satisficing (RS) model. Experimental results showed that its performance was inferior to that of D-TS on some datasets. We therefore further propose a new method combining RS and D-TS that improves performance in terms of weak regret in DBP.</p>
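For readers unfamiliar with D-TS, one selection round can be sketched as follows. This is a simplified illustration, not the paper's implementation: it keeps Beta posteriors over pairwise win counts and draws two independent posterior samples to pick the two dueling arms, but omits the confidence-bound pruning of the full D-TS algorithm. The names `dts_step` and `wins` are ours.

```python
import random

def dts_step(wins, rng):
    """One round of a simplified Double Thompson Sampling (D-TS) sketch.

    wins[i][j] counts how often arm i has beaten arm j. Two independent
    posterior samples select the two arms to duel. (The full algorithm
    also prunes candidates with confidence bounds; omitted here.)
    """
    k = len(wins)
    # First arm: sample every pairwise preference probability from its
    # Beta posterior, then pick the arm winning the most sampled duels
    # (the sampled Copeland winner).
    theta = [[0.5 if i == j else rng.betavariate(wins[i][j] + 1, wins[j][i] + 1)
              for j in range(k)] for i in range(k)]
    copeland = [sum(theta[i][j] > 0.5 for j in range(k) if j != i)
                for i in range(k)]
    first = max(range(k), key=lambda i: copeland[i])
    # Second arm: resample each arm's probability of beating the first
    # arm, and pick the strongest sampled challenger.
    theta2 = [rng.betavariate(wins[i][first] + 1, wins[first][i] + 1)
              if i != first else 0.0 for i in range(k)]
    second = max(range(k), key=lambda i: theta2[i])
    return first, second
```

The double sampling is what keeps exploration alive: even an arm with a mediocre empirical record occasionally draws a high posterior sample and gets compared again.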

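The RS model itself is easiest to see in the standard (non-dueling) bandit setting. Each arm i is scored by RS_i = (n_i / N)(E_i − ℵ), where n_i is the arm's trial count, N the total trial count, E_i its estimated value, and ℵ the aspiration (target) level; the agent selects the arm maximizing RS_i. A minimal sketch under these assumptions (our illustration, not the dueling-bandit algorithm proposed in the paper; `rs_bandit` and its parameters are hypothetical names):

```python
import random

def rs_bandit(true_means, aleph, horizon, seed=0):
    """Risk-sensitive Satisficing (RS) action selection on a Bernoulli bandit.

    RS_i = (n_i / N) * (E_i - aleph). When an arm's estimate exceeds the
    aspiration level aleph, the weight n_i / N reinforces exploitation of
    it; when all arms fall short, the same weight steers trials toward
    under-explored arms.
    """
    rng = random.Random(seed)
    k = len(true_means)
    counts = [1] * k    # optimistic initialization: one pseudo-trial per arm
    means = [1.0] * k   # optimistic initial value estimates
    total_reward = 0.0
    for _ in range(horizon):
        n = sum(counts)
        rs = [(counts[i] / n) * (means[i] - aleph) for i in range(k)]
        a = max(range(k), key=lambda i: rs[i])
        r = 1.0 if rng.random() < true_means[a] else 0.0  # Bernoulli reward
        counts[a] += 1
        means[a] += (r - means[a]) / counts[a]            # incremental mean
        total_reward += r
    return total_reward, counts
```

With ℵ set between the best and second-best arm's values, the trial counts concentrate on the single satisfying arm once its estimate settles above ℵ, which is the quick, target-directed search behavior the abstract refers to.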
