An improved upper bound on the expected regret of UCB-type policies for a matching-selection bandit problem

Description

We improve an upper bound on the expected regret of LLR, a UCB-type policy, for a bandit problem that repeats the following round: a player selects a maximal matching on the complete bipartite graph K_{M,N} and receives a reward for each component edge of the selected matching. Each reward is generated independently of past rewards according to an unknown but fixed distribution. Our upper bound is smaller than the best known result (Chen et al., 2013) by a factor of Θ(M^{2/3}).
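The round structure described above can be sketched in code. The following is a minimal illustrative simulation of an LLR-style UCB policy on K_{M,N}: it keeps an empirical mean and play count per edge, forms a UCB index for each edge, and plays the maximal matching maximizing the summed index. The function name, the Bernoulli reward model, the brute-force matching step, and the exact confidence-width constant are assumptions for illustration, not the paper's exact formulation.

```python
# Illustrative sketch of an LLR-style policy on a matching-selection bandit.
# Assumptions (not from the source): Bernoulli edge rewards, a brute-force
# max-weight matching step (fine only for small M, N), and the (L+1) ln t
# confidence width used by LLR-type analyses.
import math
import random
from itertools import permutations

def llr_matching_bandit(means, horizon, seed=0):
    """means[i][j]: true mean reward of edge (i, j), with M <= N.
    Returns the total reward collected over `horizon` rounds."""
    rng = random.Random(seed)
    M, N = len(means), len(means[0])
    counts = [[0] * N for _ in range(M)]   # plays of each edge
    sums = [[0.0] * N for _ in range(M)]   # cumulative reward per edge
    L = M                                  # edges per maximal matching

    def best_matching(index):
        # Brute-force max-weight maximal matching: each left vertex i is
        # assigned a distinct right vertex perm[i].
        best, best_val = None, -math.inf
        for perm in permutations(range(N), M):
            val = sum(index(i, j) for i, j in enumerate(perm))
            if val > best_val:
                best, best_val = perm, val
        return best

    total = 0.0
    for t in range(1, horizon + 1):
        def index(i, j):
            if counts[i][j] == 0:
                return math.inf            # force initial exploration
            mean = sums[i][j] / counts[i][j]
            # UCB index: empirical mean plus a confidence width that
            # shrinks as the edge is played more often.
            return mean + math.sqrt((L + 1) * math.log(t) / counts[i][j])

        match = best_matching(index)
        for i, j in enumerate(match):
            # Draw a Bernoulli reward for each component edge.
            r = 1.0 if rng.random() < means[i][j] else 0.0
            counts[i][j] += 1
            sums[i][j] += r
            total += r
    return total
```

For example, `llr_matching_bandit([[0.9, 0.1], [0.1, 0.9]], 100)` runs 100 rounds on K_{2,2}; after the forced exploration phase the policy concentrates its plays on the matching {(0,0), (1,1)} with mean reward 1.8 per round.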
