
Multi-armed bandits with partial feedback (original title: Bandits multi-armés avec rétroaction partielle)

Abstract: The multi-armed bandit (MAB) problem is a mathematical formulation of the exploration-exploitation trade-off inherent to reinforcement learning: in a sequence of trials, the learner chooses an action (symbolized by an arm) from a set of available actions in order to maximize its reward. In the classical MAB problem, the learner receives absolute bandit feedback, i.e. it observes the reward of the arm it selects. In many practical situations, however, other kinds of feedback are more readily available. In this thesis, we study two such kinds of feedback: relative feedback and corrupt feedback.

The main practical motivation for relative feedback is the task of online ranker evaluation: choosing the optimal ranker from a finite set of rankers using only pairwise comparisons, while minimizing the number of comparisons between sub-optimal rankers. This task is formalized as the MAB problem with relative feedback, in which the learner selects two arms instead of one and receives preference feedback between them. We consider the adversarial formulation of this problem, which avoids the stationarity assumption on the mean rewards of the arms. We provide a lower bound on the performance measure for any algorithm for this problem, as well as an algorithm called "Relative Exponential-weight algorithm for Exploration and Exploitation" with performance guarantees. A thorough empirical study on several information retrieval datasets confirms the validity of these theoretical results.

The motivating theme behind corrupt feedback is that the feedback the learner receives is a corrupted form of the reward of the selected arm. Such feedback arises in practice in online advertising, recommender systems, and similar applications. We consider two goals for the MAB problem with corrupt feedback: best arm identification and exploration-exploitation.
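To make the relative-feedback setting concrete, here is a rough EXP3-style sketch of a learner that draws two arms from an exponential-weight distribution and updates on pairwise preference feedback. This is an illustrative simplification, not the exact algorithm analyzed in the thesis; the function names, the exploration parameter `gamma`, and the update rule are our own.

```python
import math
import random

def relative_exp3_sketch(n_arms, horizon, gamma, preference):
    """Illustrative exponential-weight learner for relative feedback.

    `preference(a, b)` returns 1 if arm a is preferred to arm b on this
    trial, and 0 otherwise.  `gamma` in (0, 1) controls uniform exploration.
    """
    weights = [1.0] * n_arms
    for _ in range(horizon):
        total = sum(weights)
        # Mix the weight distribution with uniform exploration.
        probs = [(1 - gamma) * w / total + gamma / n_arms for w in weights]
        # Select two arms (here independently, for simplicity).
        a = random.choices(range(n_arms), probs)[0]
        b = random.choices(range(n_arms), probs)[0]
        outcome = preference(a, b)  # 1 if a beats b, else 0
        # Symmetric importance-weighted update: the centered outcome
        # (outcome - 1/2) boosts the winner and penalizes the loser.
        weights[a] *= math.exp(gamma * (outcome - 0.5) / (n_arms * probs[a]))
        weights[b] *= math.exp(-gamma * (outcome - 0.5) / (n_arms * probs[b]))
        # Renormalize to avoid floating-point overflow over long horizons.
        wmax = max(weights)
        weights = [w / wmax for w in weights]
    return weights
```

With a consistent preference (e.g. lower-indexed arms always win), the weight of the best arm comes to dominate, so the pair of selected arms concentrates on the optimum, which is the behaviour the regret guarantees in the thesis formalize.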
For both goals, we provide lower bounds on the performance measures for any algorithm, as well as several algorithms for these settings. The main contribution of this part is the algorithms "KLUCB-CF" and "Thompson Sampling-CF", which asymptotically attain the best possible performance. We present experimental results demonstrating the performance of these algorithms, and we also show how this problem setting can be used for the practical application of enforcing differential privacy.
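A simple way to see how corrupt feedback connects to differential privacy is randomized response: a Bernoulli reward is flipped with some probability before the learner observes it, and the true mean can still be recovered by inverting the known corruption. The sketch below is illustrative only; the function names and the parameter `p` (probability the observation is truthful) are our own, not notation from the thesis.

```python
import random

def corrupt(reward, p):
    """Randomized-response corruption of a Bernoulli reward:
    report the true reward with probability p, its flip otherwise."""
    return reward if random.random() < p else 1 - reward

def debiased_mean(observations, p):
    """Unbiased estimate of the true mean mu from corrupted observations.

    Since E[observed] = p*mu + (1 - p)*(1 - mu) = (2p - 1)*mu + (1 - p),
    we can invert:  mu = (E[observed] - (1 - p)) / (2p - 1),
    which requires p != 1/2 (at p = 1/2 the feedback carries no signal).
    """
    m = sum(observations) / len(observations)
    return (m - (1 - p)) / (2 * p - 1)
```

An index policy such as KLUCB-CF builds confidence bounds on such debiased estimates rather than on the raw corrupted means. The privacy connection comes from the fact that randomized response with a suitably chosen flip probability satisfies local differential privacy, so a corruption-tolerant bandit algorithm can learn while the individual rewards stay private.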

Submitted on : Thursday, September 27, 2018 - 11:59:32 AM
Last modification on : Saturday, June 4, 2022 - 3:25:10 AM
Long-term archiving on: : Friday, December 28, 2018 - 2:31:28 PM


Version validated by the jury (STAR)


  • HAL Id : tel-01882676, version 1



Pratik Gajane. Bandits multi-armés avec rétroaction partielle. Algorithme et structure de données [cs.DS]. Université Charles de Gaulle - Lille III, 2017. Français. ⟨NNT : 2017LIL30045⟩. ⟨tel-01882676⟩


