Corralling Stochastic Bandit Algorithms

Raman Arora, Teodor V. Marinov, Mehryar Mohri

Research output: Contribution to journal › Conference article › peer-review


We study the problem of corralling stochastic bandit algorithms, that is, combining multiple bandit algorithms designed for a stochastic environment, with the goal of devising a corralling algorithm that performs almost as well as the best base algorithm. We give two general algorithms for this setting, which we show benefit from favorable regret guarantees. We show that the regret of the corralling algorithms is no worse than that of the best base algorithm containing the arm with the highest reward, and depends on the gap between the highest reward and the other rewards.
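To make the setting concrete, the following is a minimal, hypothetical sketch of corralling: several base bandit algorithms (here, standard UCB1 over disjoint arm sets) are treated as arms of a meta-bandit, which at each round selects one base to play and observes its reward. This naive meta-UCB strategy is only an illustration of the problem setup, not one of the two algorithms proposed in the paper; all names and parameters below are assumptions made for the sketch.

```python
import math
import random

class UCB1:
    """A base bandit: UCB1 over Bernoulli arms with the given mean rewards."""
    def __init__(self, means, rng):
        self.means = means
        self.rng = rng
        self.counts = [0] * len(means)
        self.sums = [0.0] * len(means)
        self.t = 0

    def play(self):
        """Select an arm by the UCB1 index, pull it, and return the reward."""
        self.t += 1
        # Play each arm once before using the UCB1 index.
        untried = [i for i, c in enumerate(self.counts) if c == 0]
        if untried:
            i = untried[0]
        else:
            i = max(range(len(self.means)),
                    key=lambda j: self.sums[j] / self.counts[j]
                    + math.sqrt(2 * math.log(self.t) / self.counts[j]))
        r = 1.0 if self.rng.random() < self.means[i] else 0.0
        self.counts[i] += 1
        self.sums[i] += r
        return r

def corral(bases, horizon, rng):
    """Naive corralling sketch: run meta-UCB treating each base as one arm."""
    counts = [0] * len(bases)
    sums = [0.0] * len(bases)
    for t in range(1, horizon + 1):
        if t <= len(bases):
            k = t - 1  # try each base once
        else:
            k = max(range(len(bases)),
                    key=lambda j: sums[j] / counts[j]
                    + math.sqrt(2 * math.log(t) / counts[j]))
        r = bases[k].play()  # the chosen base plays and learns from its own reward
        counts[k] += 1
        sums[k] += r
    return counts

rng = random.Random(0)
# The second base contains the arm with the highest reward (mean 0.9).
bases = [UCB1([0.2, 0.3], rng), UCB1([0.9, 0.1], rng)]
counts = corral(bases, horizon=2000, rng=rng)
print(counts)
```

With a clear gap between the best arm's reward (0.9) and the rest, the meta-bandit concentrates its plays on the base containing that arm, mirroring the gap-dependent guarantee described in the abstract. Note that such naive meta-selection can fail in general (a neglected base may not have learned its best arm), which is part of what makes corralling nontrivial.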

Original language: English (US)
Pages (from-to): 2116-2124
Number of pages: 9
Journal: Proceedings of Machine Learning Research
State: Published - 2021
Event: 24th International Conference on Artificial Intelligence and Statistics, AISTATS 2021 - Virtual, Online, United States
Duration: Apr 13 2021 - Apr 15 2021

ASJC Scopus subject areas

  • Artificial Intelligence
  • Software
  • Control and Systems Engineering
  • Statistics and Probability

