Delay-Adaptive Learning in Generalized Linear Contextual Bandits

Jose Blanchet, Renyuan Xu, Zhengyuan Zhou

Research output: Contribution to journal › Article › peer-review

Abstract

In this paper, we consider online learning in generalized linear contextual bandits where rewards are not immediately observed. Instead, rewards are available to the decision maker only after some delay, which is unknown and stochastic. Such delayed feedback occurs in several active learning settings, including product recommendation, personalized medical treatment selection, bidding in first-price auctions, and bond trading in over-the-counter markets. We study the performance of two well-known algorithms adapted to this delayed setting: one based on upper confidence bounds and the other based on Thompson sampling. We describe how these two algorithms should be modified to handle delays and give regret characterizations for both. To the best of our knowledge, our regret bounds provide the first theoretical characterizations for generalized linear contextual bandits with large delays. Our results contribute to the broad landscape of the contextual bandits literature by establishing that both algorithms can be made robust to delays, thereby helping to clarify and reaffirm the empirical success of these two algorithms, which are widely deployed in modern recommendation engines.
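The article itself specifies the exact estimators and delay-adjusted confidence widths; as a rough illustration of the delay-adaptive UCB idea it studies, the sketch below simulates the linear (identity-link) special case of a GLM, where the learner updates its estimate only with rewards whose stochastic delays have elapsed. All names and parameters here (theta_star, the fixed exploration width alpha, the geometric delay distribution) are illustrative assumptions, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(0)
d, K, T = 5, 10, 2000                  # context dim, arms per round, horizon
theta_star = rng.normal(size=d)        # assumed unknown parameter (illustrative)
theta_star /= np.linalg.norm(theta_star)
alpha = 1.0                            # illustrative exploration width (fixed constant here)

A = np.eye(d)                          # regularized Gram matrix of *observed* contexts
b = np.zeros(d)                        # sum of reward-weighted observed contexts
pending = []                           # (arrival_round, context, reward) awaiting delivery

for t in range(T):
    # deliver only the feedback whose stochastic delay has elapsed
    arrived = [(x, r) for (s, x, r) in pending if s <= t]
    pending = [(s, x, r) for (s, x, r) in pending if s > t]
    for x, r in arrived:
        A += np.outer(x, x)
        b += r * x

    theta_hat = np.linalg.solve(A, b)  # least-squares estimate (identity link)
    A_inv = np.linalg.inv(A)

    X = rng.normal(size=(K, d))        # this round's contexts
    widths = np.sqrt(np.einsum('kd,de,ke->k', X, A_inv, X))
    x = X[np.argmax(X @ theta_hat + alpha * widths)]  # optimistic arm choice

    reward = x @ theta_star + 0.1 * rng.normal()
    delay = rng.geometric(0.2)         # unknown stochastic delay
    pending.append((t + delay, x, reward))
```

A Thompson-sampling variant of this sketch would replace the optimistic index with a draw from a Gaussian posterior centered at theta_hat with covariance proportional to A_inv, again fit only on delivered rewards.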

Original language: English (US)
Pages (from-to): 326-345
Number of pages: 20
Journal: Mathematics of Operations Research
Volume: 49
Issue number: 1
State: Published - Feb 2024

Keywords

  • contextual bandits
  • delayed feedback
  • generalized linear model
  • MLE

ASJC Scopus subject areas

  • General Mathematics
  • Computer Science Applications
  • Management Science and Operations Research
