TY - JOUR
T1 - Adapting to misspecification in contextual bandits
AU - Foster, Dylan J.
AU - Gentile, Claudio
AU - Mohri, Mehryar
AU - Zimmert, Julian
N1 - Funding Information:
DF acknowledges the support of NSF TRIPODS grant #1740751. We thank Teodor Marinov and Alexander Rakhlin for discussions on related topics.
Publisher Copyright:
© 2020 Neural information processing systems foundation. All rights reserved.
PY - 2020
Y1 - 2020
N2 - A major research direction in contextual bandits is to develop algorithms that are computationally efficient, yet support flexible, general-purpose function approximation. Algorithms based on modeling rewards have shown strong empirical performance, yet typically require a well-specified model, and can fail when this assumption does not hold. Can we design algorithms that are efficient and flexible, yet degrade gracefully in the face of model misspecification? We introduce a new family of oracle-efficient algorithms for ε-misspecified contextual bandits that adapt to unknown model misspecification—both for finite and infinite action settings. Given access to an online oracle for square loss regression, our algorithm attains optimal regret and—in particular—optimal dependence on the misspecification level, with no prior knowledge. Specializing to linear contextual bandits with infinite actions in d dimensions, we obtain the first algorithm that achieves the optimal Õ(d√T + ε√dT) regret bound for unknown ε. On a conceptual level, our results are enabled by a new optimization-based perspective on the regression oracle reduction framework of Foster and Rakhlin [20], which we believe will be useful more broadly.
AB - A major research direction in contextual bandits is to develop algorithms that are computationally efficient, yet support flexible, general-purpose function approximation. Algorithms based on modeling rewards have shown strong empirical performance, yet typically require a well-specified model, and can fail when this assumption does not hold. Can we design algorithms that are efficient and flexible, yet degrade gracefully in the face of model misspecification? We introduce a new family of oracle-efficient algorithms for ε-misspecified contextual bandits that adapt to unknown model misspecification—both for finite and infinite action settings. Given access to an online oracle for square loss regression, our algorithm attains optimal regret and—in particular—optimal dependence on the misspecification level, with no prior knowledge. Specializing to linear contextual bandits with infinite actions in d dimensions, we obtain the first algorithm that achieves the optimal Õ(d√T + ε√dT) regret bound for unknown ε. On a conceptual level, our results are enabled by a new optimization-based perspective on the regression oracle reduction framework of Foster and Rakhlin [20], which we believe will be useful more broadly.
UR - http://www.scopus.com/inward/record.url?scp=85106136575&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85106136575&partnerID=8YFLogxK
M3 - Conference article
AN - SCOPUS:85106136575
SN - 1049-5258
VL - 2020-December
JO - Advances in Neural Information Processing Systems
JF - Advances in Neural Information Processing Systems
T2 - 34th Conference on Neural Information Processing Systems, NeurIPS 2020
Y2 - 6 December 2020 through 12 December 2020
ER -