TY - CONF
T1 - Oracle-Efficient Online Learning for Smoothed Adversaries
AU - Haghtalab, Nika
AU - Han, Yanjun
AU - Shetty, Abhishek
AU - Yang, Kunhe
N1 - Funding Information:
This work was supported in part by the National Science Foundation under grant CCF-2145898, a C3.AI Digital Transformation Institute grant, and Berkeley AI Research Commons grants. This work was partially done while authors were visitors at the Simons Institute for the Theory of Computing.
Publisher Copyright:
© 2022 Neural Information Processing Systems Foundation. All rights reserved.
PY - 2022
Y1 - 2022
N2 - We study the design of computationally efficient online learning algorithms under smoothed analysis. In this setting, at every step an adversary generates a sample from an adaptively chosen distribution whose density is upper bounded by 1/σ times the uniform density. Given access to an offline optimization (ERM) oracle, we give the first computationally efficient online algorithms whose sublinear regret depends only on the pseudo/VC dimension d of the class and the smoothness parameter σ. In particular, we achieve oracle-efficient regret bounds of (Equation presented) for learning real-valued functions and (Equation presented) for learning binary-valued functions. Our results establish that online learning is computationally as easy as offline learning under the smoothed analysis framework. This contrasts with the computational separation, established by [HK16], between online learning with worst-case adversaries and offline learning. Our algorithms also achieve improved bounds for some settings with binary-valued functions and worst-case adversaries. These include an oracle-efficient algorithm with (Equation presented) regret that refines the earlier (Equation presented) bound of [DS16] for finite domains, and an oracle-efficient algorithm with (Equation presented) regret for the transductive setting.
UR - http://www.scopus.com/inward/record.url?scp=85148977069&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85148977069&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85148977069
T3 - Advances in Neural Information Processing Systems
BT - Advances in Neural Information Processing Systems 35 - 36th Conference on Neural Information Processing Systems, NeurIPS 2022
A2 - Koyejo, S.
A2 - Mohamed, S.
A2 - Agarwal, A.
A2 - Belgrave, D.
A2 - Cho, K.
A2 - Oh, A.
PB - Neural Information Processing Systems Foundation
T2 - 36th Conference on Neural Information Processing Systems, NeurIPS 2022
Y2 - 28 November 2022 through 9 December 2022
ER -