TY - GEN
T1 - Distributed Online Non-convex Optimization with Composite Regret
AU - Jiang, Zhanhong
AU - Balu, Aditya
AU - Lee, Xian Yeow
AU - Lee, Young M.
AU - Hegde, Chinmay
AU - Sarkar, Soumik
N1 - Publisher Copyright:
© 2022 IEEE.
PY - 2022
Y1 - 2022
N2 - Regret has been widely adopted as the metric of choice for evaluating the performance of online optimization algorithms in distributed, multi-agent systems. However, variations in the data or models associated with each agent can significantly impact decisions and require consensus among agents. Moreover, most existing work has focused on developing approaches for (strongly) convex losses, and very few results have been obtained on regret bounds in distributed online optimization for general non-convex losses. To address these two issues, we propose a novel composite regret with a new network-based metric for evaluating distributed online optimization algorithms. We concretely define static and dynamic forms of the composite regret. Leveraging the dynamic form of our composite regret, we develop a consensus-based online normalized gradient descent (CONGD) approach for pseudo-convex losses and show that CONGD exhibits sublinear behavior with respect to a regularity term for the path variation of the optimizer. For general non-convex losses, we first examine regret definitions based on recent advances, under which no deterministic algorithm can achieve sublinear regret. We then develop distributed online non-convex optimization with composite regret (DINOCO), which operates without access to gradients and instead relies on an offline optimization oracle. We show that DINOCO achieves sublinear regret; to our knowledge, this is the first regret bound for general distributed online non-convex learning.
AB - Regret has been widely adopted as the metric of choice for evaluating the performance of online optimization algorithms in distributed, multi-agent systems. However, variations in the data or models associated with each agent can significantly impact decisions and require consensus among agents. Moreover, most existing work has focused on developing approaches for (strongly) convex losses, and very few results have been obtained on regret bounds in distributed online optimization for general non-convex losses. To address these two issues, we propose a novel composite regret with a new network-based metric for evaluating distributed online optimization algorithms. We concretely define static and dynamic forms of the composite regret. Leveraging the dynamic form of our composite regret, we develop a consensus-based online normalized gradient descent (CONGD) approach for pseudo-convex losses and show that CONGD exhibits sublinear behavior with respect to a regularity term for the path variation of the optimizer. For general non-convex losses, we first examine regret definitions based on recent advances, under which no deterministic algorithm can achieve sublinear regret. We then develop distributed online non-convex optimization with composite regret (DINOCO), which operates without access to gradients and instead relies on an offline optimization oracle. We show that DINOCO achieves sublinear regret; to our knowledge, this is the first regret bound for general distributed online non-convex learning.
KW - Composite regret
KW - distributed optimization
KW - non-convex optimization
KW - online optimization
UR - http://www.scopus.com/inward/record.url?scp=85142606457&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85142606457&partnerID=8YFLogxK
U2 - 10.1109/Allerton49937.2022.9929356
DO - 10.1109/Allerton49937.2022.9929356
M3 - Conference contribution
AN - SCOPUS:85142606457
T3 - 2022 58th Annual Allerton Conference on Communication, Control, and Computing, Allerton 2022
BT - 2022 58th Annual Allerton Conference on Communication, Control, and Computing, Allerton 2022
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 58th Annual Allerton Conference on Communication, Control, and Computing, Allerton 2022
Y2 - 27 September 2022 through 30 September 2022
ER -