TY - GEN
T1 - SUCAG: Stochastic Unbiased Curvature-aided Gradient Method for Distributed Optimization
T2 - 57th IEEE Conference on Decision and Control, CDC 2018
AU - Wai, Hoi To
AU - Freris, Nikolaos M.
AU - Nedić, Angelia
AU - Scaglione, Anna
N1 - Funding Information:
Acknowledgements. This work was supported by NSF under grants CCF-BSF 1714672, CCF-1717207, and CCF-1717391. The authors would like to thank Prof. Maxim Raginsky for pointing out reference [29].
Publisher Copyright:
© 2018 IEEE.
PY - 2018/7/2
Y1 - 2018/7/2
AB - We propose and analyze a new stochastic gradient method, which we call Stochastic Unbiased Curvature-aided Gradient (SUCAG), for finite-sum optimization problems. SUCAG constitutes an unbiased total gradient tracking technique that uses Hessian information to accelerate convergence. We analyze our method under a general asynchronous model of computation, in which each function is selected infinitely often with possibly unbounded (but sublinear) delay. For strongly convex problems, we establish linear convergence for the SUCAG method. When the initialization point is sufficiently close to the optimal solution, the established convergence rate depends only on the condition number of the problem, making it strictly faster than the known rate for the SAGA method. Furthermore, we describe a Markov-driven approach to implementing the SUCAG method in a distributed asynchronous multi-agent setting, via gossiping along a random walk on an undirected communication graph. We show that our analysis applies as long as the graph is connected and, notably, establishes an asymptotic linear convergence rate that is robust to the graph topology. Numerical results demonstrate the merits of our algorithm over existing methods.
KW - Distributed optimization
KW - Asynchronous algorithms
KW - Incremental methods
KW - Machine learning
KW - Multiagent systems
KW - Randomized algorithms
UR - http://www.scopus.com/inward/record.url?scp=85062171498&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85062171498&partnerID=8YFLogxK
U2 - 10.1109/CDC.2018.8619336
DO - 10.1109/CDC.2018.8619336
M3 - Conference contribution
AN - SCOPUS:85062171498
T3 - Proceedings of the IEEE Conference on Decision and Control
SP - 1751
EP - 1756
BT - 2018 IEEE Conference on Decision and Control, CDC 2018
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 17 December 2018 through 19 December 2018
ER -