TY - JOUR
T1 - Analysis of the gradient method with an Armijo–Wolfe line search on a class of non-smooth convex functions
AU - Asl, Azam
AU - Overton, Michael L.
N1 - Funding Information:
Michael L. Overton was supported in part by National Science Foundation [grant number DMS-1620083].
Publisher Copyright:
© 2019 Informa UK Limited, trading as Taylor & Francis Group.
PY - 2020/3/3
Y1 - 2020/3/3
N2 - It has long been known that the gradient (steepest descent) method may fail on non-smooth problems, but the examples that have appeared in the literature are either devised specifically to defeat a gradient or subgradient method with an exact line search or are unstable with respect to perturbation of the initial point. We give an analysis of the gradient method with steplengths satisfying the Armijo and Wolfe inexact line search conditions on the non-smooth convex function $f(x) = a|x^{(1)}| + \sum_{i=2}^{n} x^{(i)}$. We show that if a is sufficiently large, satisfying a condition that depends only on the Armijo parameter, then, when the method is initiated at any point $x_0 \in \mathbb{R}^n$ with $x_0^{(1)} \neq 0$, the iterates converge to a point $\bar{x}$ with $\bar{x}^{(1)} = 0$, although f is unbounded below. We also give conditions under which the iterates $f(x_k) \to -\infty$, using a specific Armijo–Wolfe bracketing line search. Our experimental results demonstrate that our analysis is reasonably tight.
AB - It has long been known that the gradient (steepest descent) method may fail on non-smooth problems, but the examples that have appeared in the literature are either devised specifically to defeat a gradient or subgradient method with an exact line search or are unstable with respect to perturbation of the initial point. We give an analysis of the gradient method with steplengths satisfying the Armijo and Wolfe inexact line search conditions on the non-smooth convex function $f(x) = a|x^{(1)}| + \sum_{i=2}^{n} x^{(i)}$. We show that if a is sufficiently large, satisfying a condition that depends only on the Armijo parameter, then, when the method is initiated at any point $x_0 \in \mathbb{R}^n$ with $x_0^{(1)} \neq 0$, the iterates converge to a point $\bar{x}$ with $\bar{x}^{(1)} = 0$, although f is unbounded below. We also give conditions under which the iterates $f(x_k) \to -\infty$, using a specific Armijo–Wolfe bracketing line search. Our experimental results demonstrate that our analysis is reasonably tight.
KW - steepest descent method
KW - convex optimization
KW - non-smooth optimization
UR - http://www.scopus.com/inward/record.url?scp=85074030377&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85074030377&partnerID=8YFLogxK
U2 - 10.1080/10556788.2019.1673388
DO - 10.1080/10556788.2019.1673388
M3 - Article
AN - SCOPUS:85074030377
SN - 1055-6788
VL - 35
SP - 223
EP - 242
JO - Optimization Methods and Software
JF - Optimization Methods and Software
IS - 2
ER -