Abstract
This article reviews the gradient sampling methodology for solving nonsmooth, nonconvex optimization problems. We state an intuitively straightforward gradient sampling algorithm and summarize its convergence properties. Throughout this discussion, we emphasize the simplicity of gradient sampling as an extension of the steepest descent method for minimizing smooth objectives. We provide an overview of various enhancements proposed to improve practical performance, as well as of several extensions proposed in the literature, such as for solving constrained problems. We also clarify certain technical aspects of the analysis of gradient sampling algorithms, most notably related to the assumptions one needs to make about the set of points at which the objective is continuously differentiable. Finally, we discuss possible future research directions.
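To make the method concrete, here is a minimal Python sketch of one gradient sampling iteration, written under simplifying assumptions not taken from the chapter: user-supplied callables `f` and `grad` (with the objective differentiable at the sampled points, which holds almost everywhere for locally Lipschitz functions), a fixed sampling radius `eps`, and a small SLSQP-based quadratic program for the minimum-norm element of the convex hull of the sampled gradients. The chapter's actual algorithm, safeguards, and parameter choices may differ.

```python
import numpy as np
from scipy.optimize import minimize

def min_norm_element(G):
    """Smallest-norm point in the convex hull of the rows of G,
    via a small quadratic program over the simplex (SLSQP)."""
    k = G.shape[0]
    obj = lambda w: 0.5 * np.dot(w @ G, w @ G)        # 0.5 * ||sum_i w_i g_i||^2
    cons = ({'type': 'eq', 'fun': lambda w: np.sum(w) - 1.0},)
    res = minimize(obj, np.full(k, 1.0 / k), method='SLSQP',
                   bounds=[(0.0, 1.0)] * k, constraints=cons)
    return res.x @ G

def gradient_sampling_step(f, grad, x, eps=0.1, m=None, beta=1e-4, rng=None):
    """One basic gradient sampling iteration: sample gradients in an
    eps-ball around x, take the negative minimum-norm hull element as a
    search direction, and backtrack with an Armijo condition."""
    rng = np.random.default_rng() if rng is None else rng
    n = x.size
    m = n + 1 if m is None else m                     # typical sample size
    u = rng.standard_normal((m, n))                   # draw m points uniformly
    u /= np.linalg.norm(u, axis=1, keepdims=True)     # from the eps-ball
    pts = x + (eps * rng.random(m) ** (1.0 / n))[:, None] * u
    G = np.vstack([grad(x)] + [grad(p) for p in pts])
    g = min_norm_element(G)
    if np.linalg.norm(g) <= 1e-8:                     # approximately stationary;
        return x, True                                # a full method shrinks eps here
    t, d = 1.0, -g
    while f(x + t * d) > f(x) + beta * t * np.dot(g, d):
        t *= 0.5                                      # Armijo backtracking
        if t < 1e-12:
            break
    return x + t * d, False
```

On a smooth objective the sampled gradients nearly coincide, so this step reduces to (a safeguarded) steepest descent step, which is the sense in which the abstract describes gradient sampling as an extension of the steepest descent method.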
| Original language | English (US) |
| --- | --- |
| Title of host publication | Numerical Nonsmooth Optimization |
| Subtitle of host publication | State of the Art Algorithms |
| Publisher | Springer International Publishing |
| Pages | 201-225 |
| Number of pages | 25 |
| ISBN (Electronic) | 9783030349103 |
| ISBN (Print) | 9783030349097 |
| DOIs | |
| State | Published - Jan 1 2020 |
ASJC Scopus subject areas
- Economics, Econometrics and Finance (all)
- General Business, Management and Accounting
- General Computer Science