Gradient sampling methods for nonsmooth optimization

James V. Burke, Frank E. Curtis, Adrian S. Lewis, Michael L. Overton, Lucas E.A. Simões

Research output: Chapter in Book/Report/Conference proceeding › Chapter


This article reviews the gradient sampling methodology for solving nonsmooth, nonconvex optimization problems. We state an intuitively straightforward gradient sampling algorithm and summarize its convergence properties. Throughout this discussion, we emphasize the simplicity of gradient sampling as an extension of the steepest descent method for minimizing smooth objectives. We provide an overview of various enhancements that have been proposed to improve practical performance, as well as of several extensions that have appeared in the literature, including extensions for solving constrained problems. We also clarify certain technical aspects of the analysis of gradient sampling algorithms, most notably those related to the assumptions one must make about the set of points at which the objective is continuously differentiable. Finally, we discuss possible future research directions.
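To make the abstract's description concrete, the following is a minimal sketch of a basic gradient sampling iteration in the spirit described above: sample gradients near the current iterate, take the minimum-norm element of their convex hull as a stabilized steepest-descent direction, and backtrack with an Armijo line search. The test objective `f`, the box (rather than ball) sampling, the Frank-Wolfe solver for the min-norm subproblem, and all tolerances are illustrative assumptions of this sketch, not details taken from the chapter.

```python
import numpy as np

def f(x):
    """Nonsmooth test objective f(x) = |x1| + 2|x2| (illustrative choice)."""
    return abs(x[0]) + 2 * abs(x[1])

def grad_f(x):
    """Gradient of f, valid wherever f is differentiable (almost everywhere)."""
    return np.array([np.sign(x[0]), 2 * np.sign(x[1])])

def min_norm_element(G, iters=200):
    """Minimum-norm point of conv{columns of G}, via Frank-Wolfe steps
    with exact line search (one simple way to solve this small QP)."""
    m = G.shape[1]
    lam = np.full(m, 1.0 / m)        # start at the barycenter of the simplex
    for _ in range(iters):
        p = G @ lam
        i = int(np.argmin(G.T @ p))  # vertex most aligned with -p
        d = -lam
        d[i] += 1.0                  # move toward vertex e_i on the simplex
        Gd = G @ d
        denom = Gd @ Gd
        if denom <= 1e-16:
            break
        lam += np.clip(-(p @ Gd) / denom, 0.0, 1.0) * d
    return G @ lam

def gradient_sampling(x0, eps=0.1, m=None, beta=1e-4, max_iter=100, seed=0):
    """Bare-bones gradient sampling loop (a sketch under the assumptions
    above, not the algorithm as stated in the chapter)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    n = x.size
    m = 2 * n if m is None else m    # theory asks for at least n + 1 samples
    for _ in range(max_iter):
        # Gradients at x and at m nearby points; the method assumes f is
        # differentiable at every point where a gradient is evaluated,
        # which holds almost surely for randomly sampled points.
        pts = [x] + [x + eps * rng.uniform(-1.0, 1.0, n) for _ in range(m)]
        G = np.column_stack([grad_f(p) for p in pts])
        g = min_norm_element(G)
        if np.linalg.norm(g) <= 1e-6:
            eps *= 0.5               # approximate stationarity: shrink radius
            continue
        d = -g                       # stabilized steepest-descent direction
        t = 1.0                      # Armijo backtracking line search
        while f(x + t * d) > f(x) - beta * t * (g @ g):
            t *= 0.5
            if t < 1e-12:
                break
        x = x + t * d
    return x
```

When the sampled gradients all agree (far from the kinks of `f`), the min-norm element reduces to the ordinary gradient and the method behaves exactly like steepest descent, which is the simplicity the abstract emphasizes; near a kink, the convex hull of nearby gradients supplies a usable descent direction even though `f` is not differentiable there.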

Original language: English (US)
Title of host publication: Numerical Nonsmooth Optimization
Subtitle of host publication: State of the Art Algorithms
Publisher: Springer International Publishing
Number of pages: 25
ISBN (Electronic): 9783030349103
ISBN (Print): 9783030349097
State: Published - Jan 1 2020

ASJC Scopus subject areas

  • Economics, Econometrics and Finance (all)
  • General Business, Management and Accounting
  • General Computer Science

