Abstract
The Nelder-Mead algorithm, a longstanding direct search method for unconstrained optimization published in 1965, is designed to minimize a scalar-valued function f of n real variables using only function values, without any derivative information. Each Nelder-Mead iteration is associated with a nondegenerate simplex defined by n + 1 vertices and their function values; a typical iteration produces a new simplex by replacing the worst vertex by a new point. Despite the method's widespread use, theoretical results have been limited: for strictly convex objective functions of one variable with bounded level sets, the algorithm always converges to the minimizer; for such functions of two variables, the diameter of the simplex converges to zero but examples constructed by McKinnon show that the algorithm may converge to a nonminimizing point. This paper considers the restricted Nelder-Mead algorithm, a variant that does not allow expansion steps. In two dimensions we show that for any nondegenerate starting simplex and any twice-continuously differentiable function with positive definite Hessian and bounded level sets, the algorithm always converges to the minimizer. The proof is based on treating the method as a discrete dynamical system and relies on several techniques that are nonstandard in convergence proofs for unconstrained optimization.
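To make the restricted variant concrete, below is a minimal sketch of the method in two dimensions: the standard Nelder-Mead iteration with the expansion step removed, so a successful reflection is accepted even when it improves on the best vertex. This is an illustrative aid, not the authors' code; the function name `restricted_nelder_mead`, the coefficient choices (reflection 1, contraction 1/2, shrink 1/2), and the diameter-based stopping test are assumptions following common Nelder-Mead conventions.

```python
# A minimal sketch (assumed conventions, not the paper's code) of the
# restricted Nelder-Mead method in two dimensions: the usual reflection,
# contraction, and shrink steps, with the expansion step removed.
import numpy as np

def restricted_nelder_mead(f, simplex, tol=1e-10, max_iter=10_000):
    """Minimize f: R^2 -> R from a nondegenerate 3-vertex starting simplex."""
    x = np.asarray(simplex, dtype=float)       # shape (3, 2): the vertices
    fx = np.array([f(v) for v in x])
    for _ in range(max_iter):
        order = np.argsort(fx)                 # order: best, next, worst
        x, fx = x[order], fx[order]
        if np.max(np.linalg.norm(x - x[0], axis=1)) < tol:
            break                              # simplex diameter ~ 0
        centroid = x[:2].mean(axis=0)          # centroid of the two best vertices
        xr = centroid + (centroid - x[2])      # reflect the worst vertex
        fr = f(xr)
        shrink = False
        if fr < fx[1]:                         # accept reflection; no expansion
            x[2], fx[2] = xr, fr               # is tried even when fr < fx[0]
        elif fr < fx[2]:                       # outside contraction
            xc = centroid + 0.5 * (xr - centroid)
            fc = f(xc)
            if fc <= fr:
                x[2], fx[2] = xc, fc
            else:
                shrink = True
        else:                                  # inside contraction
            xc = centroid - 0.5 * (centroid - x[2])
            fc = f(xc)
            if fc < fx[2]:
                x[2], fx[2] = xc, fc
            else:
                shrink = True
        if shrink:                             # shrink all vertices toward the best
            x[1:] = x[0] + 0.5 * (x[1:] - x[0])
            fx[1:] = [f(v) for v in x[1:]]
    return x[np.argmin(fx)]

# Example: a strictly convex quadratic with bounded level sets, the setting
# covered by the paper's convergence result (minimizer at the origin).
if __name__ == "__main__":
    f = lambda v: v[0] ** 2 + 2 * v[1] ** 2
    print(restricted_nelder_mead(f, [[1.0, 1.0], [1.2, 1.0], [1.0, 1.3]]))
```

On a function of the kind treated in the paper, as in the example above, the theorem guarantees that this expansion-free iteration drives the simplex to the minimizer from any nondegenerate starting simplex.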
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 501-532 |
| Number of pages | 32 |
| Journal | SIAM Journal on Optimization |
| Volume | 22 |
| Issue number | 2 |
| DOIs | |
| State | Published - 2012 |
Keywords
- Derivative-free optimization
- Direct search methods
- Nelder-Mead method
- Nonderivative optimization
ASJC Scopus subject areas
- Software
- Theoretical Computer Science
- Applied Mathematics