Practical Multi-fidelity Bayesian Optimization for Hyperparameter Tuning

Jian Wu, Saul Toscano-Palmerin, Peter I. Frazier, Andrew Gordon Wilson

Research output: Contribution to journal › Conference article › peer-review

Abstract

Bayesian optimization is popular for optimizing time-consuming black-box objectives. Nonetheless, for hyperparameter tuning in deep neural networks, the time required to evaluate the validation error for even a few hyperparameter settings remains a bottleneck. Multi-fidelity optimization promises relief using cheaper proxies to such objectives — for example, validation error for a network trained using a subset of the training points or fewer iterations than required for convergence. We propose a highly flexible and practical approach to multi-fidelity Bayesian optimization, focused on efficiently optimizing hyperparameters for iteratively trained supervised learning models. We introduce a new acquisition function, the trace-aware knowledge-gradient, which efficiently leverages both multiple continuous fidelity controls and trace observations — values of the objective at a sequence of fidelities, available when varying fidelity using training iterations. We provide a provably convergent method for optimizing our acquisition function and show it outperforms state-of-the-art alternatives for hyperparameter tuning of deep neural networks and large-scale kernel learning.
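To make the idea in the abstract concrete, the following is a minimal, hypothetical sketch of multi-fidelity hyperparameter tuning with trace observations, where fidelity is the number of training iterations. It is not the paper's trace-aware knowledge-gradient acquisition function or its optimization procedure: the toy objective train_with_trace, the learning-rate search range, and the simple mean-minus-standard-deviation scoring rule are all illustrative assumptions, and scikit-learn's GaussianProcessRegressor stands in for whatever surrogate model one might actually use.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)

def train_with_trace(lr, max_iters=100):
    """Hypothetical stand-in for model training: returns the validation
    error after every iteration up to max_iters (the 'trace'). A cheap,
    low-fidelity evaluation is simply a prefix of the full trace."""
    iters = np.arange(1, max_iters + 1)
    # Toy error surface: best learning rate near 0.1, error decays with iterations.
    base = (np.log10(lr) + 1.0) ** 2 + 0.1
    return base + 1.0 / np.sqrt(iters) + 0.01 * rng.standard_normal(iters.size)

# Observations live in (hyperparameter, fidelity) space; one run at
# fidelity m contributes all m points of its trace to the surrogate.
X, y = [], []
def observe(lr, m):
    trace = train_with_trace(lr, max_iters=m)
    for i, err in enumerate(trace, start=1):
        X.append([np.log10(lr), i])
        y.append(err)

# Initial design: a few cheap, low-fidelity runs.
for lr in [1e-3, 1e-2, 1e-1, 1.0]:
    observe(lr, m=20)

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)

for step in range(5):
    gp.fit(np.array(X), np.array(y))
    # Score random candidate learning rates by their predicted error at full
    # fidelity minus an exploration bonus: a crude stand-in for a principled
    # multi-fidelity acquisition function such as the paper's.
    cand = 10 ** rng.uniform(-4, 0.5, size=256)
    Xc = np.column_stack([np.log10(cand), np.full(cand.size, 100)])
    mu, sd = gp.predict(Xc, return_std=True)
    best = cand[np.argmin(mu - sd)]
    # Spend a cheaper, partial-training evaluation on the chosen point.
    observe(best, m=50)
    print(f"step {step}: evaluating lr={best:.4g} at 50 iterations")

The point the sketch tries to convey is the one the abstract emphasizes: a single partial-training run at fidelity m yields m trace points for the surrogate, which is what makes cheap, low-fidelity evaluations informative per unit cost.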

Original language: English (US)
Pages (from-to): 788-798
Number of pages: 11
Journal: Proceedings of Machine Learning Research
Volume: 115
State: Published - 2019
Event: 35th Uncertainty in Artificial Intelligence Conference, UAI 2019, Tel Aviv, Israel
Duration: Jul 22, 2019 - Jul 25, 2019

ASJC Scopus subject areas

  • Artificial Intelligence
  • Software
  • Control and Systems Engineering
  • Statistics and Probability
