Abstract
Bayesian optimization is popular for optimizing time-consuming black-box objectives. Nonetheless, for hyperparameter tuning in deep neural networks, the time required to evaluate the validation error for even a few hyperparameter settings remains a bottleneck. Multi-fidelity optimization promises relief using cheaper proxies to such objectives — for example, validation error for a network trained using a subset of the training points or fewer iterations than required for convergence. We propose a highly flexible and practical approach to multi-fidelity Bayesian optimization, focused on efficiently optimizing hyperparameters for iteratively trained supervised learning models. We introduce a new acquisition function, the trace-aware knowledge-gradient, which efficiently leverages both multiple continuous fidelity controls and trace observations — values of the objective at a sequence of fidelities, available when varying fidelity using training iterations. We provide a provably convergent method for optimizing our acquisition function and show it outperforms state-of-the-art alternatives for hyperparameter tuning of deep neural networks and large-scale kernel learning.
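To make the setting concrete, below is a minimal, self-contained Python sketch of multi-fidelity Bayesian optimization with trace observations. This is not the authors' implementation: the synthetic objective `train_and_validate`, the learning-rate search space, and the simple improvement-plus-uncertainty score used as an acquisition are all illustrative assumptions; the paper's trace-aware knowledge gradient is a different and substantially more sophisticated acquisition function. The sketch only illustrates the key idea from the abstract that a single training run at a given number of iterations yields validation errors at all intermediate iterations, so each run contributes many (hyperparameter, fidelity) observations to the surrogate model.

```python
# Minimal sketch (assumptions noted above) of multi-fidelity BO with trace
# observations, using a GP surrogate over (hyperparameter, fidelity) space.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)
MAX_ITERS = 200  # full-fidelity training length (illustrative)

def train_and_validate(lr, max_iters):
    """Hypothetical stand-in for training with learning rate `lr`.
    Returns validation error after every iteration up to `max_iters`
    (the 'trace'), so one run yields many fidelity observations."""
    iters = np.arange(1, max_iters + 1)
    errors = 0.5 * np.exp(-iters / 50.0) + 0.1 * (np.log10(lr) + 1.0) ** 2
    return iters, errors + rng.normal(0.0, 0.005, size=iters.shape)

X, y = [], []  # rows: (log10 lr, iteration fraction) -> validation error

def add_trace(lr, max_iters):
    """Run one training job and add several points along its trace."""
    iters, errors = train_and_validate(lr, max_iters)
    for i in range(0, len(iters), 25):
        X.append([np.log10(lr), iters[i] / MAX_ITERS])
        y.append(errors[i])

# Initial design: a few random learning rates trained at reduced fidelity.
for lr in 10 ** rng.uniform(-3, 0, size=3):
    add_trace(lr, max_iters=100)

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)

for step in range(5):
    gp.fit(np.array(X), np.array(y))
    # Placeholder acquisition: improvement plus posterior uncertainty at full
    # fidelity (NOT the paper's trace-aware knowledge gradient).
    candidates = np.column_stack([rng.uniform(-3, 0, 200), np.ones(200)])
    mu, sigma = gp.predict(candidates, return_std=True)
    score = np.maximum(np.min(y) - mu, 0.0) + sigma
    next_lr = 10 ** candidates[np.argmax(score), 0]
    add_trace(next_lr, max_iters=MAX_ITERS)

print("Best observed validation error:", min(y))
```

In this toy loop, every training run populates the surrogate with observations at several fidelities at no extra cost, which is what makes trace observations attractive; the paper's contribution is an acquisition function that explicitly accounts for these traces and for multiple continuous fidelity controls when deciding what to evaluate next.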
| Original language | English (US) |
|---|---|
| State | Published - Jan 1 2019 |
| Event | 35th Conference on Uncertainty in Artificial Intelligence, UAI 2019 - Tel Aviv, Israel. Duration: Jul 22 2019 → Jul 25 2019 |
Conference
| Conference | 35th Conference on Uncertainty in Artificial Intelligence, UAI 2019 |
|---|---|
| Country/Territory | Israel |
| City | Tel Aviv |
| Period | 7/22/19 → 7/25/19 |
ASJC Scopus subject areas
- Artificial Intelligence