Abstract
We study the problem of (provably) learning the weights of a two-layer neural network with quadratic activations. In particular, we focus on the under-parametrized regime, where the number of neurons in the hidden layer is (much) smaller than the dimension of the input. Our approach uses a lifting trick, which enables us to borrow algorithmic ideas from low-rank matrix estimation. In this context, we propose two novel nonconvex training algorithms that require no tuning parameters other than the number of hidden neurons. We support our algorithms with rigorous theoretical analysis, and show that they enjoy linear convergence, fast running time per iteration, and near-optimal sample complexity. Finally, we complement our theoretical results with several numerical experiments.
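To make the lifting trick concrete, here is a minimal sketch, assuming the standard quadratic-activation model f(x) = Σ_{j=1}^k (w_jᵀx)² with k hidden neurons: since f(x) = xᵀ(WWᵀ)x = ⟨xxᵀ, M⟩ with M = WWᵀ of rank at most k, learning W becomes a rank-k matrix-sensing problem. All names, dimensions, and step sizes below are illustrative choices, and the gradient loop is a generic factored gradient method on the lifted objective, not the paper's two specific algorithms.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, n = 20, 2, 600                            # input dim d, hidden width k << d
W = rng.standard_normal((d, k)) / np.sqrt(d)    # ground-truth hidden-layer weights
X = rng.standard_normal((n, d))                 # Gaussian training inputs
y = np.sum((X @ W) ** 2, axis=1)                # network outputs (labels)

# Lifting: f(x) = x^T (W W^T) x = <x x^T, M> with M = W W^T, rank(M) <= k,
# i.e., a rank-k matrix estimation problem with measurement matrices x_i x_i^T.
M = W @ W.T
assert np.allclose(y, np.einsum('ni,ij,nj->n', X, M, X))

# Generic factored gradient descent on the lifted least-squares objective,
# initialized near the truth (illustrating local refinement only).
U = W + 0.05 * rng.standard_normal((d, k))
lr = 2e-3
for _ in range(800):
    r = np.sum((X @ U) ** 2, axis=1) - y        # residuals <x_i x_i^T, UU^T> - y_i
    U -= lr * (2.0 / n) * X.T @ (r[:, None] * (X @ U))

print(np.linalg.norm(U @ U.T - M) / np.linalg.norm(M))  # relative error in M
```

Note that W is only identifiable up to right-multiplication by an orthogonal matrix, so the recovery error is measured on the lifted matrix M = WWᵀ rather than on W itself.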
| Original language | English (US) |
| --- | --- |
| Pages | 1417-1426 |
| Number of pages | 10 |
| State | Published - 2018 |
| Event | 21st International Conference on Artificial Intelligence and Statistics, AISTATS 2018 - Playa Blanca, Lanzarote, Canary Islands, Spain |
| Duration | Apr 9 2018 → Apr 11 2018 |
Conference
| Conference | 21st International Conference on Artificial Intelligence and Statistics, AISTATS 2018 |
| --- | --- |
| Country/Territory | Spain |
| City | Playa Blanca, Lanzarote, Canary Islands |
| Period | 4/9/18 → 4/11/18 |
ASJC Scopus subject areas
- Statistics and Probability
- Artificial Intelligence