Abstract
Deep neural networks, when optimized with sufficient data, provide accurate representations of high-dimensional functions; in contrast, the function-approximation techniques that have predominated in scientific computing do not scale well with dimensionality. As a result, many high-dimensional sampling and approximation problems once thought intractable are being revisited through the lens of machine learning. While the promise of unparalleled accuracy may suggest a renaissance for applications that require parameterized representations of complex systems, gathering sufficient data to build such a representation remains a significant challenge in many settings. Here we introduce an approach that combines rare-event sampling techniques with neural network training to optimize objective functions that are dominated by rare events. We show that importance sampling reduces the asymptotic variance of the solution to a learning problem, suggesting benefits for generalization. We study our algorithm in the context of solving high-dimensional PDEs that admit a variational formulation, a problem with applications in statistical physics and implications for machine learning theory. Our numerical experiments demonstrate that we can learn successfully even with the compounding difficulties of high dimension and rare data.
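To illustrate the core idea the abstract invokes, the sketch below shows how importance sampling can tame a Monte Carlo estimate of an expectation dominated by rare events, the same difficulty that arises when a training loss receives most of its value from rarely sampled configurations. This is a minimal, self-contained illustration under assumed toy choices, not the paper's algorithm: the observable `f`, the target `target_logpdf`, and the shifted proposal `proposal_logpdf` are hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    # Rare-event observable: nonzero only deep in the tail of the target.
    return (x > 4.0).astype(float)

def target_logpdf(x):
    # Log density of the target p(x): a standard normal.
    return -0.5 * x**2 - 0.5 * np.log(2 * np.pi)

def proposal_logpdf(x, mu):
    # Log density of a unit-variance Gaussian proposal q(x) centered at mu.
    return -0.5 * (x - mu)**2 - 0.5 * np.log(2 * np.pi)

n = 100_000
mu = 4.0  # bias the proposal toward the rare region

# Naive Monte Carlo under p: almost every sample misses the rare event,
# so the estimator has enormous relative variance.
x_p = rng.normal(0.0, 1.0, n)
naive = f(x_p).mean()

# Importance sampling under q, corrected by likelihood-ratio weights
# w = p(x) / q(x), which keeps the estimator unbiased.
x_q = rng.normal(mu, 1.0, n)
w = np.exp(target_logpdf(x_q) - proposal_logpdf(x_q, mu))
is_est = (w * f(x_q)).mean()

# Exact tail probability P(X > 4) for a standard normal is about 3.17e-05.
print(f"naive MC estimate: {naive:.2e}")
print(f"IS estimate:       {is_est:.2e}")
```

Shifting the proposal toward the rare region places most samples where the integrand is nonzero, while the weights preserve unbiasedness; the resulting drop in variance is the mechanism the abstract credits for improving the learning objective.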
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 757-780 |
| Number of pages | 24 |
| Journal | Proceedings of Machine Learning Research |
| Volume | 145 |
| State | Published - 2021 |
| Event | 2nd Mathematical and Scientific Machine Learning Conference, MSML 2021 - Virtual, Online |
| Duration | Aug 16 2021 → Aug 19 2021 |
Keywords
- Backward Kolmogorov Equation
- Importance Sampling
- Partial Differential Equations
- Rare Events
- Variational Monte Carlo
ASJC Scopus subject areas
- Artificial Intelligence
- Software
- Control and Systems Engineering
- Statistics and Probability