Neural Galerkin schemes for sequential-in-time solving of partial differential equations with deep networks

Jules Berman, Paul Schwerdtner, Benjamin Peherstorfer

Research output: Contribution to journal › Article › peer-review

Abstract

This survey discusses Neural Galerkin schemes that leverage nonlinear parametrizations such as deep networks to numerically solve time-dependent partial differential equations (PDEs) in a variational sense. Neural Galerkin schemes build on the Dirac-Frenkel variational principle to train networks by minimizing the residual sequentially over time, in contrast to many other methods that approximate PDE solution fields with deep networks globally in time. Because of the sequential-in-time training, Neural Galerkin solutions inherently respect causality and approximate solution fields locally in time, so that often fewer parameters are required than global-in-time methods need. Additionally, the sequential-in-time training enables adaptively sampling the spatial domain to efficiently evaluate the residual objectives over time, which is key for numerically realizing the expressive power of deep networks and other nonlinear parametrizations in high dimensions and when solution features are local, such as wave fronts.
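The sequential-in-time training described above can be illustrated with a minimal sketch: at each time step, the Dirac-Frenkel principle leads to a least-squares problem for the parameter velocity, which is then integrated with an explicit time stepper. The sketch below is an illustrative assumption, not the paper's implementation: it replaces the deep network with a three-parameter Gaussian ansatz, uses finite differences for all derivatives, and solves the linear advection equation u_t = -c u_x; all function names are hypothetical.

```python
import numpy as np

# Illustrative ansatz u(x; theta) = a * exp(-(x - b)^2 / w), standing in
# for a deep network; theta = (amplitude a, center b, width w).
def ansatz(theta, x):
    a, b, w = theta
    return a * np.exp(-(x - b) ** 2 / w)

def jacobian(theta, x, eps=1e-6):
    # Central finite-difference Jacobian d u / d theta at the sample points.
    J = np.empty((x.size, theta.size))
    for i in range(theta.size):
        tp = theta.copy(); tp[i] += eps
        tm = theta.copy(); tm[i] -= eps
        J[:, i] = (ansatz(tp, x) - ansatz(tm, x)) / (2 * eps)
    return J

def rhs_advection(theta, x, c=1.0, eps=1e-6):
    # PDE right-hand side f(u) = -c * u_x, via finite differences in x.
    ux = (ansatz(theta, x + eps) - ansatz(theta, x - eps)) / (2 * eps)
    return -c * ux

def neural_galerkin_step(theta, x, dt, c=1.0):
    # Dirac-Frenkel least-squares problem min_eta || J eta - f ||^2 for the
    # parameter velocity, followed by an explicit Euler step in theta.
    J = jacobian(theta, x)
    f = rhs_advection(theta, x, c)
    eta, *_ = np.linalg.lstsq(J, f, rcond=None)
    return theta + dt * eta

# Static uniform samples of the spatial domain; the survey emphasizes
# adaptive sampling that follows local features such as wave fronts.
x = np.linspace(-5.0, 5.0, 200)
theta = np.array([1.0, -2.0, 0.5])  # amplitude, center, width
dt, c = 1e-3, 1.0
for _ in range(1000):  # integrate to time T = 1
    theta = neural_galerkin_step(theta, x, dt, c)
```

For pure advection the Gaussian translates with speed c, so after T = 1 the center parameter should move from -2 to approximately -1 while amplitude and width stay nearly constant; the least-squares solve recovers this because the center-derivative column of the Jacobian matches -u_x exactly.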

Original language: English (US)
Journal: Handbook of Numerical Analysis
DOIs: Yes
State: Accepted/In press - 2024

Keywords

  • Curse of dimensionality
  • Deep networks
  • Dirac-Frenkel variational principle
  • Kolmogorov n-width
  • Model reduction
  • Time-dependent partial differential equations

ASJC Scopus subject areas

  • Numerical Analysis
  • Modeling and Simulation
  • Computational Mathematics
  • Applied Mathematics
