TY - GEN
T1 - Quasi-Optimal Sampling to Learn Basis Updates for Online Adaptive Model Reduction with Adaptive Empirical Interpolation
AU - Cortinovis, Alice
AU - Kressner, Daniel
AU - Massei, Stefano
AU - Peherstorfer, Benjamin
N1 - Funding Information:
The work of the first and third authors was supported by the SNSF research project "Fast algorithms from low-rank updates," grant number 200020_178806. The fourth author was partially supported by the Air Force Center of Excellence on Multi-Fidelity Modeling of Rocket Combustor Dynamics, Award Number FA9550-17-1-0195, and by the AFOSR MURI on multi-information sources of multi-physics systems, Award Number FA9550-15-1-0038. The numerical experiments were computed with support from NYU IT High Performance Computing resources, services, and staff expertise.
Publisher Copyright:
© 2020 AACC.
PY - 2020/7
Y1 - 2020/7
AB - Traditional model reduction derives reduced models from large-scale systems in a one-time, computationally expensive offline (training) phase and then evaluates the reduced models in an online phase to rapidly predict system outputs. However, this offline/online splitting means that reduced models can be expected to faithfully predict outputs only for system behavior that was incorporated into them during the offline phase. This work considers model reduction with the online adaptive empirical interpolation method (AADEIM), which adapts reduced models in the online phase to system behavior that was not anticipated in the offline phase by deriving updates from a few samples of the states of the large-scale system. The contribution of this work is an analysis of the AADEIM sampling strategy for deciding which parts of the large-scale states to sample to learn reduced-model updates. The analysis shows that the AADEIM sampling strategy is optimal up to a factor of 2. Numerical results demonstrate the theoretical findings by comparing the quasi-optimal AADEIM sampling strategy to other sampling strategies on various examples.
UR - http://www.scopus.com/inward/record.url?scp=85089592771&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85089592771&partnerID=8YFLogxK
U2 - 10.23919/ACC45564.2020.9147832
DO - 10.23919/ACC45564.2020.9147832
M3 - Conference contribution
AN - SCOPUS:85089592771
T3 - Proceedings of the American Control Conference
SP - 2472
EP - 2477
BT - 2020 American Control Conference, ACC 2020
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2020 American Control Conference, ACC 2020
Y2 - 1 July 2020 through 3 July 2020
ER -