TY - GEN
T1 - Towards context-aware learning for control
T2 - 2022 American Control Conference, ACC 2022
AU - Shyamkumar, Nitin
AU - Gugercin, Serkan
AU - Peherstorfer, Benjamin
N1 - Funding Information:
The first and third authors acknowledge partial support from the NSF under Grant No. 2012250. The third author acknowledges support from the AFOSR award FA9550-21-1-0222 (Dr. Fariba Fahroo). The second author acknowledges partial support from the NSF under Grant No. 1819110.
Publisher Copyright:
© 2022 American Automatic Control Council.
PY - 2022
Y1 - 2022
N2 - Classical data-driven control typically follows the learn-then-stabilize scheme, in which a model of the system of interest is first identified from data and a controller is then constructed based on the learned model. However, learning a model from data is challenging because it can incur high training costs and because the model quality depends critically on the available data. In this work, we address how well one needs to learn a model to derive a controller by formalizing the trade-off between learning error and controller performance in the specific setting of robust ℋ∞ control. We propose a bound on the stability radius of a robust controller with respect to the error of the learned model. The proposed analysis suggests that tolerating an increased learning error leads to only a small decrease in the performance objective of the controller. Numerical experiments with systems from aerospace engineering demonstrate that judiciously balancing learning error and control performance can reduce the number of data points by one order of magnitude with less than a 5% decrease in control performance, as measured by the ℋ∞ stability radius.
AB - Classical data-driven control typically follows the learn-then-stabilize scheme, in which a model of the system of interest is first identified from data and a controller is then constructed based on the learned model. However, learning a model from data is challenging because it can incur high training costs and because the model quality depends critically on the available data. In this work, we address how well one needs to learn a model to derive a controller by formalizing the trade-off between learning error and controller performance in the specific setting of robust ℋ∞ control. We propose a bound on the stability radius of a robust controller with respect to the error of the learned model. The proposed analysis suggests that tolerating an increased learning error leads to only a small decrease in the performance objective of the controller. Numerical experiments with systems from aerospace engineering demonstrate that judiciously balancing learning error and control performance can reduce the number of data points by one order of magnitude with less than a 5% decrease in control performance, as measured by the ℋ∞ stability radius.
UR - http://www.scopus.com/inward/record.url?scp=85138489921&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85138489921&partnerID=8YFLogxK
U2 - 10.23919/ACC53348.2022.9867770
DO - 10.23919/ACC53348.2022.9867770
M3 - Conference contribution
AN - SCOPUS:85138489921
T3 - Proceedings of the American Control Conference
SP - 4808
EP - 4813
BT - 2022 American Control Conference, ACC 2022
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 8 June 2022 through 10 June 2022
ER -