Towards context-aware learning for control: Balancing stability and model-learning error

Nitin Shyamkumar, Serkan Gugercin, Benjamin Peherstorfer

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Classical data-driven control typically follows a learn-then-stabilize scheme: first a model of the system of interest is identified from data, and then a controller is constructed based on the learned model. However, learning a model from data is challenging because it can incur high training costs and the model quality depends critically on the available data. In this work, we address how well one needs to learn a model to derive a controller by formalizing the trade-off between learning error and controller performance in the specific setting of robust ℋ∞ control. We propose a bound on the stability radius of a robust controller with respect to the error of the learned model. The proposed analysis suggests that tolerating an increased learning error leads to only a small decrease in the performance objective of the controller. Numerical experiments with systems from aerospace engineering demonstrate that judiciously balancing learning error and control performance can reduce the number of data points by an order of magnitude with less than a 5% decrease in control performance, as measured by the ℋ∞ stability radius.
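The trade-off described above hinges on how the stability radius reacts to perturbations of the model. As a minimal numerical sketch (not the paper's method; the matrices and frequency grid below are hypothetical), one can estimate the unstructured complex stability radius of a Hurwitz matrix, r(A) = min over ω of σ_min(iωI − A), and observe that it changes by at most the spectral norm of the model-learning error:

```python
import numpy as np

def complex_stability_radius(A, omegas):
    """Estimate the unstructured complex stability radius of a Hurwitz
    matrix A: r(A) = min over real omega of sigma_min(i*omega*I - A).
    Evaluated on a finite frequency grid, so this is an approximation."""
    n = A.shape[0]
    I = np.eye(n)
    # np.linalg.svd returns singular values in descending order; [-1] is sigma_min.
    return min(np.linalg.svd(1j * w * I - A, compute_uv=False)[-1] for w in omegas)

# Hypothetical "true" stable system and a perturbed "learned" model.
A_true = np.array([[-1.0, 2.0],
                   [0.0, -3.0]])
E = np.array([[0.05, -0.02],
              [0.01,  0.03]])          # model-learning error
A_learned = A_true + E

grid = np.linspace(-50.0, 50.0, 20001)
r_true = complex_stability_radius(A_true, grid)
r_learned = complex_stability_radius(A_learned, grid)

# Singular values are 1-Lipschitz in the spectral norm (Weyl's inequality),
# so the two radii differ by at most ||E||_2: a small learning error can
# shrink the guaranteed stability margin only by a correspondingly small amount.
assert abs(r_true - r_learned) <= np.linalg.norm(E, 2) + 1e-9
```

This pointwise Lipschitz bound is the elementary mechanism behind relating learning error to a loss in stability margin; the paper's bound is specific to the robust ℋ∞ controller setting.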

Original language: English (US)
Title of host publication: 2022 American Control Conference, ACC 2022
Publisher: Institute of Electrical and Electronics Engineers Inc.
Number of pages: 6
ISBN (Electronic): 9781665451963
State: Published - 2022
Event: 2022 American Control Conference, ACC 2022 - Atlanta, United States
Duration: Jun 8, 2022 - Jun 10, 2022

Publication series

Name: Proceedings of the American Control Conference
ISSN (Print): 0743-1619


Conference: 2022 American Control Conference, ACC 2022
Country/Territory: United States

ASJC Scopus subject areas

  • Electrical and Electronic Engineering


