Abstract
This monograph presents a new framework for learning-based control synthesis of continuous-time dynamical systems with unknown dynamics. The design paradigm proposed here is fundamentally different from that of traditional control theory. In the classical paradigm, controllers are designed for a given class of dynamical control systems; it is a model-based design. Under the learning-based control framework, controllers are learned online from real-time input-output data collected along the trajectories of the control system in question. An entanglement of techniques from reinforcement learning and model-based control theory is advocated to find a sequence of suboptimal controllers that converge to the optimal solution as the number of learning steps increases. On the one hand, this learning-based design approach aims to overcome the well-known "curse of dimensionality" and the "curse of modeling" associated with Bellman's Dynamic Programming. On the other hand, rigorous stability and robustness analysis can be derived for the closed-loop system with real-time learning-based controllers. The effectiveness of the proposed learning-based control framework is demonstrated via its applications to theoretical optimal control problems tied to various important classes of continuous-time dynamical systems, as well as to practical problems arising from biological motor control and connected and autonomous vehicles.
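The sequence of suboptimal controllers mentioned above is typically generated by a policy-iteration-style procedure: each candidate policy is evaluated and then improved, and the iterates converge to the optimal controller. Below is a minimal sketch of this backbone for the continuous-time linear-quadratic case (Kleinman's iteration), using a model-based Lyapunov-equation solve for the policy-evaluation step; in the data-driven setting advocated in the monograph, that step would instead be carried out from measured trajectory data. The toy system, initial gain, and function name here are illustrative assumptions, not taken from the text.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, solve_continuous_are


def kleinman_policy_iteration(A, B, Q, R, K0, tol=1e-9, max_iter=50):
    """Policy iteration for continuous-time LQR (illustrative sketch).

    Starting from a stabilizing gain K0, each iteration evaluates the
    current policy via a Lyapunov equation and then improves it, yielding
    a sequence of suboptimal gains that converges to the optimal LQR gain.
    """
    K = K0
    for _ in range(max_iter):
        Ak = A - B @ K
        # Policy evaluation: (A - B K)^T P + P (A - B K) = -(Q + K^T R K)
        P = solve_continuous_lyapunov(Ak.T, -(Q + K.T @ R @ K))
        # Policy improvement: K_new = R^{-1} B^T P
        K_new = np.linalg.solve(R, B.T @ P)
        if np.linalg.norm(K_new - K) < tol:
            return K_new, P
        K = K_new
    return K, P


# Toy example: double integrator with a stabilizing (non-optimal) initial gain.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])
K0 = np.array([[1.0, 1.0]])  # closed-loop poles of A - B K0 lie in the left half-plane

K, P = kleinman_policy_iteration(A, B, Q, R, K0)
P_are = solve_continuous_are(A, B, Q, R)  # reference solution of the Riccati equation
print(np.allclose(P, P_are, atol=1e-6))   # True: the iterates reach the optimal solution
```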
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 176-284 |
| Number of pages | 109 |
| Journal | Foundations and Trends in Systems and Control |
| Volume | 8 |
| Issue number | 3 |
| DOIs | |
| State | Published - Dec 8 2020 |
ASJC Scopus subject areas
- Control and Systems Engineering
- Control and Optimization