Reinforcement Learning for Adaptive Periodic Linear Quadratic Control

Bo Pang, Zhong-Ping Jiang, Iven Mareels

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

This paper presents a first solution to the problem of adaptive LQR for continuous-time linear periodic systems. Specifically, reinforcement learning and adaptive dynamic programming (ADP) techniques are used to develop two algorithms that obtain near-optimal controllers. First, policy iteration (PI) and value iteration (VI) methods are proposed for the case where the model is known. Then, PI-based and VI-based off-policy ADP algorithms are derived to find near-optimal solutions directly from input/state data collected along the system trajectories, without exact knowledge of the system dynamics. The effectiveness of the derived algorithms is validated on the well-known lossy Mathieu equation.
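To illustrate the model-based policy-iteration idea the abstract refers to, the sketch below implements Kleinman-style policy iteration for the simpler *time-invariant* continuous-time LQR problem (the paper itself treats the periodic case, where the gain and the Riccati solution are time-varying; that machinery is not reproduced here). Each iteration evaluates the current stabilizing gain by solving a Lyapunov equation, then improves the gain. The example system and the initial gain are illustrative choices, not taken from the paper.

```python
import numpy as np

def lqr_policy_iteration(A, B, Q, R, K0, n_iter=20):
    """Kleinman-style policy iteration for time-invariant continuous-time LQR.

    Starting from a stabilizing gain K0, each iteration:
      1. Policy evaluation: solve the Lyapunov equation
           (A - B K)^T P + P (A - B K) + Q + K^T R K = 0
      2. Policy improvement: K <- R^{-1} B^T P
    The iterates converge to the stabilizing solution of the
    algebraic Riccati equation and the optimal LQR gain.
    """
    n = A.shape[0]
    K = K0
    for _ in range(n_iter):
        Ak = A - B @ K
        M = Q + K.T @ R @ K
        # Solve the Lyapunov equation via vectorization:
        # (I (x) Ak^T + Ak^T (x) I) vec(P) = -vec(M)
        L = np.kron(np.eye(n), Ak.T) + np.kron(Ak.T, np.eye(n))
        vecP = np.linalg.solve(L, -M.reshape(-1, order="F"))
        P = vecP.reshape(n, n, order="F")
        K = np.linalg.solve(R, B.T @ P)  # policy improvement step
    return P, K

# Illustrative second-order system (A is Hurwitz, so K0 = 0 is stabilizing).
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])
P, K = lqr_policy_iteration(A, B, Q, R, K0=np.zeros((1, 2)))

# P should (approximately) satisfy the algebraic Riccati equation:
riccati_residual = A.T @ P + P @ A - P @ B @ np.linalg.solve(R, B.T) @ P + Q
```

The ADP algorithms in the paper replace the model-based Lyapunov step above with estimates built from input/state data collected along trajectories, which is what removes the need for exact knowledge of the dynamics.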

Original language: English (US)
Title of host publication: 2019 IEEE 58th Conference on Decision and Control, CDC 2019
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 3322-3327
Number of pages: 6
ISBN (Electronic): 9781728113982
DOIs
State: Published - Dec 2019
Event: 58th IEEE Conference on Decision and Control, CDC 2019 - Nice, France
Duration: Dec 11, 2019 - Dec 13, 2019

Publication series

Name: Proceedings of the IEEE Conference on Decision and Control
Volume: 2019-December
ISSN (Print): 0743-1546
ISSN (Electronic): 2576-2370

Conference

Conference: 58th IEEE Conference on Decision and Control, CDC 2019
Country/Territory: France
City: Nice
Period: 12/11/19 - 12/13/19

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Modeling and Simulation
  • Control and Optimization
