H∞ control of linear discrete-time systems: Off-policy reinforcement learning

Bahare Kiumarsi, Frank L. Lewis, Zhong Ping Jiang

Research output: Contribution to journal › Article › peer-review

Abstract

In this paper, a model-free solution to the H∞ control of linear discrete-time systems is presented. The proposed approach employs off-policy reinforcement learning (RL) to solve the game algebraic Riccati equation online, using measured data along the system trajectories. As in existing model-free RL algorithms, no knowledge of the system dynamics is required. However, the proposed method has two main advantages. First, the disturbance input does not need to be adjusted in a specific manner. This makes the method more practical, since the disturbance cannot be specified in most real-world applications. Second, no bias results from adding a probing noise to the control input to maintain the persistence of excitation (PE) condition. Consequently, the convergence of the proposed algorithm is not affected by probing noise. An example of H∞ control for an F-16 aircraft is given, showing that the convergence of the new off-policy RL algorithm is insensitive to probing noise.
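For context, the game algebraic Riccati equation (GARE) mentioned in the abstract is the fixed point of a zero-sum-game value iteration. Below is a minimal model-based sketch of that iteration, written as an illustration rather than the paper's algorithm: it assumes known dynamics x_{k+1} = A x_k + B u_k + E w_k and a game cost of the form x'Qx + u'Ru - γ²w'w, whereas the paper's contribution is solving the same equation model-free, by off-policy RL, from measured trajectory data. The function name gare_value_iteration and all variable names are hypothetical.

    import numpy as np

    def gare_value_iteration(A, B, E, Q, R, gamma, iters=1000, tol=1e-10):
        """Value iteration on the discrete-time game algebraic Riccati equation.

        Dynamics:  x_{k+1} = A x_k + B u_k + E w_k
        Game cost: sum_k x'Qx + u'Ru - gamma^2 w'w (u minimizes, w maximizes).
        """
        n, m, q = A.shape[0], B.shape[1], E.shape[1]
        P = np.zeros((n, n))
        for _ in range(iters):
            # Quadratic kernel of the Bellman equation in the joint input (u, w);
            # the (w, w) block must be negative definite, i.e. gamma must be
            # large enough that E'PE - gamma^2 I < 0.
            K = np.block([[R + B.T @ P @ B, B.T @ P @ E],
                          [E.T @ P @ B, E.T @ P @ E - gamma**2 * np.eye(q)]])
            L = np.vstack([B.T @ P @ A, E.T @ P @ A])
            P_new = Q + A.T @ P @ A - L.T @ np.linalg.solve(K, L)
            if np.linalg.norm(P_new - P) < tol:
                P = P_new
                break
            P = P_new
        # Saddle-point gains at the converged P: u_k = -Ku x_k, w_k = -Kw x_k.
        K = np.block([[R + B.T @ P @ B, B.T @ P @ E],
                      [E.T @ P @ B, E.T @ P @ E - gamma**2 * np.eye(q)]])
        L = np.vstack([B.T @ P @ A, E.T @ P @ A])
        G = np.linalg.solve(K, L)
        return P, G[:m], G[m:]

Under standard assumptions (stabilizable dynamics and γ above the system's H∞ attenuation bound), this iteration converges to the stabilizing GARE solution P; the off-policy RL algorithm in the paper reaches the same solution without using A, B, or E.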

Original language: English (US)
Pages (from-to): 144-152
Number of pages: 9
Journal: Automatica
Volume: 78
State: Published - Apr 1 2017

Keywords

  • H∞ control
  • Off-policy reinforcement learning
  • Optimal control

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Electrical and Electronic Engineering
