Stochastic Adaptive Dynamic Programming for Robust Optimal Control Design

T. Bian, Z. P. Jiang

Research output: Chapter in Book/Report/Conference proceeding › Chapter

Abstract

We present preliminary results on the development of a stochastic adaptive dynamic programming theory for the data-driven, non-model-based design of robust optimal controllers for continuous-time stochastic systems. Both multiplicative noise and additive noise are considered. Two types of optimal control problems, the discounted problem and the biased problem, are investigated. Reinforcement learning and adaptive dynamic programming techniques are employed to design stochastic adaptive optimal controllers through online successive approximations of optimal solutions. Rigorous convergence proofs along with stability analysis are provided. The effectiveness of the proposed methods is validated by three illustrative practical examples arising from biological motor control and vehicle suspension control.

Original language: English (US)
Title of host publication: Control of Complex Systems
Subtitle of host publication: Theory and Applications
Publisher: Elsevier Inc.
Pages: 211-245
Number of pages: 35
ISBN (Electronic): 9780128054376
ISBN (Print): 9780128052464
DOIs: https://doi.org/10.1016/B978-0-12-805246-4.00007-0
State: Published - Jul 23, 2016

Keywords

  • Dynamic programming
  • Robust adaptive dynamic programming (RADP)
  • Stochastic optimal control
  • Stochastic systems

ASJC Scopus subject areas

  • Engineering (all)


Cite this

Bian, T., & Jiang, Z. P. (2016). Stochastic Adaptive Dynamic Programming for Robust Optimal Control Design. In Control of Complex Systems: Theory and Applications (pp. 211-245). Elsevier Inc. https://doi.org/10.1016/B978-0-12-805246-4.00007-0