Abstract
We present preliminary results on a stochastic adaptive dynamic programming theory for the data-driven, non-model-based design of robust optimal controllers for continuous-time stochastic systems. Both multiplicative and additive noise are considered. Two types of optimal control problems, the discounted problem and the biased problem, are investigated. Reinforcement learning and adaptive dynamic programming techniques are employed to design stochastic adaptive optimal controllers through online successive approximations of the optimal solutions. Rigorous convergence proofs and stability analysis are provided. The effectiveness of the proposed methods is validated on three illustrative practical examples arising from biological motor control and vehicle suspension control.
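The "successive approximations" mentioned above are, in the classical (noise-free, model-known) setting, the policy-iteration scheme of Kleinman: repeatedly evaluate the current linear feedback gain via a Lyapunov equation, then improve the gain from the resulting value matrix. The sketch below illustrates only this underlying iteration on a hypothetical second-order system (the matrices `A`, `B`, `Q`, `R` are invented for illustration and are not taken from the chapter, which develops data-driven, stochastic extensions of this idea):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, solve_continuous_are

# Hypothetical example system (not from the chapter)
A = np.array([[0.0, 1.0],
              [-1.0, -2.0]])   # Hurwitz, so K0 = 0 is stabilizing
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)                  # state cost
R = np.array([[1.0]])          # control cost

K = np.zeros((1, 2))           # initial stabilizing gain

for _ in range(20):
    Ak = A - B @ K
    # Policy evaluation: solve Ak^T P + P Ak + Q + K^T R K = 0
    P = solve_continuous_lyapunov(Ak.T, -(Q + K.T @ R @ K))
    # Policy improvement: K <- R^{-1} B^T P
    K = np.linalg.solve(R, B.T @ P)

# The iterates converge to the stabilizing solution of the Riccati equation
P_are = solve_continuous_are(A, B, Q, R)
print(np.allclose(P, P_are))
```

In the adaptive dynamic programming setting treated in the chapter, the Lyapunov-equation step is replaced by an online, data-driven estimate computed from measured state and input trajectories, so that `A` and `B` need not be known.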
Original language | English (US) |
---|---|
Title of host publication | Control of Complex Systems |
Subtitle of host publication | Theory and Applications |
Publisher | Elsevier Inc. |
Pages | 211-245 |
Number of pages | 35 |
ISBN (Electronic) | 9780128054376 |
ISBN (Print) | 9780128052464 |
DOIs | |
State | Published - Jul 23 2016 |
Keywords
- Dynamic programming
- Robust adaptive dynamic programming (RADP)
- Stochastic optimal control
- Stochastic systems
ASJC Scopus subject areas
- General Engineering