Approximate dynamic programming for stochastic systems with additive and multiplicative noise

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

This paper studies the stochastic optimal control problem with additive and multiplicative noise via reinforcement learning (RL) and approximate/adaptive dynamic programming (ADP). Using Itô calculus, a policy iteration algorithm is derived in the presence of both additive and multiplicative noise. It is shown that the expectation of the approximated cost matrix is guaranteed to converge to the solution of a certain algebraic Riccati equation that gives rise to the optimal cost value. Furthermore, the covariance of the approximated cost matrix can be reduced by increasing the length of the time interval between two consecutive iterations. Finally, the efficiency of the proposed ADP methodology is illustrated by a numerical example.
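For orientation, the policy iteration the abstract refers to builds on the classical model-based iteration for linear-quadratic control with state-multiplicative noise: given a mean-square stabilizing gain, solve a generalized Lyapunov equation for the cost matrix, then update the gain, repeating until the iterates satisfy the associated algebraic Riccati equation. The sketch below is a minimal model-based version of that scheme (the paper's ADP method instead estimates these quantities from data); the system dx = (Ax + Bu)dt + Cx dw, the symbols Q, R, K, and the solver via Kronecker vectorization are all assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def pi_stochastic_lqr(A, B, C, Q, R, K0, iters=30):
    """Model-based policy iteration for continuous-time LQR with
    state-multiplicative noise dx = (A x + B u) dt + C x dw.
    (Illustrative sketch; the paper's ADP variant is data-driven.)
    Each step solves the generalized Lyapunov equation
        (A - B K)' P + P (A - B K) + C' P C + Q + K' R K = 0
    by vectorization, then updates K = R^{-1} B' P."""
    n = A.shape[0]
    I = np.eye(n)
    K = K0
    for _ in range(iters):
        Acl = A - B @ K
        Qi = Q + K.T @ R @ K
        # vec((A-BK)'P + P(A-BK) + C'PC) = M vec(P), column-major vec
        M = np.kron(I, Acl.T) + np.kron(Acl.T, I) + np.kron(C.T, C.T)
        P = np.linalg.solve(M, -Qi.reshape(-1, order="F")).reshape(n, n, order="F")
        P = (P + P.T) / 2  # enforce symmetry against round-off
        K = np.linalg.solve(R, B.T @ P)  # policy improvement
    return P, K
```

Under the standard stabilizability assumptions, the expectation argument in the paper ensures the learned cost matrix converges to the fixed point of exactly this iteration, i.e. the stabilizing solution of the generalized algebraic Riccati equation A'P + PA + C'PC + Q - PBR^{-1}B'P = 0.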

Original language: English (US)
Title of host publication: 2011 IEEE International Symposium on Intelligent Control, ISIC 2011
Pages: 185-190
Number of pages: 6
State: Published - 2011
Event: 2011 IEEE International Symposium on Intelligent Control, ISIC 2011 - Denver, CO, United States
Duration: Sep 28 2011 - Sep 30 2011

Publication series

Name: IEEE International Symposium on Intelligent Control - Proceedings


ASJC Scopus subject areas

  • Control and Systems Engineering
  • Modeling and Simulation
  • Computer Science Applications
  • Electrical and Electronic Engineering
