Abstract
This brief studies the stochastic optimal control problem via reinforcement learning and approximate/adaptive dynamic programming (ADP). A policy iteration algorithm is derived in the presence of both additive and multiplicative noise using Itô calculus. The expectation of the approximated cost matrix is guaranteed to converge to the solution of an algebraic Riccati equation that yields the optimal cost value. Moreover, the covariance of the approximated cost matrix can be reduced by increasing the length of the time interval between two consecutive iterations. Finally, a numerical example is given to illustrate the efficiency of the proposed ADP methodology.
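To make the policy iteration idea concrete, the sketch below shows the classical noise-free (Kleinman-type) iteration for the linear-quadratic problem with a known model: repeated policy evaluation via a Lyapunov equation followed by policy improvement, converging to the algebraic Riccati equation solution. This is only an illustrative analogue of the procedure the abstract describes, not the paper's method; the paper's ADP algorithm additionally handles additive and multiplicative (control-dependent) noise and estimates the cost matrix from data. The system matrices `A`, `B`, `Q`, `R` below are made-up examples.

```python
# Minimal sketch of Kleinman-style policy iteration for LQR (noise-free
# analogue of the stochastic ADP algorithm described in the abstract).
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Illustrative stabilizable system dx = (A x + B u) dt (assumed, not from the paper)
A = np.array([[0.0, 1.0], [-1.0, -2.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)          # state weighting
R = np.eye(1)          # control weighting

K = np.zeros((1, 2))   # initial stabilizing gain (A itself is Hurwitz here)
for k in range(20):
    Ak = A - B @ K
    # Policy evaluation: solve the Lyapunov equation
    #   Ak^T P + P Ak + Q + K^T R K = 0
    P = solve_continuous_lyapunov(Ak.T, -(Q + K.T @ R @ K))
    # Policy improvement: K <- R^{-1} B^T P
    K_new = np.linalg.solve(R, B.T @ P)
    if np.linalg.norm(K_new - K) < 1e-10:
        break
    K = K_new

print("Approximate cost matrix P:\n", P)
print("Approximate optimal gain K:\n", K)
```

Each iteration only requires solving a linear (Lyapunov) equation rather than the nonlinear Riccati equation directly; in the stochastic, data-driven setting of the paper, the evaluation step is replaced by estimates whose expectation converges to the Riccati solution.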
| Field | Value |
|---|---|
| Original language | English (US) |
| Article number | 6026952 |
| Pages (from-to) | 2392-2398 |
| Number of pages | 7 |
| Journal | IEEE Transactions on Neural Networks |
| Volume | 22 |
| Issue number | 12, Part 2 |
| DOIs | |
| State | Published - Dec 2011 |
Keywords
- Approximate dynamic programming
- control-dependent noise
- optimal stationary control
- stochastic systems
ASJC Scopus subject areas
- Software
- Computer Science Applications
- Computer Networks and Communications
- Artificial Intelligence