Value iteration and adaptive dynamic programming for data-driven adaptive optimal control design

Research output: Contribution to journal › Article


This paper presents a novel non-model-based, data-driven adaptive optimal controller design for linear continuous-time systems with completely unknown dynamics. Inspired by stochastic approximation theory, a continuous-time version of the traditional value iteration (VI) algorithm is presented with a rigorous convergence analysis. This VI method is crucial for developing new adaptive dynamic programming methods that solve the adaptive optimal control problem and the stochastic robust optimal control problem for linear continuous-time systems. In fundamental contrast to existing results, a priori knowledge of an initial admissible control policy is no longer required. The efficacy of the proposed methodology is illustrated by two examples and a brief comparative study between VI and earlier policy-iteration methods.
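The flavor of value iteration for continuous-time linear-quadratic control can be sketched on a scalar toy problem. This is an illustrative assumption, not the paper's data-driven algorithm: the constants `a`, `b`, `q`, `r` and the step size `eps` are made up, and the sketch assumes the model is known, whereas the paper's method is model-free. It does mirror one point from the abstract: the iteration starts from P = 0, with no initial admissible (stabilizing) policy.

```python
# Toy scalar LQR: dx/dt = a*x + b*u, cost = integral of (q*x**2 + r*u**2) dt.
# The algebraic Riccati equation for this problem is
#   2*a*P + q - (b*P)**2 / r = 0.
# All constants below are illustrative assumptions, not from the paper.
a, b, q, r = 1.0, 1.0, 1.0, 1.0

# Value iteration: repeatedly push P in the direction of the Riccati residual.
# Starting from P = 0 means no stabilizing initial policy is required,
# unlike policy-iteration schemes.
P = 0.0
eps = 0.01  # small constant step size (a stochastic-approximation-style
            # diminishing step would also work)
for _ in range(5000):
    P += eps * (2 * a * P + q - (b * P) ** 2 / r)

K = b * P / r  # resulting state-feedback gain, u = -K*x
print(P, K)    # P converges to the ARE root 1 + sqrt(2) ≈ 2.4142
```

The update converges because the Riccati residual acts as a contraction near the stabilizing root; the closed-loop pole `a - b*K` ends up in the left half-plane.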

Original language: English (US)
Pages (from-to): 348-360
Number of pages: 13
State: Published - Sep 1 2016



Keywords

  • Adaptive control
  • Adaptive dynamic programming
  • Optimal control
  • Stochastic approximation
  • Value iteration

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Electrical and Electronic Engineering
