Optimal policies for controlled Markov chains with a constraint

Frederick J. Beutler, Keith W. Ross

    Research output: Contribution to journal › Article › peer-review

    Abstract

    The time-average reward for a discrete-time controlled Markov process subject to a time-average cost constraint is maximized over the class of all causal policies. At each epoch, a reward depending on the state and action is earned, and a similarly constituted cost is assessed; the time average of the former is maximized, subject to a hard limit on the time average of the latter. It is assumed that the state space is finite and the action space is a compact metric space. An accessibility hypothesis makes it possible to utilize a Lagrange multiplier formulation involving the dynamic programming equation, thus reducing the constrained optimization to an unconstrained optimization parametrized by the multiplier. The parametrized dynamic programming equation possesses compactness and convergence properties that lead to the following: if the constraint can be satisfied by any causal policy, the supremum of the time-average reward over all causal policies is attained by either a simple or a mixed policy; the latter is equivalent to choosing independently at each epoch between two specified simple policies by the throw of a biased coin.
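
    A minimal sketch of the formulation described above, in illustrative notation that is not taken from the paper (the reward r, cost c, constraint level \alpha, multiplier \lambda, and coin bias q are labels chosen here): the constrained problem is

        \sup_{\pi}\ \liminf_{N \to \infty} \frac{1}{N}\, \mathbb{E}_{\pi}\!\left[ \sum_{n=0}^{N-1} r(x_n, a_n) \right]
        \quad \text{subject to} \quad
        \limsup_{N \to \infty} \frac{1}{N}\, \mathbb{E}_{\pi}\!\left[ \sum_{n=0}^{N-1} c(x_n, a_n) \right] \le \alpha,

    and the Lagrange multiplier device replaces the reward by r_\lambda(x, a) = r(x, a) - \lambda c(x, a) for \lambda \ge 0, so that the unconstrained average-reward dynamic programming equation takes the familiar form

        g(\lambda) + v_\lambda(x) = \max_{a \in A} \left[ r(x, a) - \lambda c(x, a) + \sum_{y \in S} p(y \mid x, a)\, v_\lambda(y) \right], \qquad x \in S.

    A mixed policy in the sense of the abstract then selects, independently at each epoch, simple policy \pi_1 with probability q and simple policy \pi_2 with probability 1 - q.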

    Original language: English (US)
    Pages (from-to): 236-252
    Number of pages: 17
    Journal: Journal of Mathematical Analysis and Applications
    Volume: 112
    Issue number: 1
    DOIs
    State: Published - Nov 15 1985

    ASJC Scopus subject areas

    • Analysis
    • Applied Mathematics
