Randomized and past-dependent policies for Markov decision processes with multiple constraints

Keith W. Ross

    Research output: Contribution to journal › Article › peer-review

    Abstract

    The Markov decision problem of finding a policy that maximizes the long-run average reward subject to K long-run average cost constraints is considered. It is assumed that the state and action spaces are finite and that the law of motion is unichain, that is, every pure policy gives rise to a Markov chain with one recurrent class. It is first proved that there exists an optimal stationary policy with a degree of randomization no greater than K; consequently, it is never necessary to randomize in more than K states. A linear program produces the optimal policy with limited randomization. For the special case of a single constraint, we also address the problem of finding optimal nonrandomized, but nonstationary, policies. We show that a round-robin-type policy is optimal, and conjecture the same for a steering policy that depends on the entire past history of the process, but whose implementation requires essentially no more storage than that of a pure policy.
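
    For context, here is a minimal sketch of the standard linear program behind such constrained MDPs, in notation assumed here rather than taken from the paper (finite state space S, action space A, transition law P, reward r, costs c_1, ..., c_K with bounds b_1, ..., b_K; the variables x(s,a) are stationary state-action frequencies):

    \begin{aligned}
    \max_{x \ge 0}\quad & \sum_{s \in S} \sum_{a \in A} r(s,a)\, x(s,a) \\
    \text{subject to}\quad & \sum_{s \in S} \sum_{a \in A} c_k(s,a)\, x(s,a) \le b_k, \qquad k = 1, \dots, K, \\
    & \sum_{a \in A} x(s',a) = \sum_{s \in S} \sum_{a \in A} P(s' \mid s,a)\, x(s,a), \qquad s' \in S, \\
    & \sum_{s \in S} \sum_{a \in A} x(s,a) = 1.
    \end{aligned}

    A stationary policy is recovered via \pi(a \mid s) = x(s,a) / \sum_{a'} x(s,a') wherever the denominator is positive. This sketch also suggests the standard counting route to the randomization bound: beyond nonnegativity the program has at most |S| + K binding constraints (the |S| flow-balance equations contain one redundancy, which the normalization replaces), so a basic optimal solution has at most |S| + K positive entries; in the unichain case every recurrent state carries at least one positive entry, leaving at most K states with more than one positive action.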

    Original language: English (US)
    Pages (from-to): 474-477
    Number of pages: 4
    Journal: Operations Research
    Volume: 37
    Issue number: 3
    State: Published - 1989

    ASJC Scopus subject areas

    • Computer Science Applications
    • Management Science and Operations Research
