Discrete-Time Equivalence for Constrained Semi-Markov Decision Processes

Frederick J. Beutler, Keith W. Ross

    Research output: Contribution to journal › Conference article › peer-review

    Abstract

    A continuous-time average-reward Markov decision process problem is most easily solved in terms of an equivalent discrete-time Markov decision process (DMDP). Customary hypotheses include that the process is a Markov jump process with denumerable state space and bounded transition rates, that actions are chosen at the jump points of the process, and that the policies considered are deterministic. An analogous uniformization result is derived which is applicable to a semi-Markov decision process (SMDP) under a (possibly) randomized stationary policy. For each stationary policy governing an SMDP meeting certain hypotheses, a past-dependent policy on a suitably constructed DMDP is specified. The new policy carries the same average reward on the DMDP as the original policy does on the SMDP. The discrete-time reduction is then applied to optimization on an SMDP subject to a hard constraint, for which the optimal policy has been shown to be stationary and randomized at no more than a single state. Under some convexity conditions on the reward, cost, and action space, it is shown that a nonrandomized policy is optimal for the constrained problem.
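
    For orientation, a standard SMDP-to-DMDP reduction from the literature is Schweitzer's data transformation (shown here as a sketch; the paper's construction for randomized stationary policies may differ). Given embedded transition probabilities $p(j \mid i,a)$, expected one-step reward $r(i,a)$, and expected sojourn time $\tau(i,a)$, the equivalent DMDP has

        $\tilde{r}(i,a) = r(i,a)/\tau(i,a)$,
        $\tilde{p}(j \mid i,a) = \eta\, p(j \mid i,a)/\tau(i,a) \quad (j \neq i)$,
        $\tilde{p}(i \mid i,a) = 1 - \eta\,[1 - p(i \mid i,a)]/\tau(i,a)$,

    where the constant $\eta > 0$ is chosen small enough that $\tilde{p}(i \mid i,a) \geq 0$ for every state-action pair; under suitable recurrence conditions, a stationary policy attains the same long-run average reward in both models.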

    Original language: English (US)
    Pages (from-to): 1122-1123
    Number of pages: 2
    Journal: Proceedings of the IEEE Conference on Decision and Control
    State: Published - 1985

    ASJC Scopus subject areas

    • Control and Systems Engineering
    • Modeling and Simulation
    • Control and Optimization
