TY - JOUR
T1 - Discrete-time equivalence for constrained semi-Markov decision processes
AU - Beutler, Frederick J.
AU - Ross, Keith W.
PY - 1985
Y1 - 1985
AB - A continuous-time average-reward Markov-decision-process problem is most easily solved in terms of an equivalent discrete-time Markov decision process (DMDP). Customary hypotheses include that the process is a Markov jump process with denumerable state space and bounded transition rates, that actions are chosen at the jump points of the process, and that the policies considered are deterministic. An analogous uniformization result is derived that is applicable to a semi-Markov decision process (SMDP) under a (possibly) randomized stationary policy. For each stationary policy governing an SMDP meeting certain hypotheses, a past-dependent policy on a suitably constructed DMDP is specified. The new policy carries the same average reward on the DMDP as the original policy on the SMDP. The discrete-time reduction is applied to optimization of an SMDP subject to a hard constraint, for which the optimal policy has been shown to be stationary and randomized at no more than a single state. Under some convexity conditions on the reward, cost, and action space, it is shown that a nonrandomized policy is optimal for the constrained problem.
UR - http://www.scopus.com/inward/record.url?scp=0022290735&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=0022290735&partnerID=8YFLogxK
U2 - 10.1109/cdc.1985.268676
DO - 10.1109/cdc.1985.268676
M3 - Conference article
AN - SCOPUS:0022290735
SN - 0191-2216
SP - 1122
EP - 1123
JO - Proceedings of the IEEE Conference on Decision and Control
JF - Proceedings of the IEEE Conference on Decision and Control
ER -