Abstract
The Markov decision problem of finding a policy to maximize the long-run average reward subject to K long-run average cost constraints is considered. It is assumed that the state and action spaces are finite and the law of motion is unichain, that is, every pure policy gives rise to a Markov chain with one recurrent class. It is first proved that there exists an optimal stationary policy with a degree of randomization no greater than K; consequently, it is never necessary to randomize in more than K states. A linear program produces the optimal policy with limited randomization. For the special case of a single constraint, we also address the problem of finding optimal nonrandomized, but nonstationary, policies. We show that a round-robin-type policy is optimal, and conjecture the same for a steering policy that depends on the entire past history of the process, but whose implementation requires essentially no more storage than that of a pure policy.
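The linear program mentioned in the abstract can be sketched in the standard occupancy-measure form for unichain constrained MDPs; the paper's exact formulation and notation may differ. Here x(s,a) denotes the long-run state-action frequencies, r the reward, c_k the k-th cost with bound α_k, and P the transition law (all symbols are our notation, not necessarily the paper's):

```latex
% Occupancy-measure LP for a unichain constrained MDP (standard form):
% maximize average reward over state-action frequencies x(s,a)
\begin{align*}
\max_{x \ge 0} \quad & \sum_{s,a} r(s,a)\, x(s,a) \\
\text{s.t.} \quad & \sum_{s,a} c_k(s,a)\, x(s,a) \le \alpha_k,
    && k = 1,\dots,K, \\
& \sum_{a} x(s',a) = \sum_{s,a} P(s' \mid s,a)\, x(s,a),
    && \text{for all states } s', \\
& \sum_{s,a} x(s,a) = 1.
\end{align*}
```

This sketch also illustrates the degree-of-randomization bound: the equality constraints contribute |S| independent rows (one balance equation is redundant with the normalization), so a basic optimal solution has at most |S| + K positive variables, and the induced stationary policy π(a|s) = x(s,a) / Σ_{a'} x(s,a') randomizes in at most K states.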
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 474-477 |
| Number of pages | 4 |
| Journal | Operations Research |
| Volume | 37 |
| Issue number | 3 |
| DOIs | |
| State | Published - 1989 |
ASJC Scopus subject areas
- Computer Science Applications
- Management Science and Operations Research