Abstract
Markov decision theory is applied to general Markov queueing networks with finite buffer capacity. The existence of optimal dynamic routing policies is proved for both the long-run average and the infinite-horizon discounted cost criteria. With the aid of the subordinated process, a process equivalent to the state process, fast algorithms are derived for locating an optimal routing policy. A numerical example is given, and the application of the theory to computer communication networks is discussed.
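To make the setting concrete, the following is a minimal illustrative sketch, not the paper's subordinated-process algorithm: a hypothetical discounted-cost MDP in which arriving jobs are routed to one of two finite-buffer queues, solved by standard value iteration. All rates, costs, and the loss penalty are invented for illustration.

```python
import numpy as np

# Hypothetical example: route each arriving job to queue 0 or 1.
# Each queue has finite capacity CAP; a job routed to a full queue
# is lost at penalty LOSS. Costs are discounted by BETA per slot.
CAP = 3            # buffer capacity of each queue (assumed)
BETA = 0.9         # discount factor (assumed)
MU = (0.5, 0.3)    # per-slot service-completion probabilities (assumed)
LOSS = 5.0         # penalty for a blocked (lost) job (assumed)

states = [(i, j) for i in range(CAP + 1) for j in range(CAP + 1)]
idx = {s: k for k, s in enumerate(states)}

def holding_cost(s):
    # Holding cost: total number of jobs in the system.
    return s[0] + s[1]

def blocking_cost(s, a):
    # Penalty incurred if the chosen queue is already full.
    return LOSS if s[a] == CAP else 0.0

def transitions(s, a):
    """Admit the arrival to queue a (if room), then each queue may
    complete one service. Returns a list of (prob, next_state)."""
    q = list(s)
    if q[a] < CAP:
        q[a] += 1
    out = []
    for d0 in (0, 1):
        for d1 in (0, 1):
            p = (MU[0] if d0 else 1 - MU[0]) * (MU[1] if d1 else 1 - MU[1])
            ns = (max(q[0] - d0, 0), max(q[1] - d1, 0))
            out.append((p, ns))
    return out

def value_iteration(tol=1e-8):
    # Standard successive approximation of the Bellman equation.
    V = np.zeros(len(states))
    while True:
        Vn = np.empty_like(V)
        for s in states:
            Vn[idx[s]] = min(
                holding_cost(s) + blocking_cost(s, a)
                + BETA * sum(p * V[idx[ns]] for p, ns in transitions(s, a))
                for a in (0, 1))
        if np.max(np.abs(Vn - V)) < tol:
            return Vn
        V = Vn

def greedy_policy(V):
    # The optimal stationary policy is greedy with respect to V.
    return {s: min((0, 1), key=lambda a: blocking_cost(s, a)
                   + sum(p * V[idx[ns]] for p, ns in transitions(s, a)))
            for s in states}

V = value_iteration()
policy = greedy_policy(V)
```

With these invented parameters the greedy policy avoids routing to a full queue and prefers the faster server when both queues are empty; the paper's contribution is that the subordinated-process construction yields much faster algorithms than such brute-force iteration on the full state space.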
Original language | English (US) |
---|---|
Pages (from-to) | 367-370 |
Number of pages | 4 |
Journal | Automatica |
Volume | 22 |
Issue number | 3 |
DOIs | |
State | Published - May 1986 |
Keywords
- Markov processes
- computer communication
- dynamic programming
- queueing control
- traffic control
ASJC Scopus subject areas
- Control and Systems Engineering
- Electrical and Electronic Engineering