In the context of sequential decision making under uncertainty, the Markov decision process (MDP) is a widely used mathematical framework. MDP-based approaches in the infrastructure management literature can be broadly categorized as either top-down or bottom-up. The former, while efficient in incorporating system-level budget constraints, provide randomized policies, which must be mapped to individual facilities using additional subroutines. Conversely, although state-of-the-art bottom-up approaches provide facility-specific decisions, the disjointed nature of their problem formulation does not account for budget constraints in future years. In this paper, a simultaneous network-level optimization framework is proposed, which seeks to bridge the gap between the top-down and bottom-up MDP-based approaches in infrastructure management. The salient feature of the proposed approach is that it provides facility-specific policies for the current year of decision making while utilizing randomized policies to calculate the expected future costs. Finally, the proposed methodology is compared to a state-of-the-art bottom-up methodology using a parametric study involving varying network sizes.
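To make the MDP setting concrete, the sketch below solves a single-facility, finite-horizon maintenance MDP by backward induction: it yields a deterministic (facility-specific) action for the current year while the cost-to-go function plays the role of the expected future costs. All condition states, transition probabilities, and costs here are hypothetical illustrations, not values from the paper, and the budget-constrained network coupling that is the paper's contribution is deliberately omitted.

```python
import numpy as np

# Illustrative single-facility MDP (hypothetical numbers):
# 3 condition states (0 = good, 2 = poor), 2 actions (0 = do nothing, 1 = repair).
# P[a][s, s'] = transition probability; c[a][s] = immediate cost.
P = np.array([
    [[0.8, 0.2, 0.0],   # do nothing: condition tends to deteriorate
     [0.0, 0.7, 0.3],
     [0.0, 0.0, 1.0]],
    [[1.0, 0.0, 0.0],   # repair: restores condition toward good
     [0.9, 0.1, 0.0],
     [0.8, 0.2, 0.0]],
])
c = np.array([
    [0.0, 2.0, 10.0],   # do nothing: user costs grow with deterioration
    [5.0, 6.0, 8.0],    # repair: agency cost plus residual user cost
])

def backward_induction(P, c, horizon, discount=0.95):
    """Finite-horizon dynamic programming over the planning horizon.

    Returns the first-year action per state (a facility-specific decision)
    and the expected discounted cost-to-go, which stands in for the
    expected future costs in the text above."""
    n_states = P.shape[1]
    V = np.zeros(n_states)           # terminal value: no cost beyond horizon
    policy = np.zeros(n_states, dtype=int)
    for _ in range(horizon):
        Q = c + discount * (P @ V)   # Q[a, s]: cost of action a in state s
        policy = Q.argmin(axis=0)    # best action per state at this stage
        V = Q.min(axis=0)            # cost-to-go used by the preceding stage
    return policy, V

policy, V = backward_induction(P, c, horizon=20)
print("current-year action per condition state:", policy)
print("expected discounted cost-to-go:", V.round(2))
```

With these numbers the recursion recommends doing nothing in the good state and repairing in the poor state; a network-level formulation would additionally couple the per-facility decisions through yearly budget constraints.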
Original language: English (US)
Journal: Journal of Infrastructure Systems
Published: September 1, 2014
Keywords: Markov decision process