This paper investigates a class of optimal control problems for Markov processes with local state information. The decision-maker has access only to a subset of the state vector, a situation often encountered in decentralized control of multiagent systems. Under this information structure, part of the state vector cannot be observed. Starting from first principles, we derive a new form of Bellman equation that characterizes the optimal policies of the control problem under local information structures. The dynamic programming solutions feature a mixture of the dynamics associated with the unobservable state components and a local state-feedback policy based on the observable local information. We further characterize the optimal local state-feedback policy using linear programming methods. To reduce the computational complexity of the optimal policy, we propose an approximate algorithm based on virtual beliefs to find a suboptimal policy. We establish performance bounds on the suboptimal solution and corroborate the results with numerical case studies.
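The linear programming characterization of optimal policies mentioned above can be illustrated on a fully observed toy MDP. The sketch below is the standard value-function LP for a discounted MDP solved with SciPy, not the paper's local-information formulation; the transition matrices, rewards, and discount factor are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical toy MDP (illustrative, not from the paper): 2 states, 2 actions.
# P[a][s, s'] = transition probability under action a; r[s, a] = reward.
gamma = 0.9
P = np.array([
    [[0.8, 0.2], [0.3, 0.7]],   # action 0
    [[0.5, 0.5], [0.9, 0.1]],   # action 1
])
r = np.array([
    [1.0, 0.0],   # rewards in state 0 for actions 0, 1
    [0.0, 2.0],   # rewards in state 1 for actions 0, 1
])
n_states, n_actions = r.shape

# LP: minimize sum_s v(s) subject to
#   v(s) >= r(s, a) + gamma * sum_{s'} P(s' | s, a) v(s')  for all s, a,
# rewritten in linprog form as (gamma * P[a] - I) v <= -r[:, a].
A_ub = np.vstack([gamma * P[a] - np.eye(n_states) for a in range(n_actions)])
b_ub = np.concatenate([-r[:, a] for a in range(n_actions)])

res = linprog(c=np.ones(n_states), A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * n_states, method="highs")
v = res.x

# Recover a greedy policy from the optimal value function.
q = np.array([r[:, a] + gamma * P[a] @ v for a in range(n_actions)]).T
policy = q.argmax(axis=1)
print("optimal values:", v)
print("greedy policy:", policy)
```

The LP's optimal solution attains the Bellman optimality equation with equality in the binding constraints, so the greedy policy extracted from `v` is optimal for this fully observed instance.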
- Approximate algorithms
- Bellman equation
- Distributed control
- Linear programming
- Partially observable Markov decision process
ASJC Scopus subject areas
- Control and Systems Engineering