Abstract
This paper investigates a class of optimal control problems associated with Markov processes under local state information. The decision-maker has access only to a subset of the state vector, a situation often encountered in decentralized control problems in multiagent systems. Under this information structure, part of the state vector cannot be observed. We leverage ab initio principles and derive a new form of Bellman equations to characterize the optimal policies of the control problem under local information structures. The dynamic programming solutions feature a mixture of the dynamics associated with the unobservable state components and a local state-feedback policy based on the observable local information. We further characterize the optimal local-state feedback policy using linear programming methods. To reduce the computational complexity of the optimal policy, we propose an approximate algorithm based on virtual beliefs to find a suboptimal policy. We establish performance bounds on the suboptimal solution and corroborate the results with numerical case studies.
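The abstract refers to a linear-programming characterization of the optimal policy. As background only, the sketch below solves a standard fully observed discounted MDP with the classical primal LP (minimize a weighted sum of state values subject to the Bellman inequalities); the toy sizes, data, and variable names are hypothetical, and this is not the paper's local-information formulation.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical toy MDP: nS states, nA actions, discount factor gamma.
nS, nA, gamma = 4, 2, 0.9
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(nS), size=(nS, nA))  # P[s, a, :] = next-state distribution
R = rng.uniform(0.0, 1.0, size=(nS, nA))       # R[s, a]    = one-step reward

# Primal LP: minimize sum_s mu(s) V(s)
# subject to V(s) >= R(s, a) + gamma * sum_{s'} P(s'|s, a) V(s') for all (s, a),
# rewritten as  gamma * (P V)(s, a) - V(s) <= -R(s, a).
mu = np.full(nS, 1.0 / nS)  # initial-state weights
A_ub = np.array([gamma * P[s, a] - np.eye(nS)[s]
                 for s in range(nS) for a in range(nA)])
b_ub = np.array([-R[s, a] for s in range(nS) for a in range(nA)])
res = linprog(c=mu, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * nS)
V = res.x  # optimal value function V*

# Recover a greedy (optimal) stationary policy from V*.
Q = R + gamma * np.einsum("sat,t->sa", P, V)
policy = Q.argmax(axis=1)
print("V* =", np.round(V, 3), "policy =", policy)
```

In the paper's setting, policies may feed back only the observable local component of the state, so the fully observed program above serves merely as a baseline for comparison.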
Original language | English (US) |
---|---|
Pages (from-to) | 6881-6886 |
Number of pages | 6 |
Journal | IFAC-PapersOnLine |
Volume | 53 |
Issue number | 2 |
State | Published - 2020 |
Event | 21st IFAC World Congress 2020, Berlin, Germany (Jul 12–17, 2020) |
Keywords
- Approximate algorithms
- Bellman equation
- Distributed control
- Linear programming
- Partially observable Markov decision process
ASJC Scopus subject areas
- Control and Systems Engineering