The Complexity of Markov Decision Processes
-
- Christos H. Papadimitriou
- Departments of Computer Science and Operations Research, Stanford University, Stanford, California 94305
-
- John N. Tsitsiklis
- Department of Electrical Engineering, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139
Description
We investigate the complexity of the classical problem of optimal policy computation in Markov decision processes. All three variants of the problem (finite horizon, infinite horizon discounted, and infinite horizon average cost) were known to be solvable in polynomial time by dynamic programming (finite horizon problems), linear programming, or successive approximation techniques (infinite horizon). We show that they are complete for P, and therefore most likely cannot be solved by highly parallel algorithms. We also show that, in contrast, the deterministic cases of all three problems can be solved very fast in parallel. The version with partially observed states is shown to be PSPACE-complete, and thus even less likely to be solved in polynomial time than the NP-complete problems; in fact, we show that, most likely, it is not possible to have an efficient on-line implementation (involving polynomial time on-line computations and memory) of an optimal policy, even if an arbitrary amount of precomputation is allowed. Finally, the variant of the problem in which there are no observations is shown to be NP-complete.
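The successive approximation technique cited for the infinite-horizon discounted case is value iteration: repeatedly apply the Bellman operator until the value function converges. A minimal sketch follows; the two-state, two-action MDP instance here is hypothetical and purely illustrative (the abstract specifies no example), with `P`, `R`, and `gamma` as assumed names.

```python
import numpy as np

# Hypothetical toy MDP, not from the paper:
# P[a][s][s'] = transition probability under action a,
# R[a][s]     = expected immediate reward for action a in state s.
P = np.array([
    [[0.9, 0.1], [0.4, 0.6]],   # action 0
    [[0.2, 0.8], [0.5, 0.5]],   # action 1
])
R = np.array([
    [1.0, 0.0],   # action 0
    [0.5, 2.0],   # action 1
])
gamma = 0.9  # discount factor

def value_iteration(P, R, gamma, tol=1e-8):
    """Successive approximation: iterate the Bellman optimality operator."""
    V = np.zeros(P.shape[1])
    while True:
        # Q[a][s] = immediate reward + discounted expected future value
        Q = R + gamma * (P @ V)
        V_new = Q.max(axis=0)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=0)   # optimal values, greedy policy
        V = V_new

V_star, policy = value_iteration(P, R, gamma)
```

Because the operator is a contraction with modulus `gamma`, the loop converges geometrically; for a fixed discount factor this yields the polynomial-time bound the abstract refers to, though the computation is inherently sequential (P-complete), which is the paper's point.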
Journal
-
- Mathematics of Operations Research, 12 (3), 441–450, August 1987
- Institute for Operations Research and the Management Sciences (INFORMS)
Details
-
- CRID
- 1360574095180323840
-
- NII Article ID
- 30041447125
-
- NII Book ID
- AA00295770
-
- ISSN
- 1526-5471
- 0364-765X
-
- Data Source
-
- Crossref
- CiNii Articles