Robust Finite-State Controllers for Uncertain POMDPs

Cited by: 0
Authors
Cubuktepe, Murat [1]
Jansen, Nils [2]
Junges, Sebastian [3]
Marandi, Ahmadreza [4]
Suilen, Marnix [2]
Topcu, Ufuk [1]
Affiliations
[1] Univ Texas Austin, Dept Aerosp Engn & Engn Mech, Austin, TX 78712 USA
[2] Radboud Univ Nijmegen, Dept Software Sci, Nijmegen, Netherlands
[3] Univ Calif Berkeley, Dept Elect Engn & Comp Sci, Berkeley, CA 94720 USA
[4] Eindhoven Univ Technol, Dept Ind Engn & Innovat Sci, Eindhoven, Netherlands
Source
THIRTY-FIFTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THIRTY-THIRD CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE AND THE ELEVENTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE | 2021, Vol. 35
Keywords
Markov decision processes
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
Uncertain partially observable Markov decision processes (uPOMDPs) allow the probabilistic transition and observation functions of standard POMDPs to belong to a so-called uncertainty set. Such uncertainty, referred to as epistemic uncertainty, captures uncountable sets of probability distributions caused by, for instance, a lack of available data. We develop an algorithm to compute finite-memory policies for uPOMDPs that robustly satisfy specifications against any admissible distribution. In general, computing such policies is theoretically and practically intractable. We provide an efficient solution to this problem in four steps. (1) We state the underlying problem as a nonconvex optimization problem with infinitely many constraints. (2) A dedicated dualization scheme yields a dual problem that is still nonconvex but has finitely many constraints. (3) We linearize this dual problem and (4) solve the resulting finite linear program to obtain locally optimal solutions to the original problem. The resulting problem formulation is exponentially smaller than those resulting from existing methods. We demonstrate the applicability of our algorithm using large instances of an aircraft collision-avoidance scenario and a novel spacecraft motion planning case study.
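The dualization in step (2) of the abstract can be sketched on a single robust constraint. The Python snippet below is an illustration, not the authors' implementation: it assumes an interval uncertainty set over the successor probabilities, and the bounds l, u and the value vector v are made-up numbers. It uses scipy.optimize.linprog to show that the worst case over the infinitely many admissible distributions coincides with the optimum of a finite dual LP, which is the mechanism that turns a semi-infinite robust constraint into finitely many dual constraints.

```python
# Illustrative sketch of the dualization step for one robust constraint
# (not the paper's algorithm). Uncertainty set: all probability vectors p
# with l <= p <= u and sum(p) = 1; l, u, v below are assumed numbers.
import numpy as np
from scipy.optimize import linprog

l = np.array([0.1, 0.2, 0.1])   # assumed lower bounds on transition probabilities
u = np.array([0.6, 0.7, 0.5])   # assumed upper bounds
v = np.array([3.0, 1.0, 2.0])   # assumed values of the successor states

# Primal view: the worst case of the semi-infinite constraint "v^T p <= c for all
# admissible p" is  max_p v^T p  s.t.  sum(p) = 1,  l <= p <= u.
primal = linprog(c=-v,                                  # linprog minimizes, so negate
                 A_eq=np.ones((1, len(v))), b_eq=[1.0],
                 bounds=list(zip(l, u)))
worst_case_primal = -primal.fun

# Dual view (finitely many variables and constraints):
#   min  lam + u^T mu - l^T nu
#   s.t. lam + mu_i - nu_i = v_i,   mu, nu >= 0,  lam free.
n = len(v)
c_dual = np.concatenate(([1.0], u, -l))                 # variables [lam, mu, nu]
A_eq_dual = np.hstack([np.ones((n, 1)), np.eye(n), -np.eye(n)])
bounds_dual = [(None, None)] + [(0.0, None)] * (2 * n)
dual = linprog(c=c_dual, A_eq=A_eq_dual, b_eq=v, bounds=bounds_dual)
worst_case_dual = dual.fun

# Strong LP duality: both values agree, so "worst_case_dual <= c" certifies the
# original constraint for every distribution in the uncertainty set at once.
print(worst_case_primal, worst_case_dual)
assert abs(worst_case_primal - worst_case_dual) < 1e-6
```

For the numbers above, both solves return 2.4. Replacing the semi-infinite constraint by the dual objective and its equality constraints is what keeps the problem in step (2) finite, before the linearization in step (3) and the LP solve in step (4).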
Pages: 11792-11800
Number of pages: 9