Robust Satisficing Decision Making for Unmanned Aerial Vehicle Complex Missions under Severe Uncertainty

Cited by: 8
Authors
Ji, Xiaoting [1 ]
Niu, Yifeng [1 ]
Shen, Lincheng [1 ]
Affiliations
[1] Natl Univ Def Technol, Coll Mechatron & Automat, Changsha, Hunan, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
UAV;
DOI
10.1371/journal.pone.0166448
Chinese Library Classification
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences];
Discipline Codes
07; 0710; 09;
Abstract
This paper presents a robust satisficing decision-making method for Unmanned Aerial Vehicles (UAVs) executing complex missions in an uncertain environment. Motivated by info-gap decision theory, we formulate this problem as a novel robust satisficing optimization problem, whose objective is to maximize robustness while satisfying the desired mission requirements. Specifically, a new info-gap based Markov Decision Process (IMDP) is constructed to abstract the uncertain UAV system, and the complex mission requirements are specified with Linear Temporal Logic (LTL). A robust satisficing policy is obtained that maximizes robustness to the uncertain IMDP while ensuring a desired probability of satisfying the LTL specifications. To this end, we propose a two-stage robust satisficing solution strategy consisting of the construction of a product IMDP and the generation of a robust satisficing policy. In the first stage, a product IMDP is constructed by combining the IMDP with an automaton representing the LTL specifications. In the second stage, an algorithm based on robust dynamic programming is proposed to generate a robust satisficing policy, and an associated robustness evaluation algorithm is presented to evaluate the resulting robustness. Finally, through Monte Carlo simulation, the effectiveness of our algorithms is demonstrated on a UAV search mission under severe uncertainty: the resulting policy maximizes robustness while reaching the desired performance level. Furthermore, a comparison with other robust decision-making methods shows that our policy tolerates higher uncertainty while still guaranteeing the desired performance level, indicating that the proposed method is more effective in real applications.
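The robust dynamic programming step described in the abstract can be illustrated with a minimal toy sketch. This is not the paper's product-IMDP algorithm: the two-state reachability objective, the L1-ball model of the info-gap uncertainty set, and all names below are assumptions made purely for illustration. The inner loop computes the worst-case expected value over all transition distributions within radius `alpha` of a nominal model, which is the core operation of robust value iteration.

```python
# Illustrative sketch (assumed example, not the paper's algorithm):
# robust value iteration on an MDP whose transition row is only known
# up to an info-gap radius alpha (L1 ball around a nominal distribution).

def worst_case_expectation(p, v, alpha):
    """Minimize q . v over distributions q with ||q - p||_1 <= alpha."""
    q = list(p)
    budget = alpha / 2.0  # at most alpha/2 of probability mass can move
    worst = min(range(len(v)), key=lambda i: v[i])
    # shift mass from the highest-value successors to the worst one
    for i in sorted(range(len(v)), key=lambda i: -v[i]):
        if i == worst or budget <= 0:
            continue
        take = min(q[i], budget)
        q[i] -= take
        q[worst] += take
        budget -= take
    return sum(qi * vi for qi, vi in zip(q, v))

# Toy reachability problem: from state s0 the UAV nominally loiters with
# prob 0.05, reaches the goal s1 with prob 0.90, and crashes (s2) with
# prob 0.05. Goal and crash states are absorbing.
P0 = [0.05, 0.90, 0.05]   # nominal transition row for s0
alpha = 0.2                # info-gap uncertainty radius

V = [0.0, 1.0, 0.0]        # value = probability of eventually reaching s1
for _ in range(200):       # fixed-point iteration on V(s0)
    V[0] = worst_case_expectation(P0, V, alpha)

print(round(V[0], 4))      # worst-case probability of reaching the goal
```

Under the nominal model the goal is reached with probability ~0.947; the adversary inside the info-gap ball shifts 0.1 of mass from the goal to the crash state, and the robust value drops to 0.8/0.95 ≈ 0.8421, which is the guaranteed performance level any satisficing threshold must be checked against.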
Pages: 35