Deceptive Path Planning via Count-Based Reinforcement Learning under Specific Time Constraint

Cited by: 3
Authors
Chen, Dejun [1 ]
Zeng, Yunxiu [1 ]
Zhang, Yi [1 ]
Li, Shuilin [1 ]
Xu, Kai [1 ]
Yin, Quanjun [1 ]
Affiliations
[1] Natl Univ Def Technol, Coll Syst Engn, Changsha 410073, Peoples R China
Keywords
deception; deceptiveness; path planning; goal recognition; count-based reinforcement learning
DOI
10.3390/math12131979
CLC Number
O1 [Mathematics];
Discipline Codes
0701; 070101;
Abstract
Deceptive path planning (DPP) aims to find a path that minimizes the probability of an observer identifying the observed agent's real goal before the agent reaches it. It is important for addressing issues such as public safety, strategic path planning, and logistics route privacy protection. Existing methods often rely on "dissimulation" (hiding the truth) to obscure paths while ignoring time constraints. Building on the theory of probabilistic goal recognition based on cost differences, we propose DPP_Q, a DPP method based on count-based Q-learning for solving DPP problems in discrete path-planning domains under specific time constraints. To extend this method to continuous domains, we propose a new probabilistic goal recognition model, the Approximate Goal Recognition Model (AGRM), and verify its feasibility in discrete path-planning domains. Finally, we propose DPP_PPO, a DPP method based on proximal policy optimization for continuous path-planning domains under specific time constraints. DPP methods of this kind have not previously been explored in the path-planning literature. Experimental results show that, in discrete domains, DPP_Q improves the average deceptiveness of paths by 12.53% over traditional methods. In continuous domains, DPP_PPO shows significant advantages over random-walk baselines. Both DPP_Q and DPP_PPO demonstrate good applicability in path-planning domains with uncomplicated obstacles.
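The count-based reinforcement learning idea underlying DPP_Q can be sketched in isolation. The following is a minimal, hypothetical illustration and not the paper's actual implementation: tabular Q-learning on a small grid whose extrinsic reward is augmented with a visitation bonus beta / sqrt(N(s')), where N(s') counts visits to the next state. All function names, parameter values, and the reward shaping below are illustrative assumptions; the paper's deceptiveness objective and time constraint are omitted.

```python
import random

def count_based_q_learning(grid_w, grid_h, start, goal, episodes=500,
                           alpha=0.5, gamma=0.95, beta=0.1, max_steps=50):
    """Tabular Q-learning with a count-based exploration bonus beta/sqrt(N(s'))."""
    actions = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # up, down, right, left
    Q = {}       # Q[(state, action_index)] -> estimated action value
    counts = {}  # N(s): visit counts per state
    rng = random.Random(0)

    def step(s, a):
        nx, ny = s[0] + a[0], s[1] + a[1]
        if 0 <= nx < grid_w and 0 <= ny < grid_h:
            return (nx, ny)
        return s  # bumping into the boundary leaves the agent in place

    for _ in range(episodes):
        s = start
        for _ in range(max_steps):
            # epsilon-greedy action selection
            if rng.random() < 0.1:
                ai = rng.randrange(len(actions))
            else:
                ai = max(range(len(actions)), key=lambda i: Q.get((s, i), 0.0))
            s2 = step(s, actions[ai])
            counts[s2] = counts.get(s2, 0) + 1
            # extrinsic reward: +1 at the goal, small step cost otherwise
            r = 1.0 if s2 == goal else -0.01
            # count-based intrinsic bonus: rarely visited states pay more
            r += beta / counts[s2] ** 0.5
            best_next = max(Q.get((s2, i), 0.0) for i in range(len(actions)))
            q = Q.get((s, ai), 0.0)
            Q[(s, ai)] = q + alpha * (r + gamma * best_next - q)
            s = s2
            if s == goal:
                break
    return Q

def greedy_path(Q, start, goal, grid_w, grid_h, max_steps=50):
    """Roll out the greedy policy induced by Q from start toward goal."""
    actions = [(0, 1), (0, -1), (1, 0), (-1, 0)]
    path, s = [start], start
    for _ in range(max_steps):
        ai = max(range(len(actions)), key=lambda i: Q.get((s, i), 0.0))
        nx, ny = s[0] + actions[ai][0], s[1] + actions[ai][1]
        s = (nx, ny) if 0 <= nx < grid_w and 0 <= ny < grid_h else s
        path.append(s)
        if s == goal:
            break
    return path
```

In DPP_Q-style training one would additionally shape the reward with a deceptiveness term derived from the goal recognition model, so that the count bonus drives exploration of paths that keep the observer's posterior over goals ambiguous for as long as possible.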
Pages: 20