Adversarial retraining attack of asynchronous advantage actor-critic based pathfinding

Cited by: 2
Authors
Chen Tong [1 ]
Liu Jiqiang [1 ]
Xiang Yingxiao [1 ]
Niu Wenjia [1 ]
Tong Endong [1 ]
Wang Shuoru [1 ]
Li He [1 ]
Chang Liang [2 ]
Li Gang [3 ]
Chen Qi Alfred [4 ]
Affiliations
[1] Beijing Jiaotong Univ, Beijing Key Lab Secur & Privacy Intelligent Trans, 3 Shangyuan Village, Beijing 100044, Peoples R China
[2] Guilin Univ Elect Technol, Guangxi Key Lab Trusted Software, Guilin, Peoples R China
[3] Deakin Univ, Sch Informat Technol, Geelong, Vic, Australia
[4] Univ Calif Irvine, Donald Bren Sch Informat & Comp Sci ICS, Irvine, CA USA
Funding
National Natural Science Foundation of China; National Key Research and Development Program of China;
Keywords
A3C; evasion attack; pathfinding; reinforcement learning; retraining attack;
DOI
10.1002/int.22380
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Pathfinding has become an important component of many real-world scenarios, such as popular warehouse systems and autonomous aircraft towing vehicles. With the development of reinforcement learning (RL), especially asynchronous advantage actor-critic (A3C), pathfinding is undergoing a revolution in terms of efficient parallel learning. Like other artificial intelligence-based applications, A3C-based pathfinding is also threatened by adversarial attacks. In this paper, we are the first to study an adversarial attack on A3C that can unexpectedly trigger a long retraining process before pathfinding succeeds again. We also devise a gradient-band-based method for generating attack examples, in which a single baffle spanning only a few unit lengths is enough to launch the attack. Experiments with detailed analysis show a high attack success rate of 95% with an average baffle length of 2.95. We also discuss defense suggestions drawn from the insights of our analysis.
Pages: 2323-2346
Number of pages: 24
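
For readers unfamiliar with the training algorithm the abstract refers to, the sketch below illustrates the standard A3C objective (a policy-gradient term weighted by the advantage, a value-regression term, and an entropy bonus) for a gridworld-style pathfinding policy. It is a minimal illustration in PyTorch assuming a hypothetical flattened grid observation and four movement actions; the network shape, coefficients, and all names are illustrative assumptions, not the authors' implementation and not the paper's attack method.

    # Minimal sketch of the A3C loss terms for a gridworld pathfinding policy.
    # Assumptions (not from the paper): flattened grid observation, 4 actions,
    # a small shared-trunk actor-critic network, and default loss coefficients.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ActorCritic(nn.Module):
        """Tiny shared-trunk actor-critic: one hidden layer feeding a policy head and a value head."""
        def __init__(self, obs_dim: int, n_actions: int, hidden: int = 128):
            super().__init__()
            self.trunk = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
            self.policy_head = nn.Linear(hidden, n_actions)  # action logits
            self.value_head = nn.Linear(hidden, 1)           # state value V(s)

        def forward(self, obs):
            h = self.trunk(obs)
            return self.policy_head(h), self.value_head(h).squeeze(-1)

    def a3c_loss(logits, values, actions, returns, value_coef=0.5, entropy_coef=0.01):
        """A3C objective: advantage-weighted policy gradient + value regression - entropy bonus."""
        advantages = returns - values.detach()                      # baseline-subtracted return
        log_probs = F.log_softmax(logits, dim=-1)
        chosen = log_probs.gather(1, actions.unsqueeze(1)).squeeze(1)
        policy_loss = -(chosen * advantages).mean()
        value_loss = F.mse_loss(values, returns)
        entropy = -(log_probs.exp() * log_probs).sum(dim=-1).mean()
        return policy_loss + value_coef * value_loss - entropy_coef * entropy

    if __name__ == "__main__":
        net = ActorCritic(obs_dim=25, n_actions=4)   # e.g., a flattened 5x5 grid, 4 moves
        obs = torch.rand(8, 25)                      # batch of 8 observations
        actions = torch.randint(0, 4, (8,))          # sampled actions
        returns = torch.rand(8)                      # bootstrapped n-step returns
        logits, values = net(obs)
        loss = a3c_loss(logits, values, actions, returns)
        loss.backward()                              # in A3C, gradients are pushed to a shared model

In full A3C, multiple workers run this update asynchronously against a shared parameter set; the paper's attack targets the retraining that such a policy must undergo once the map is adversarially perturbed.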