Clustering-based attack detection for adversarial reinforcement learning

Cited by: 1
Authors
Majadas, Ruben [1 ]
Garcia, Javier [2 ]
Fernandez, Fernando [1 ]
Affiliations
[1] Univ Carlos III Madrid, Dept Informat, Ave Univ 30, Madrid 28911, Spain
[2] Univ Santiago De Compostela, Rua Lope Gomez De Marzoa S-N, Santiago De Compostela 15782, Spain
Keywords
Adversarial reinforcement learning; Adversarial attacks; Change-point detection; Clustering applications
DOI
10.1007/s10489-024-05275-7
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Detecting malicious attacks presents a major challenge in the field of reinforcement learning (RL), as such attacks can force the victim to perform abnormal actions, with potentially severe consequences. To mitigate these risks, current research focuses on enhancing RL algorithms with efficient detection mechanisms, especially for real-world applications. Adversarial attacks can alter the environmental dynamics of the Markov Decision Process (MDP) perceived by an RL agent. Leveraging these changes in dynamics, we propose a novel approach to detect attacks. Our contribution can be summarized in two main aspects. First, we propose a novel formalization of the attack detection problem that entails analyzing the modifications that attacks make to the transition and reward dynamics of the environment. This problem can be framed as a context change detection problem, where the goal is to identify the transition from a "free-of-attack" situation to an "under-attack" scenario. To solve this problem, we propose a "model-free" clustering-based countermeasure. This approach consists of two essential steps: first, partitioning the transition space into clusters, and then using this partitioning to identify changes in environmental dynamics caused by adversarial attacks. To assess the efficiency of our detection method, we performed experiments on four established RL domains (grid-world, mountain car, cartpole, and acrobot) and subjected them to four advanced attack types: Uniform, Strategically-timed, Q-value, and Multi-objective. Our study shows that our technique has high potential for perturbation detection, even in scenarios where attackers employ more sophisticated strategies.
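The two-step idea the abstract describes — cluster transitions observed in a free-of-attack phase, then flag a context change when the dynamics of new transitions no longer fit that partition — can be sketched as follows. This is not the authors' implementation: the k-means clustering, the sliding-window occupancy histogram, and the total-variation threshold are our illustrative assumptions about how such a detector might look.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain k-means over transition features (e.g., concatenated s, a, r, s')."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers

def assign(X, centers):
    """Nearest-centroid cluster index for each transition."""
    return np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)

def occupancy(labels, k):
    """Normalized histogram of cluster visits."""
    h = np.bincount(labels, minlength=k).astype(float)
    return h / h.sum()

def under_attack(baseline_hist, window, centers, threshold=0.3):
    """Flag a context change when the windowed cluster-occupancy
    distribution drifts from the free-of-attack baseline
    (total-variation distance, threshold chosen by hand here)."""
    h = occupancy(assign(window, centers), len(baseline_hist))
    return 0.5 * np.abs(h - baseline_hist).sum() > threshold
```

In use, one would fit `kmeans` on transitions collected while the agent is known to be unperturbed, record the baseline occupancy, and then test each sliding window of recent transitions with `under_attack`; an attack that shifts the transition or reward dynamics redistributes mass across the clusters and trips the threshold.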
Pages: 2631-2647
Page count: 17