Reward-Poisoning Attacks on Offline Multi-Agent Reinforcement Learning

Cited: 0
Authors
Wu, Young [1 ]
McMahan, Jeremy [1 ]
Zhu, Xiaojin [1 ]
Xie, Qiaomin [1 ]
Affiliations
[1] Univ Wisconsin Madison, Madison, WI 53706 USA
Source
THIRTY-SEVENTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 37 NO 9 | 2023
Keywords
DOI
Not available
CLC Classification
TP18 [Theory of Artificial Intelligence]
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In offline multi-agent reinforcement learning (MARL), agents estimate policies from a given dataset. We study reward-poisoning attacks in this setting where an exogenous attacker modifies the rewards in the dataset before the agents see the dataset. The attacker wants to guide each agent into a nefarious target policy while minimizing the Lp norm of the reward modification. Unlike attacks on single-agent RL, we show that the attacker can install the target policy as a Markov Perfect Dominant Strategy Equilibrium (MPDSE), which rational agents are guaranteed to follow. This attack can be significantly cheaper than separate single-agent attacks. We show that the attack works on various MARL agents including uncertainty-aware learners, and we exhibit linear programs to efficiently solve the attack problem. We also study the relationship between the structure of the datasets and the minimal attack cost. Our work paves the way for studying defense in offline MARL.
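The attack described in the abstract — minimally perturbing rewards so that the target action profile becomes a strictly dominant strategy for every agent — can be posed as a linear program. The sketch below is a toy one-stage, two-player version, not the paper's full Markov-game or dataset formulation: the 2x2 reward matrices, the strictness margin `iota`, the target profile, and the choice of the L1 norm as the attack cost are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog

# Toy 2x2 two-player general-sum game (hypothetical rewards).
# R[i][a0, a1] is agent i's reward under the joint action (a0, a1).
R = [np.array([[1.0, 0.0],
               [2.0, 3.0]]),   # agent 0
     np.array([[0.0, 2.0],
               [1.0, 3.0]])]   # agent 1
iota = 0.5        # strictness margin for dominance
target = (0, 0)   # action profile the attacker wants installed

# Decision vector x = [r' (8 poisoned reward entries), t (8 slacks)].
# Minimize sum(t) subject to t >= |r' - r| (linearized L1 cost) and
# each agent's target action strictly dominating its other action.
n = 8
c = np.concatenate([np.zeros(n), np.ones(n)])

A_ub, b_ub = [], []
r_flat = np.concatenate([R[0].ravel(), R[1].ravel()])
# |r' - r| <= t  becomes  r' - t <= r  and  -r' - t <= -r.
for k in range(n):
    row = np.zeros(2 * n); row[k] = 1.0; row[n + k] = -1.0
    A_ub.append(row); b_ub.append(r_flat[k])
    row = np.zeros(2 * n); row[k] = -1.0; row[n + k] = -1.0
    A_ub.append(row); b_ub.append(-r_flat[k])

def idx(i, a0, a1):           # position of r_i'(a0, a1) in x
    return 4 * i + 2 * a0 + a1

# Agent 0: r0'(target, a1) >= r0'(deviation, a1) + iota for every a1.
for a1 in range(2):
    row = np.zeros(2 * n)
    row[idx(0, 1 - target[0], a1)] = 1.0
    row[idx(0, target[0], a1)] = -1.0
    A_ub.append(row); b_ub.append(-iota)
# Agent 1: symmetric, over its own unilateral deviations.
for a0 in range(2):
    row = np.zeros(2 * n)
    row[idx(1, a0, 1 - target[1])] = 1.0
    row[idx(1, a0, target[1])] = -1.0
    A_ub.append(row); b_ub.append(-iota)

bounds = [(None, None)] * n + [(0, None)] * n  # rewards free, slacks >= 0
res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=bounds)
print("attack cost (L1):", res.fun)
```

Because each reward entry appears in exactly one dominance constraint here, the LP decomposes per pair: the minimal cost for a pair is max(0, iota - (r_target - r_deviation)), and the solver recovers the sum of these gaps. The paper's dominant-strategy-equilibrium construction generalizes this idea to Markov games over offline datasets.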
Pages: 10426-10434
Page count: 9