Forming Adversarial Example Attacks Against Deep Neural Networks With Reinforcement Learning

Cited by: 3
Authors
Akers, Matthew [1 ]
Barton, Armon [2 ]
Affiliations
[1] US Second Fleet, Hampton Rd, Norfolk, VA 23455 USA
[2] Naval Postgrad Sch, Dept Comp Sci, Monterey, CA 93943 USA
Keywords
Deep learning; Perturbation methods; Reinforcement learning; Artificial neural networks; GAME; GO;
DOI
10.1109/MC.2023.3324751
Chinese Library Classification (CLC)
TP3 [Computing technology, computer technology];
Discipline Code
0812 ;
Abstract
We propose a novel reinforcement learning-based adversarial example attack, Adversarial Reinforcement Learning Agent, designed to learn imperceptible perturbations that cause misclassification when added to the input of a deep learning classifier.
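The record's abstract describes learning small input perturbations that flip a classifier's prediction. The paper's own method (the reinforcement learning agent) is not reproduced here; as background, a minimal sketch of the classic fast gradient sign method from "Explaining and harnessing adversarial examples" (entry [7] in the reference list below) illustrates the core idea of a bounded adversarial perturbation. The toy linear softmax classifier and all parameter values are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a logit vector."""
    e = np.exp(z - z.max())
    return e / e.sum()

def fgsm_perturb(W, b, x, true_label, eps):
    """One FGSM step: x_adv = clip(x + eps * sign(grad_x loss)).

    For a linear model with logits W @ x + b, the gradient of the
    cross-entropy loss w.r.t. the input is W^T (p - onehot(y)).
    """
    p = softmax(W @ x + b)
    onehot = np.zeros_like(p)
    onehot[true_label] = 1.0
    grad_x = W.T @ (p - onehot)
    # Move each input coordinate by at most eps, staying in [0, 1]
    # (the usual pixel range for image inputs).
    return np.clip(x + eps * np.sign(grad_x), 0.0, 1.0)

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 8))   # toy 3-class classifier on 8-dim inputs
b = np.zeros(3)
x = rng.uniform(size=8)
y = int(np.argmax(softmax(W @ x + b)))  # clean prediction as "true" label

x_adv = fgsm_perturb(W, b, x, y, eps=0.3)
```

The perturbation is bounded in the max norm by `eps`, which is what makes such attacks hard to perceive; the paper's reinforcement learning approach instead learns where and how to perturb, rather than taking a single gradient step.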
Pages: 88-99
Page count: 12
References
18 in total
[1]   Square Attack: A Query-Efficient Black-Box Adversarial Attack via Random Search [J].
Andriushchenko, Maksym ;
Croce, Francesco ;
Flammarion, Nicolas ;
Hein, Matthias .
COMPUTER VISION - ECCV 2020, PT XXIII, 2020, 12368 :484-501
[2]  
Barton Armon, 2018, Defending Neural Networks Against Adversarial Examples
[3]   Towards Evaluating the Robustness of Neural Networks [J].
Carlini, Nicholas ;
Wagner, David .
2017 IEEE SYMPOSIUM ON SECURITY AND PRIVACY (SP), 2017, :39-57
[4]  
Croce F, 2020, PR MACH LEARN RES, V119
[5]  
Goodfellow I, 2016, ADAPT COMPUT MACH LE, P1
[6]  
Goodfellow I, 2016, ADAPT COMPUT MACH LE, P1
[7]  
Goodfellow I., 2015, Explaining and harnessing adversarial examples
[8]  
Krizhevsky A., 2009, Master's thesis
[9]  
Madry A, 2019, Arxiv, DOI arXiv:1706.06083
[10]  
Mnih V., Kavukcuoglu K., 2013, Playing Atari with deep reinforcement learning