DeepRover: A Query-Efficient Blackbox Attack for Deep Neural Networks

Cited by: 5
Authors
Zhang, Fuyuan [1 ]
Hu, Xinwen [2 ]
Ma, Lei [3 ,4 ]
Zhao, Jianjun [1 ]
Affiliations
[1] Kyushu Univ, Fukuoka, Japan
[2] Hunan Normal Univ, Changsha, Peoples R China
[3] Univ Tokyo, Tokyo, Japan
[4] Univ Alberta, Edmonton, AB, Canada
Source
PROCEEDINGS OF THE 31ST ACM JOINT MEETING EUROPEAN SOFTWARE ENGINEERING CONFERENCE AND SYMPOSIUM ON THE FOUNDATIONS OF SOFTWARE ENGINEERING, ESEC/FSE 2023 | 2023
Funding
Natural Sciences and Engineering Research Council of Canada;
Keywords
Adversarial Attacks; Deep Neural Networks; Blackbox Fuzzing;
DOI
10.1145/3611643.3616370
Chinese Library Classification
TP31 [Computer Software];
Discipline Classification Codes
081202 ; 0835 ;
Abstract
Deep neural networks (DNNs) have achieved significant performance breakthroughs over the past decade and have been widely adopted across industrial domains. However, a fundamental problem regarding DNN robustness remains inadequately addressed, which can lead to many quality issues after deployment, e.g., in safety, security, and reliability. An adversarial attack is one of the most commonly investigated techniques for penetrating a DNN: it misleads the DNN's decision by generating minor perturbations of the original inputs. More importantly, adversarial attacks are a crucial way to assess, estimate, and understand the robustness boundary of a DNN. Intuitively, a stronger adversarial attack yields a tighter robustness boundary, allowing us to understand the potential worst-case behavior of a deployed DNN. To push this further, in this paper we propose DeepRover, a fuzzing-based blackbox attack on deep neural networks for image classification. We show that DeepRover is more effective and query-efficient in generating adversarial examples than state-of-the-art blackbox attacks. Moreover, DeepRover can find adversarial examples at a finer-grained level than other approaches.
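To illustrate the general idea of a query-efficient blackbox attack (this is a minimal random-search sketch in the spirit of score-based attacks, not DeepRover's actual fuzzing algorithm; the model, function names, and parameters below are all hypothetical):

```python
import numpy as np

def random_search_attack(predict, x, true_label, eps=0.5, queries=500, seed=0):
    """Illustrative blackbox attack: random search within an L-inf ball.
    `predict` is a blackbox returning class scores; each call is one query."""
    rng = np.random.default_rng(seed)

    def margin(z):
        # Score of the true class minus the best competing class;
        # a negative margin means the input is misclassified.
        s = predict(z)
        return s[true_label] - np.max(np.delete(s, true_label))

    best, best_m = x.copy(), None
    best_m = margin(best)
    for _ in range(queries):
        if best_m < 0:  # adversarial example found
            break
        # Propose a small random perturbation, projected into the eps-ball.
        cand = best + rng.uniform(-eps / 4, eps / 4, size=x.shape)
        cand = np.clip(cand, x - eps, x + eps)
        m = margin(cand)
        if m < best_m:  # greedy: keep the perturbation only if it helps
            best, best_m = cand, m
    return best, best_m

# Toy "blackbox" model: a fixed two-class linear classifier (hypothetical).
W = np.array([[1.0, -0.5], [-0.2, 0.8]])
predict = lambda z: W @ z

x0 = np.array([0.6, 0.5])  # correctly classified as class 0 (positive margin)
adv, m = random_search_attack(predict, x0, true_label=0)
```

The greedy accept-if-better loop only queries the model's outputs (no gradients), which is what makes the attack blackbox; query efficiency then comes from how cleverly the perturbations are proposed, which is where approaches like DeepRover differ from plain random search.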
Pages: 1384-1394
Page count: 11