Transcend Adversarial Examples: Diversified Adversarial Attacks to Test Deep Learning Model

Cited by: 1
Authors
Kong, Wei [1]
Affiliations
[1] Natl Key Lab Sci & Technol Informat Syst Secur, Beijing, Peoples R China
Source
2023 IEEE 41ST INTERNATIONAL CONFERENCE ON COMPUTER DESIGN, ICCD | 2023
Keywords
Adversarial Attack; Diversity; Robustness and Security; Test Deep Learning Model
DOI
10.1109/ICCD58817.2023.00013
CLC Number
TP3 [Computing Technology, Computer Technology]
Discipline Code
0812
Abstract
Existing optimized adversarial attacks rely on searching for a perturbation within an lp-norm ball that maximizes a highly non-convex loss function. Random perturbation initialization and steepest-gradient-direction strategies are efficient techniques for avoiding local optima, but they compromise the capability for diversity exploration. We therefore introduce the Diversity-Driven Adversarial Attack (DAA), which incorporates the Output Diversity Strategy (ODS) and diverse initialization gradient directions into the optimized adversarial attack algorithm, aiming to refine the inherent properties of the resulting adversarial examples (AEs). More specifically, building on ODS, we design a diversity-promoting regularizer that penalizes insignificant distances between initialization gradient directions. Extensive experiments demonstrate that DAA efficiently improves existing coverage criteria without sacrificing attack success rate, which implies that DAA implicitly explores more of the internal logic of the DL model under test.
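The abstract's diversified initialization builds on ODS (Output Diversity Strategy): instead of starting each attack restart from a uniformly random point, the input is pushed along the gradient of a randomly weighted combination of the model's logits, so different restarts begin from directions that differ in output space. A minimal NumPy sketch of this idea follows; the function names, the finite-difference gradient, and the linear test model are illustrative assumptions, and the paper's own diversity-promoting regularizer is not reproduced here.

```python
import numpy as np

def ods_direction(f, x, num_classes, rng, h=1e-5, tiny=1e-12):
    """ODS-style direction: gradient of w^T f(x) for a random logit weighting w,
    estimated by central finite differences and normalized to unit length."""
    w = rng.uniform(-1.0, 1.0, size=num_classes)  # random weighting over logits
    g = np.zeros_like(x, dtype=float)
    for i in range(x.size):
        d = np.zeros_like(x, dtype=float)
        d.flat[i] = h
        g.flat[i] = (w @ f(x + d) - w @ f(x - d)) / (2.0 * h)
    return g / (np.linalg.norm(g) + tiny)

def ods_restart_points(f, x, num_classes, eps_ball, restarts=3, seed=0):
    """Diversified restart points for an l_inf-bounded attack: each restart
    starts from x perturbed along a fresh ODS direction, clipped to the ball."""
    rng = np.random.default_rng(seed)
    starts = []
    for _ in range(restarts):
        d = ods_direction(f, x, num_classes, rng)
        starts.append(np.clip(x + eps_ball * d, x - eps_ball, x + eps_ball))
    return starts
```

Each restart point would then seed a standard iterative attack (e.g. PGD); the diversity comes entirely from the randomized output-space weighting `w`, which varies the initial gradient direction across restarts.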
Pages: 13-20 (8 pages)