Transcend Adversarial Examples: Diversified Adversarial Attacks to Test Deep Learning Model

Cited by: 1
Authors
Kong, Wei [1 ]
Institution
[1] Natl Key Lab Sci & Technol Informat Syst Secur, Beijing, Peoples R China
Source
2023 IEEE 41ST INTERNATIONAL CONFERENCE ON COMPUTER DESIGN, ICCD | 2023
Keywords
Adversarial Attack; Diversity; Robustness and Security; Test Deep Learning Model;
DOI
10.1109/ICCD58817.2023.00013
CLC Classification Number
TP3 [Computing Technology, Computer Technology];
Discipline Code
0812;
Abstract
Existing optimization-based adversarial attacks rely on searching for perturbations within an l_p-norm bound while maximizing a highly non-convex loss function. Random perturbation initialization and the steepest gradient direction strategy are efficient techniques for avoiding local optima, but they compromise the ability to explore diverse solutions. We therefore introduce the Diversity-Driven Adversarial Attack (DAA), which incorporates the Output Diversity Strategy (ODS) and diverse initialization gradient directions into the optimized adversarial attack algorithm, aiming to refine the inherent properties of the resulting adversarial examples (AEs). More specifically, building on ODS, we design a diversity-promoting regularizer that penalizes insignificant distances between initialization gradient directions. Extensive experiments demonstrate that DAA efficiently improves existing coverage criteria without sacrificing attack success rate, which implies that DAA implicitly explores more of the internal logic of the DL model.
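The abstract names two ingredients: ODS-style diversified initialization directions and a regularizer that penalizes near-identical initialization gradient directions across restarts. The following is a minimal PyTorch sketch of how those two pieces might look; the function names, the uniform sampling of the logit weights, and the cosine-similarity form of the penalty are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def ods_init_direction(model, x, num_classes):
    # ODS-style direction: the input gradient of a random linear combination
    # of the logits, normalized to unit L2 norm. Uniform weights in [-1, 1]
    # follow the common ODS recipe; the paper may use a different variant.
    w = torch.empty(x.size(0), num_classes, device=x.device).uniform_(-1.0, 1.0)
    x = x.clone().detach().requires_grad_(True)
    loss = (w * model(x)).sum()
    (grad,) = torch.autograd.grad(loss, x)
    norm = grad.flatten(1).norm(dim=1).view(-1, *([1] * (x.dim() - 1)))
    return grad / (norm + 1e-12)

def diversity_penalty(directions):
    # Hypothetical diversity-promoting regularizer for a single input:
    # `directions` is a list of initialization gradient directions, one per
    # random restart. High pairwise cosine similarity (i.e., insignificant
    # distance between directions) is penalized.
    flat = F.normalize(torch.stack([d.flatten() for d in directions]), dim=1)
    sim = flat @ flat.t()
    n = flat.size(0)
    off_diag = sim - torch.eye(n, device=flat.device)
    return off_diag.clamp(min=0.0).sum() / max(n * (n - 1), 1)
```

In an l_inf PGD-style loop, the first step would move along `ods_init_direction` instead of the raw loss gradient, and `diversity_penalty` would be added to the restart objective to keep the initialization directions spread apart across restarts.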
Pages: 13-20
Page count: 8