Perception-Driven Imperceptible Adversarial Attack Against Decision-Based Black-Box Models

Times Cited: 3
Authors
Zhang, Shenyi [1 ]
Zheng, Baolin [2 ]
Jiang, Peipei [1 ]
Zhao, Lingchen [1 ]
Shen, Chao [3 ]
Wang, Qian [1 ]
Affiliations
[1] Wuhan University, School of Cyber Science and Engineering, Key Laboratory of Aerospace Information Security and Trusted Computing, Ministry of Education, Wuhan 430072, China
[2] Alibaba Group, Beijing 100102, China
[3] Xi'an Jiaotong University, School of Cyber Science and Engineering, Key Laboratory of Intelligent Networks and Network Security, Ministry of Education (MOE), Xi'an 710049, China
Keywords
Perturbation methods; Closed box; Measurement; Optimization; Computational modeling; Glass box; Analytical models; Adversarial example; decision-based attack; imperceptible attack
DOI
10.1109/TIFS.2024.3359441
CLC Number
TP301 [Theory, Methods]
Discipline Classification Code
081202
Abstract
Adversarial examples (AEs) pose significant threats to deep neural networks (DNNs), as they can deceive models into making wrong predictions through carefully crafted perturbations. The emergence of decision-based attacks, which rely solely on the top-1 decision label, further increases the risk to real-world black-box models. Currently, the prevailing practice for generating AEs in the decision-based setting is to penalize adversarial perturbations with the $\ell_p$-norm. However, this approach often overlooks how humans actually perceive adversarial perturbations in real-world scenarios. To tackle this issue, we propose a novel and efficient Imperceptible Decision-based Black-box Attack (IDBA). Our method prioritizes optimizing the perception-related distribution of perturbations rather than solely minimizing the $\ell_p$-norm. Specifically, IDBA analyzes the perceptual preferences of both models and the human visual system, selectively perturbing components that influence model decisions yet remain imperceptible to human eyes. Extensive experiments demonstrate that IDBA outperforms state-of-the-art methods in both invisibility and query efficiency. Notably, IDBA achieves a high Feature SIMilarity (FSIM) score of 0.92 with only 4,800 queries while simultaneously reducing the Learned Perceptual Image Patch Similarity (LPIPS) to 0.12, showcasing its ability to remain imperceptible.
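
The abstract describes the core idea only at a high level, and the paper's actual optimization is not reproduced in this record. The following is a minimal, illustrative sketch of a hard-label (decision-based) attack in that spirit, assuming, as a crude stand-in for IDBA's perceptual analysis, that perturbations are confined to high-frequency DCT coefficients, to which the human visual system is less sensitive than low-frequency changes. The names query_fn, high_freq_mask, and idba_sketch, and all parameter values, are hypothetical.

import numpy as np
from scipy.fft import idctn  # inverse multidimensional DCT (SciPy >= 1.4)

def high_freq_mask(shape, cutoff=0.5):
    # Binary mask over DCT coefficients keeping only high frequencies;
    # a rough proxy for components humans tolerate (assumption, not the
    # paper's actual perceptual model).
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    return (np.sqrt((yy / h) ** 2 + (xx / w) ** 2) > cutoff).astype(float)

def idba_sketch(x, label, query_fn, max_queries=4800, eps=0.5, shrink=0.9, seed=0):
    # x: grayscale image in [0, 1], shape (H, W); label: its top-1 class.
    # query_fn(img) -> int is the hard-label oracle (one query per call).
    rng = np.random.default_rng(seed)
    mask = high_freq_mask(x.shape)
    adv = None
    for _ in range(max_queries):
        coeffs = rng.standard_normal(x.shape) * mask  # noise in the DCT domain
        delta = idctn(coeffs, norm="ortho")           # back to pixel domain
        delta *= eps / (np.abs(delta).max() + 1e-12)  # scale to the budget
        cand = np.clip(x + delta, 0.0, 1.0)
        if query_fn(cand) != label:  # decision flipped: keep the candidate
            adv = cand               # and search for a smaller, subtler one
            eps *= shrink
    return adv  # None if no label flip was found within the query budget

In the paper, the perceptual quality of the result is then scored with FSIM and LPIPS; in a sketch like this, any off-the-shelf implementation (e.g., the lpips Python package) could play that role.
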
Pages: 3164-3177
Number of Pages: 14