Robust Deep Learning Models against Semantic-Preserving Adversarial Attack

Cited by: 1
Authors
Zhao, Yunce [1 ,2 ]
Gao, Dashan [1 ,3 ]
Yao, Yinghua [1 ,2 ]
Zhang, Zeqi [4 ]
Mao, Bifei [4 ]
Yao, Xin [1 ]
Affiliations
[1] SUSTech, Dept CSE, Shenzhen, Peoples R China
[2] Univ Technol Sydney, Sydney, NSW, Australia
[3] HKUST, Hong Kong, Peoples R China
[4] Huawei Technol Co Ltd, Shenzhen, Peoples R China
Source
2023 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, IJCNN | 2023
Funding
National Natural Science Foundation of China;
Keywords
Adversarial Examples; Natural Perturbation; Adversarial Perturbation; Robustness;
DOI
10.1109/IJCNN54540.2023.10191198
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Deep learning models can be fooled both by small ℓ_p-norm adversarial perturbations and by natural perturbations of attributes. Although robustness against each kind of perturbation has been explored separately, effectively addressing robustness against joint perturbations remains a challenge. In this paper, we study the robustness of deep learning models against joint perturbations by proposing a novel attack mechanism named Semantic-Preserving Adversarial (SPA) attack, which can then be used to enhance adversarial training. Specifically, we introduce an attribute manipulator to generate natural, human-comprehensible perturbations and a noise generator to generate diverse adversarial noises. Based on these combined perturbations, we optimize both the attribute value and the diversity variable to generate jointly-perturbed samples. For robust training, we adversarially train the deep learning model against the generated joint perturbations. Empirical results on four benchmarks show that the SPA attack causes a larger performance decline under small ℓ_1 norm-ball constraints than existing approaches. Furthermore, our SPA-enhanced training outperforms existing defense methods against such joint perturbations.
Pages: 8
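The abstract above describes jointly optimizing an attribute value (semantic, human-comprehensible edit) and a diversity variable (adversarial noise) to produce jointly-perturbed samples. The following is a minimal sketch of what one such joint-perturbation attack step could look like, assuming a PyTorch setup; the module names AttributeManipulator and NoiseGenerator, the hyper-parameters, and the optimization details are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a joint (semantic + additive) perturbation attack step,
# loosely following the paper's abstract. All names and hyper-parameters below
# are illustrative assumptions, not the SPA authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AttributeManipulator(nn.Module):
    """Toy stand-in for an attribute editor: applies a learned edit direction
    to the image x, scaled by a continuous attribute value a."""

    def __init__(self, channels=3):
        super().__init__()
        self.edit = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, x, a):
        # a has shape (batch, 1, 1, 1) and controls the strength of the edit
        return torch.clamp(x + a * torch.tanh(self.edit(x)), 0.0, 1.0)


class NoiseGenerator(nn.Module):
    """Toy stand-in for a generator mapping a diversity variable z to an
    image-shaped noise pattern in [-1, 1]."""

    def __init__(self, z_dim=16, channels=3, size=32):
        super().__init__()
        self.fc = nn.Linear(z_dim, channels * size * size)
        self.shape = (channels, size, size)

    def forward(self, z):
        return torch.tanh(self.fc(z)).view(z.size(0), *self.shape)


def spa_attack_step(model, manipulator, generator, x, y,
                    eps=4 / 255, steps=10, lr=0.05, z_dim=16):
    """Jointly optimize the attribute value a and the diversity variable z so
    that the combined semantic edit plus eps-bounded additive noise fools the
    classifier (the classification loss is maximized via its negative)."""
    a = torch.zeros(x.size(0), 1, 1, 1, requires_grad=True)
    z = torch.randn(x.size(0), z_dim, requires_grad=True)
    opt = torch.optim.Adam([a, z], lr=lr)
    for _ in range(steps):
        noise = eps * generator(z)                  # small additive noise
        x_adv = manipulator(x, a) + noise           # semantic edit + noise
        loss = -F.cross_entropy(model(x_adv), y)    # negative CE: ascend on CE
        opt.zero_grad()
        loss.backward()
        opt.step()
    with torch.no_grad():
        return torch.clamp(manipulator(x, a) + eps * generator(z), 0.0, 1.0)
```

Under these assumptions, the SPA-enhanced training mentioned in the abstract would amount to replacing clean minibatches with the outputs of spa_attack_step inside an otherwise standard adversarial-training loop.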