A Comprehensive Understanding of the Impact of Data Augmentation on the Transferability of 3D Adversarial Examples

Cited by: 1
Authors
Qian, Fulan [1 ,2 ]
Zou, Yuanjun [3 ]
Xu, Mengyao [3 ]
Zhang, Xuejun [3 ]
Zhang, Chonghao [3 ]
Xu, Chenchu [1 ]
Chen, Hai [3 ]
Affiliations
[1] Anhui Univ, Artificial Intelligence Inst, Hefei, Peoples R China
[2] Anhui Univ, Informat Mat & Intelligent Sensing Lab Anhui Prov, Hefei, Peoples R China
[3] Anhui Univ, Sch Comp Sci & Technol, Hefei, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Data augmentation; transferability; adversarial examples; point clouds; attacks; robustness
DOI
10.1145/3673232
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology]
Discipline Classification Code
0812
Abstract
3D point cloud classifiers are vulnerable to imperceptible perturbations, which poses a serious threat to the security and reliability of deep learning models in practical applications and makes the robustness evaluation of deep 3D point cloud models increasingly important. Because model parameters are often inaccessible, black-box attacks have become the mainstream means of assessing the adversarial robustness of 3D classification models. The key to improving the transferability of adversarial examples generated by black-box attacks is to make them generalize better across models, and data augmentation has become one of the most popular approaches to this end. In this article, we employ five mainstream attack methods and combine them with six data augmentation strategies, namely point dropping, flipping, rotating, scaling, shearing, and translating, to comprehensively explore how these strategies affect the transferability of adversarial examples. Our research reveals that data augmentation methods generally improve the transferability of adversarial examples and that the effect is stronger when the methods are stacked. The interaction among data augmentation methods, model characteristics, and attack and defense strategies collectively determines the transferability of adversarial examples, so understanding and improving the effectiveness of adversarial examples requires considering these complex interrelationships jointly.
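For concreteness, below is a minimal numpy sketch of the six augmentation strategies the abstract lists, plus a stacking helper in the spirit of the paper's finding that stacked augmentations transfer better. The function names, parameter ranges, and the N x 3 point layout are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (assumed, not the authors' code) of the six point-cloud
# augmentations studied in the paper, applied to an N x 3 numpy array.
import random
import numpy as np

def drop_points(pc, drop_ratio=0.1):
    # Point dropping: randomly discard a fraction of the points.
    keep = np.random.choice(len(pc), int(len(pc) * (1 - drop_ratio)), replace=False)
    return pc[keep]

def flip(pc, axis=0):
    # Flipping: mirror the cloud across one coordinate axis.
    out = pc.copy()
    out[:, axis] *= -1.0
    return out

def rotate_z(pc, max_angle=np.pi):
    # Rotating: random rotation about the z-axis.
    t = np.random.uniform(-max_angle, max_angle)
    c, s = np.cos(t), np.sin(t)
    rot = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    return pc @ rot.T

def scale(pc, lo=0.8, hi=1.2):
    # Scaling: multiply all coordinates by one random factor.
    return pc * np.random.uniform(lo, hi)

def shear(pc, magnitude=0.2):
    # Shearing: shift x coordinates proportionally to y.
    m = np.eye(3)
    m[0, 1] = np.random.uniform(-magnitude, magnitude)
    return pc @ m.T

def translate(pc, magnitude=0.1):
    # Translating: shift the whole cloud by one random offset.
    return pc + np.random.uniform(-magnitude, magnitude, size=(1, 3))

def stacked_augment(pc, k=3):
    # Stacking: compose k randomly chosen augmentations, mirroring the
    # abstract's observation that stacked methods transfer better.
    transforms = [drop_points, flip, rotate_z, scale, shear, translate]
    for t in random.sample(transforms, k):
        pc = t(pc)
    return pc
```

In a transfer attack, augmentations like these would typically be applied to the input before each gradient step on the surrogate model, so the resulting perturbation is optimized against many transformed views of the point cloud rather than one fixed input.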
Pages: 41