A Comprehensive Understanding of the Impact of Data Augmentation on the Transferability of 3D Adversarial Examples

Cited by: 0
Authors
Qian, Fulan [1 ,2 ]
Zou, Yuanjun [3 ]
Xu, Mengyao [3 ]
Zhang, Xuejun [3 ]
Zhang, Chonghao [3 ]
Xu, Chenchu [1 ]
Chen, Hai [3 ]
Affiliations
[1] Anhui Univ, Artificial Intelligence Inst, Hefei, Peoples R China
[2] Anhui Univ, Informat Mat & Intelligent Sensing Lab Anhui Prov, Hefei, Peoples R China
[3] Anhui Univ, Sch Comp Sci & Technol, Hefei, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Data augmentation; transferability; adversarial examples; point clouds; attacks; robustness
DOI
10.1145/3673232
CLC Number
TP [Automation Technology, Computer Technology]
Subject Classification Code
0812
Abstract
3D point cloud classifiers are vulnerable to imperceptible perturbations, which poses a serious threat to the security and reliability of deep learning models in practical applications and makes the robustness evaluation of deep 3D point cloud models increasingly important. Because model parameters are difficult to obtain, black-box attacks have become a mainstream means of assessing the adversarial robustness of 3D classification models. Improving the transferability of adversarial examples generated by black-box attacks hinges on producing examples that generalize better across models, and data augmentation has become one of the most popular approaches to this end. In this article, we combine five mainstream attack methods with six data augmentation strategies, namely point dropping, flipping, rotating, scaling, shearing, and translating, to comprehensively explore the impact of these strategies on the transferability of adversarial examples. Our study reveals that data augmentation methods generally improve the transferability of adversarial examples, and that stacking multiple methods yields further gains. The interactions among data augmentation methods, model characteristics, and attack and defense strategies collectively determine the transferability of adversarial examples; fully understanding and improving their effectiveness therefore requires taking these complex interrelationships into account.
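
To make the methodology concrete, the sketch below shows how the six augmentation strategies named in the abstract might be applied to a point cloud stored as an (N, 3) NumPy array. This is a minimal illustrative example under stated assumptions: the function names, parameter ranges, and stacking order are placeholders, not the authors' implementation.

import numpy as np

# Illustrative sketch (not the paper's code) of the six augmentation
# strategies named in the abstract, applied to an (N, 3) point cloud.

def drop_points(pc, drop_ratio=0.1, seed=None):
    # Randomly drop a fraction of the points.
    rng = np.random.default_rng(seed)
    keep = rng.random(pc.shape[0]) >= drop_ratio
    return pc[keep]

def flip(pc, axis=0):
    # Mirror the cloud across one coordinate axis.
    out = pc.copy()
    out[:, axis] *= -1.0
    return out

def rotate_z(pc, angle):
    # Rotate about the z-axis by `angle` radians.
    c, s = np.cos(angle), np.sin(angle)
    R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    return pc @ R.T

def scale(pc, factor):
    # Uniformly scale all coordinates.
    return pc * factor

def shear(pc, k=0.2):
    # Shear x as a linear function of y.
    S = np.array([[1.0, k, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
    return pc @ S.T

def translate(pc, offset):
    # Shift the whole cloud by a constant offset.
    return pc + np.asarray(offset)

# Stacking several augmentations, which the abstract reports transfers best;
# in a transfer attack these transforms would be applied to candidate
# adversarial examples during attack optimization.
pc = np.random.default_rng(0).standard_normal((1024, 3)).astype(np.float32)
aug = drop_points(pc, drop_ratio=0.1, seed=1)
aug = rotate_z(flip(aug), np.pi / 6)
aug = translate(scale(shear(aug), 0.9), [0.05, 0.0, -0.05])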
Pages: 41