Imperceptible Transfer Attack and Defense on 3D Point Cloud Classification

Cited by: 17
Authors
Liu, Daizong [1 ]
Hu, Wei [1 ]
Affiliations
[1] Peking Univ, Wangxuan Inst Comp Technol, Beijing 100871, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Point cloud compression; Three-dimensional displays; Solid modeling; Perturbation methods; Data models; Distortion; Atmospheric modeling; 3D point cloud attack; imperceptibility; transferability; adversarial examples; defense on adversarial attacks;
DOI
10.1109/TPAMI.2022.3193449
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Although many efforts have been made on attack and defense in the 2D image domain in recent years, few methods explore the vulnerability of 3D models. Existing 3D attackers generally perform point-wise perturbation over point clouds, resulting in deformed structures or outliers that are easily perceived by humans. Moreover, their adversarial examples are generated under the white-box setting, which frequently suffers from low success rates when transferred to attack remote black-box models. In this article, we study 3D point cloud attacks from two new and challenging perspectives by proposing a novel Imperceptible Transfer Attack (ITA): 1) Imperceptibility: we constrain the perturbation direction of each point along the normal vector of its neighborhood surface, so that generated examples preserve similar geometric properties and are thus harder to perceive. 2) Transferability: we develop an adversarial transformation model to generate the most harmful distortions and enforce the adversarial examples to resist them, improving their transferability to unknown black-box models. Further, we propose to train more robust black-box 3D models to defend against such ITA attacks by learning more discriminative point cloud representations. Extensive evaluations demonstrate that our ITA attack is more imperceptible and transferable than state-of-the-art methods and validate the superiority of our defense strategy.
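The imperceptibility idea in the abstract, restricting each point's perturbation to the normal direction of its local surface, can be illustrated with a minimal NumPy sketch. This is not the paper's implementation; the neighborhood size `k`, the PCA-based normal estimation, and the function names are illustrative assumptions.

```python
import numpy as np

def estimate_normals(points, k=10):
    """Approximate per-point surface normals via local PCA.

    For each point, the normal is taken as the eigenvector of its
    k-nearest-neighbor covariance matrix with the smallest eigenvalue
    (the direction of least local variation).
    """
    n = points.shape[0]
    normals = np.zeros_like(points)
    # Pairwise squared distances (fine for small clouds; use a KD-tree at scale).
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    for i in range(n):
        idx = np.argsort(d2[i])[:k]          # k nearest neighbors (incl. self)
        nbrs = points[idx] - points[idx].mean(axis=0)
        cov = nbrs.T @ nbrs                  # 3x3 neighborhood covariance
        _, eigvecs = np.linalg.eigh(cov)     # eigenvalues in ascending order
        normals[i] = eigvecs[:, 0]           # smallest-eigenvalue eigenvector
    return normals

def perturb_along_normals(points, magnitudes, k=10):
    """Shift each point only along its estimated normal.

    Constraining the perturbation to the normal direction keeps the
    perturbed cloud close to the underlying surface geometry, avoiding
    the deformed structures and outliers of unconstrained point-wise
    perturbations.
    """
    normals = estimate_normals(points, k)
    return points + magnitudes[:, None] * normals
```

In the actual attack, the per-point magnitudes would be optimized against the victim classifier's loss; here they are simply given, to isolate the geometric constraint itself.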
Pages: 4727-4746
Page count: 20