Camdar-adv: Generating adversarial patches on 3D object

Cited by: 45
Authors
Chen, Chang [1 ,2 ]
Huang, Teng [1 ,2 ]
Affiliations
[1] Guangzhou Univ, Inst Artificial Intelligence & Blockchain, Guangzhou 511363, Peoples R China
[2] Peng Cheng Lab, Shenzhen, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Adversarial example; autonomous driving; geometric transformation;
DOI
10.1002/int.22349
CLC number
TP18 [Theory of Artificial Intelligence];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Deep neural network (DNN) models are the core technology that enables the sensors of an autonomous driving platform to perceive the external environment. Recent research has shown that these models have a certain vulnerability: artificially designed adversarial examples can cause a DNN model to output incorrect results. Such adversarial examples exist not only in the digital world but also in the physical world. At present, research on attacking autonomous driving platforms mainly focuses on a single sensor. In this paper, we introduce Camdar-adv, a method for generating image adversarial examples on three-dimensional (3D) objects, which could potentially launch a multisensor attack on autonomous driving platforms. Specifically, starting from objects that can attack LiDAR sensors, a geometric transformation is used to project their shape onto the two-dimensional (2D) plane. Adversarial perturbations against the optical image sensor can then be added precisely to the surface of the adversarial 3D object without changing its geometry. Test results on the open-source autonomous driving data set KITTI show that Camdar-adv can generate adversarial examples for state-of-the-art object detection models. From a fixed viewpoint, our method achieves an attack success rate of over 99%.
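The key geometric step described in the abstract is projecting the surface of a 3D object onto the 2D image plane so that image-space perturbations can be mapped back onto the object without altering its shape. The sketch below is a minimal, assumed illustration of such a projection with a KITTI-style 3x4 camera projection matrix; the matrix values, the `project_points` helper, and the toy cube are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np

def project_points(points_3d, P):
    """Project Nx3 object points into the image plane using a
    KITTI-style 3x4 projection matrix P (pinhole camera model)."""
    pts_h = np.hstack([points_3d, np.ones((points_3d.shape[0], 1))])  # N x 4 homogeneous
    proj = (P @ pts_h.T).T                                            # N x 3
    valid = proj[:, 2] > 0                    # keep points in front of the camera
    uv = proj[valid, :2] / proj[valid, 2:3]   # perspective divide -> pixel coordinates
    return uv, valid

# Toy example (assumed values): a unit cube placed 10 m ahead of the camera.
P = np.array([[721.5,   0.0, 609.6, 0.0],
              [  0.0, 721.5, 172.9, 0.0],
              [  0.0,   0.0,   1.0, 0.0]])
cube = np.array([[x, y, z + 10.0]
                 for x in (-0.5, 0.5)
                 for y in (-0.5, 0.5)
                 for z in (-0.5, 0.5)])
uv, valid = project_points(cube, P)
print(uv.round(1))  # pixel locations of the visible cube corners
```

In a full pipeline along the lines the abstract describes, only pixels covered by the projected object surface would receive the optical-sensor perturbation, leaving the object's geometry, and hence its effect on the LiDAR sensor, unchanged.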
Pages: 1441-1453
Page count: 13