Curriculum-Guided Adversarial Learning for Enhanced Robustness in 3D Object Detection

Times Cited: 0
Authors
Huang, Jinzhe [1 ]
Xie, Yiyuan [2 ]
Chen, Zhuang [2 ]
Su, Ye [2 ]
Affiliations
[1] Chongqing Normal Univ, Coll Comp & Informat Sci, Chongqing 401331, Peoples R China
[2] Southwest Univ, Coll Elect & Informat Engn, Chongqing 400715, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
3D object detection; adversarial learning; LiDAR; PointPillars;
DOI
10.3390/s25061697
Chinese Library Classification (CLC)
O65 [Analytical Chemistry];
Discipline Classification Code
070302; 081704;
Abstract
Robust 3D object detection has become a critical focus in computer vision. This paper presents a curriculum-guided adversarial learning (CGAL) framework that significantly enhances the adversarial robustness and detection accuracy of the LiDAR-based 3D object detector PointPillars. By combining adversarial learning with prior curriculum expertise, the framework effectively resists adversarial perturbations on 3D point clouds generated by a novel attack method, P-FGSM. By constructing a nonlinear enhancement block (NEB) based on a radial basis function network so that PointPillars can adapt to CGAL, a novel 3D object detector named Pillar-RBFN was developed; it exhibits intrinsic adversarial robustness without undergoing adversarial training. To tackle the class imbalance within the KITTI dataset, a data augmentation technique was designed that singly samples the point cloud with additional ground-truth objects frame by frame (SFGTS), yielding an adversarial version of the original KITTI dataset named Adv-KITTI. Moreover, to further alleviate this issue, an adaptive variant of focal loss was formulated, directing the model's attention to challenging objects during training. Extensive experiments demonstrate that the proposed CGAL achieves an improvement of 0.8 to 2.5 percentage points in mean average precision (mAP) over conventional training methods, and models trained with Adv-KITTI show an enhancement of at least 15 percentage points in mAP, testifying to the effectiveness of our method.
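The abstract names the P-FGSM attack and an adaptive focal loss without giving their formulations, which appear only in the full paper. As a rough illustration of the underlying ideas, the sketch below shows a generic FGSM-style gradient-sign perturbation applied to LiDAR point coordinates and the standard (non-adaptive) focal loss of Lin et al. on which an adaptive variant would presumably build. The `detector` and `loss_fn` interfaces, tensor shapes, and epsilon value are hypothetical placeholders; the paper's actual P-FGSM and adaptive focal loss may differ.

```python
# Hedged sketch only: a generic FGSM-style point-cloud perturbation and the
# standard focal loss. `detector` and `loss_fn` are hypothetical placeholders,
# not the paper's actual API.
import torch


def fgsm_perturb_points(points, targets, detector, loss_fn, epsilon=0.05):
    """One-step gradient-sign perturbation of LiDAR xyz coordinates.

    points  : (N, 3) float tensor of point coordinates (assumed format)
    targets : ground-truth boxes/labels in whatever form `detector` expects
    """
    points = points.clone().detach().requires_grad_(True)
    loss = loss_fn(detector(points), targets)   # detection loss (assumed)
    loss.backward()
    # Move each coordinate a small step along the sign of the loss gradient.
    return (points + epsilon * points.grad.sign()).detach()


def focal_loss(probs, labels, alpha=0.25, gamma=2.0):
    """Standard binary focal loss (Lin et al., 2017); labels are 0/1 floats.

    The paper's adaptive variant presumably modulates alpha/gamma per object,
    which is not reproduced here.
    """
    p_t = probs * labels + (1.0 - probs) * (1.0 - labels)
    alpha_t = alpha * labels + (1.0 - alpha) * (1.0 - labels)
    return (-alpha_t * (1.0 - p_t) ** gamma * torch.log(p_t.clamp(min=1e-8))).mean()
```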
Pages: 31