Image-to-Lidar Relational Distillation for Autonomous Driving Data

Cited: 0
Authors
Mahmoud, Anas [1 ]
Harakeh, Ali [2 ]
Waslander, Steven [1 ]
Affiliations
[1] Univ Toronto, Toronto, ON, Canada
[2] Mila Quebec AI Inst, Montreal, QC, Canada
Source
COMPUTER VISION - ECCV 2024, PT LXII | 2025, Vol. 15120
Keywords
DOI
10.1007/978-3-031-73033-7_26
CLC Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Pre-trained on extensive and diverse multi-modal datasets, 2D foundation models excel at addressing 2D tasks with little or no downstream supervision, owing to their robust representations. The emergence of 2D-to-3D distillation frameworks has extended these capabilities to 3D models. However, distilling 3D representations for autonomous driving datasets presents challenges like self-similarity, class imbalance, and point cloud sparsity, hindering the effectiveness of contrastive distillation, especially in zero-shot learning contexts. Whereas other methodologies, such as similarity-based distillation, enhance zero-shot performance, they tend to yield less discriminative representations, diminishing few-shot performance. We investigate the gap in structure between the 2D and the 3D representations that result from state-of-the-art distillation frameworks and reveal a significant mismatch between the two. Additionally, we demonstrate that the observed structural gap is negatively correlated with the efficacy of the distilled representations on zero-shot and few-shot 3D semantic segmentation. To bridge this gap, we propose a relational distillation framework enforcing intra-modal and cross-modal constraints, resulting in distilled 3D representations that closely capture the structure of the 2D representation. This alignment significantly enhances 3D representation performance over those learned through contrastive distillation in zero-shot segmentation tasks. Furthermore, our relational loss consistently improves the quality of 3D representations in both in-distribution and out-of-distribution few-shot segmentation tasks, outperforming approaches that rely on the similarity loss.
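To make the abstract's "intra-modal and cross-modal constraints" concrete, below is a minimal, hypothetical sketch of a relational distillation loss: the student's 3D-3D pairwise similarity structure and its 3D-2D cross similarities are both pulled toward the teacher's 2D-2D similarity structure. Function names, the MSE penalty, and the weighting scheme are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np


def cosine_sim_matrix(feats):
    # Row-normalize embeddings, then compute the N x N pairwise
    # cosine-similarity matrix that encodes the relational structure.
    normed = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    return normed @ normed.T


def relational_distillation_loss(feats_2d, feats_3d, w_intra=1.0, w_cross=1.0):
    """Hypothetical sketch of a relational loss.

    feats_2d: (N, D) pixel embeddings from the frozen 2D teacher.
    feats_3d: (N, D) point embeddings from the 3D student (paired row-wise).
    """
    sim_2d = cosine_sim_matrix(feats_2d)  # teacher structure (target)
    sim_3d = cosine_sim_matrix(feats_3d)  # student intra-modal structure

    # Intra-modal constraint: 3D-3D similarities match 2D-2D similarities.
    intra = np.mean((sim_3d - sim_2d) ** 2)

    # Cross-modal constraint: 3D-2D similarities match the same target.
    n2d = feats_2d / np.linalg.norm(feats_2d, axis=1, keepdims=True)
    n3d = feats_3d / np.linalg.norm(feats_3d, axis=1, keepdims=True)
    cross = np.mean((n3d @ n2d.T - sim_2d) ** 2)

    return w_intra * intra + w_cross * cross
```

When the student reproduces the teacher's representation exactly, both terms vanish; unlike per-point contrastive distillation, the penalty here depends only on relative similarities, which is what lets the distilled 3D features mirror the 2D structure.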
Pages: 459-475
Page count: 17