Boosting Cross-Domain Point Classification via Distilling Relational Priors From 2D Transformers

Times Cited: 0
Authors
Zou, Longkun [1 ,2 ]
Zhu, Wanru [1 ]
Chen, Ke [2 ]
Guo, Lihua [1 ]
Guo, Kailing [1 ]
Jia, Kui [3 ]
Wang, Yaowei [2 ]
Affiliations
[1] South China Univ Technol, Sch Elect & Informat Engn, Guangzhou 510641, Peoples R China
[2] Pengcheng Lab, Shenzhen 518000, Peoples R China
[3] Chinese Univ Hong Kong, Sch Data Sci, Shenzhen CUHK Shenzhen, Shenzhen 518000, Peoples R China
Keywords
Point cloud compression; Three-dimensional displays; Transformers; Solid modeling; Training; Task analysis; Shape; Unsupervised domain adaptation; point clouds; relational priors; cross-modal; knowledge distillation;
DOI
10.1109/TCSVT.2024.3440517
CLC Classification
TM [Electrical Technology]; TN [Electronic Technology, Communication Technology];
Discipline Codes
0808; 0809
Abstract
The semantic pattern of an object point cloud is determined by the topological configuration of its local geometries. Learning discriminative representations is challenging due to large shape variations of point sets in local regions and incomplete surfaces from a global perspective, and these difficulties become even more severe under unsupervised domain adaptation (UDA). Specifically, traditional 3D networks focus mainly on local geometric details and ignore the topological structure between local geometries, which greatly limits their cross-domain generalization. Recently, transformer-based models have achieved impressive performance gains on a range of image-based tasks, benefiting from the strong generalization capability and scalability that come from capturing long-range correlations across local patches. Inspired by these successes of visual transformers, we propose a novel Relational Priors Distillation (RPD) method that extracts relational priors from transformers well trained on massive image collections, which can significantly empower cross-domain representations with consistent topological priors of objects. To this end, we establish a parameter-frozen pre-trained transformer module shared between the 2D teacher and 3D student models, complemented by an online knowledge distillation strategy that semantically regularizes the 3D student model. Furthermore, we introduce a novel self-supervised task centered on reconstructing masked point cloud patches from the corresponding masked multi-view image features, thereby equipping the model to incorporate 3D geometric information. Experiments on the PointDA-10 and Sim-to-Real datasets verify that the proposed method consistently achieves state-of-the-art UDA performance for point cloud classification. The source code of this work is available at https://github.com/zou-longkun/RPD.git.
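The online knowledge distillation the abstract describes, where a frozen 2D teacher semantically regularizes the 3D student, is typically realized as a temperature-scaled KL objective between the two models' class predictions. Below is a minimal NumPy sketch of such a loss; the function names and the temperature value are illustrative assumptions, not taken from the paper's released code.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, temperature=4.0):
    """KL(teacher || student) over class distributions, averaged over the batch.

    Scaled by T^2 so gradient magnitudes stay comparable across temperatures
    (the standard Hinton-style correction for distillation losses).
    """
    p_teacher = softmax(teacher_logits, temperature)
    log_p_student = np.log(softmax(student_logits, temperature) + 1e-12)
    log_p_teacher = np.log(p_teacher + 1e-12)
    kl = (p_teacher * (log_p_teacher - log_p_student)).sum(axis=-1)
    return float(temperature ** 2 * kl.mean())
```

In this formulation the loss is zero when the 3D student exactly matches the frozen teacher's (softened) distribution and grows as the predictions diverge, which is what lets the teacher act as a semantic regularizer during joint training.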
Pages: 12963-12976
Page count: 14