DAMF-Net: Unsupervised Domain-Adaptive Multimodal Feature Fusion Method for Partial Point Cloud Registration

Cited by: 0
Authors
Zhao, Haixia [1 ]
Sun, Jiaqi [1 ]
Dong, Bin [2 ]
Affiliations
[1] Xi An Jiao Tong Univ, Sch Math & Stat, Xian 710049, Peoples R China
[2] Shaanxi Darerobot Intelligent Technol Co Ltd, Xian 710075, Peoples R China
Funding
National Key Research and Development Program of China;
Keywords
point cloud registration; multimodal feature fusion; domain adaptation; deep learning; industrial scene;
DOI
10.3390/rs16111993
Chinese Library Classification (CLC)
X [Environmental Science, Safety Science];
Subject Classification Code
08 ; 0830 ;
Abstract
Current point cloud registration methods predominantly focus on extracting geometric information from point clouds. In certain scenarios, e.g., when the objects to be registered contain many repetitive planar structures, point-only methods struggle to extract distinctive features from such similar structures, which greatly limits registration accuracy. Deep learning-based approaches achieve commendable results on public datasets, but they struggle to generalize to unseen few-shot datasets with significant domain differences from the training data, a situation that is especially common in industrial applications where samples are generally scarce. Moreover, while existing registration methods can achieve high accuracy on complete point clouds, for partial point cloud registration many of them cannot accurately identify correspondences, making it difficult to estimate precise rigid transformations. This paper introduces an unsupervised domain-adaptive multimodal feature fusion method for partial point cloud registration, named DAMF-Net, which addresses registration in scenes dominated by repetitive planar structures and generalizes networks trained on public datasets to unseen few-shot datasets. Specifically, we first introduce a point-guided two-stage multimodal feature fusion module that uses the geometric information in point clouds to guide the texture information in images for preliminary and supplementary feature fusion. Second, we incorporate a gradient-inverse domain-aware module that constructs a domain classifier in a generative adversarial manner, weakening the feature extractor's ability to distinguish between source- and target-domain samples and thereby achieving generalization across domains. Experiments on a public dataset and our industrial components dataset demonstrate that, compared with state-of-the-art traditional and deep learning-based point cloud registration methods, our method improves registration accuracy in scenes with numerous repetitive planar structures and achieves high accuracy on unseen few-shot datasets.
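The record gives only the abstract, so as a rough, hypothetical illustration of the gradient-reversal mechanism that the "gradient-inverse domain-aware module" invokes (identity in the forward pass, sign-flipped gradients in the backward pass, so that adversarially training a domain classifier drives the feature extractor toward domain-confusing features), the following minimal PyTorch sketch may help. The names GradReverse and DomainClassifier, the feature dimension, and the lambda weight are assumptions for illustration, not details taken from DAMF-Net.

from torch import nn
from torch.autograd import Function

# Hypothetical sketch of a gradient reversal layer (DANN-style);
# DAMF-Net's actual module details are not given in this record.
class GradReverse(Function):
    @staticmethod
    def forward(ctx, x, lam):
        # Identity in the forward pass; remember the scaling factor.
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Flip and scale the gradient flowing back to the feature extractor.
        return -ctx.lam * grad_output, None

class DomainClassifier(nn.Module):
    # Binary source/target classifier fed with reversed-gradient features.
    def __init__(self, feat_dim=256, lam=1.0):
        super().__init__()
        self.lam = lam
        self.net = nn.Sequential(
            nn.Linear(feat_dim, 128),
            nn.ReLU(),
            nn.Linear(128, 2),
        )

    def forward(self, feats):
        return self.net(GradReverse.apply(feats, self.lam))

Minimizing this classifier's cross-entropy over source/target labels trains it to separate the two domains, while the reversed gradient simultaneously pushes the shared feature extractor toward domain-indistinguishable features, which is the cross-domain generalization effect the abstract describes.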
Pages: 24