OMNet: Learning Overlapping Mask for Partial-to-Partial Point Cloud Registration

Cited by: 141
Authors
Xu, Hao [1 ,2 ]
Liu, Shuaicheng [1 ,2 ]
Wang, Guangfu
Liu, Guanghui [1 ]
Zeng, Bing [1 ]
Affiliations
[1] Univ Elect Sci & Technol China, Chengdu, Peoples R China
[2] Megvii Technol, Beijing, Peoples R China
Source
2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021) | 2021
Funding
National Natural Science Foundation of China;
Keywords
DOI
10.1109/ICCV48922.2021.00312
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Point cloud registration is a key task in many computational fields. Previous correspondence-matching-based methods require the inputs to have distinctive geometric structures in order to fit a 3D rigid transformation from point-wise sparse feature matches. However, the accuracy of the transformation relies heavily on the quality of the extracted features, which are prone to errors under partiality and noise. In addition, these methods cannot utilize the geometric knowledge of all the overlapping regions. On the other hand, previous global-feature-based approaches can utilize the entire point cloud for registration; however, they ignore the negative effect of non-overlapping points when aggregating global features. In this paper, we present OMNet, a global-feature-based iterative network for partial-to-partial point cloud registration. We learn overlapping masks to reject non-overlapping regions, converting partial-to-partial registration into registration of the same shape. Moreover, the previously used data are sampled only once from the CAD model of each object, resulting in identical point clouds for the source and reference. We propose a more practical manner of data generation in which a CAD model is sampled twice, once for the source and once for the reference, avoiding the previously prevalent over-fitting issue. Experimental results show that our method achieves state-of-the-art performance compared to traditional and deep-learning-based methods. Code is available at https://github.com/megviiresearch/OMNet.
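The core idea the abstract describes, rejecting non-overlapping points before aggregating a global feature, can be sketched as a masked max-pooling step. This is an illustrative sketch only: the function name and shapes are assumptions for exposition, not taken from the paper's released code.

```python
import numpy as np

def masked_global_feature(point_feats, overlap_mask):
    """Aggregate per-point features into one global descriptor,
    zeroing out points predicted to lie outside the overlap region.

    point_feats:  (N, C) array of per-point features.
    overlap_mask: (N,) array in {0, 1}; 1 = predicted overlapping point.
    Returns a (C,) global feature via max-pooling, in the style of
    PointNet-like encoders. (Hypothetical helper, not the paper's API.)
    """
    masked = point_feats * overlap_mask[:, None]  # suppress non-overlap points
    return masked.max(axis=0)                      # channel-wise max-pool
```

With this formulation, a point whose mask is 0 contributes nothing to the pooled descriptor, so the global features of the source and reference are computed only over their (predicted) shared region, which is what converts the partial-to-partial problem into same-shape registration.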
Pages: 3112-3121
Page count: 10