PCR-CG: Point Cloud Registration via Deep Explicit Color and Geometry

Cited by: 15
Authors
Zhang, Yu [1 ]
Yu, Junle [2 ]
Huang, Xiaolin [1 ]
Zhou, Wenhui [2 ]
Hou, Ji [3 ]
Affiliations
[1] Shanghai Jiao Tong Univ, Shanghai, Peoples R China
[2] Hangzhou Dianzi Univ, Hangzhou, Peoples R China
[3] TUM, Munich, Germany
Source
COMPUTER VISION, ECCV 2022, PT X | 2022 / Vol. 13670
Keywords
DOI
10.1007/978-3-031-20080-9_26
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In this paper, we introduce PCR-CG, a novel 3D point cloud registration module that explicitly embeds color signals into the geometry representation. Unlike previous state-of-the-art methods that rely on geometry alone, our module is specifically designed to correlate color and geometry effectively for the point cloud registration task. Our key contribution is a 2D-3D cross-modality learning algorithm that embeds features learned from color signals into the geometry representation. With our 2D-3D projection module, pixel features from a square region centered at each correspondence perceived from the images are effectively correlated with the point cloud representation. In this way, overlap regions can be inferred not only from the point clouds but also from texture appearance. Adding color is non-trivial: we compare against a variety of baselines for adding color to 3D, such as exhaustively adding per-pixel features or RGB values in an implicit manner. We adopt Predator as our baseline method and incorporate our module into it. Our experiments show a significant improvement on the 3DLoMatch benchmark: with our module, registration recall improves by 6.5% over the baseline with 5,000 sampled points. To validate the effectiveness of 2D features on 3D, we ablate different 2D pre-trained networks and show a positive correlation between the quality of the pre-trained weights and task performance. Our study reveals a significant advantage of correlating explicit deep color features with the point cloud in the registration task.
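The abstract describes a 2D-3D projection module that lifts pixel features from a square region around each projected point onto the point cloud. The following is a minimal sketch of that projection idea, assuming known camera intrinsics/extrinsics and a precomputed 2D feature map; the function name, signature, and window parameter are illustrative assumptions, not the authors' released code.

    # Illustrative sketch (not the authors' code): project each 3D point
    # into the image, then average-pool CNN features from a square window
    # around the projected pixel. The pooled features can be concatenated
    # with per-point geometric features downstream.
    import torch

    def lift_2d_features_to_points(points, feat_2d, K, T_world_to_cam, window=3):
        # points:  (N, 3) 3D points in world coordinates
        # feat_2d: (C, H, W) feature map from a 2D backbone
        # K:       (3, 3) camera intrinsics
        # T_world_to_cam: (4, 4) camera extrinsics
        # returns: (N, C) per-point 2D features (zeros where the point
        #          projects outside the image)
        C, H, W = feat_2d.shape
        N = points.shape[0]
        # World -> camera -> pixel coordinates.
        pts_h = torch.cat([points, torch.ones(N, 1)], dim=1)   # (N, 4)
        cam = (T_world_to_cam @ pts_h.T).T[:, :3]              # (N, 3)
        z = cam[:, 2].clamp(min=1e-6)
        uv = (K @ cam.T).T[:, :2] / z[:, None]                 # (N, 2)
        u = uv[:, 0].round().long()
        v = uv[:, 1].round().long()
        valid = (cam[:, 2] > 0) & (u >= 0) & (u < W) & (v >= 0) & (v < H)
        out = torch.zeros(N, C)
        r = window // 2
        for i in valid.nonzero(as_tuple=False).flatten().tolist():
            ui, vi = int(u[i]), int(v[i])
            u0, u1 = max(ui - r, 0), min(ui + r + 1, W)
            v0, v1 = max(vi - r, 0), min(vi + r + 1, H)
            # Average-pool the square region centered at the projection.
            out[i] = feat_2d[:, v0:v1, u0:u1].mean(dim=(1, 2))
        return out

Under these assumptions, the resulting (N, C) tensor would be concatenated with the per-point geometric features before they enter the registration backbone (Predator in the paper).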
Pages: 443-459
Page count: 17