GRNet: Gridding Residual Network for Dense Point Cloud Completion

Cited by: 265
Authors
Xie, Haozhe [1 ,2 ,3 ]
Yao, Hongxun [1 ,2 ]
Zhou, Shangchen [4 ]
Mao, Jiageng [5 ]
Zhang, Shengping [2 ,6 ]
Sun, Wenxiu [7 ]
Affiliations
[1] Harbin Inst Technol, State Key Lab Robot & Syst, Harbin, Peoples R China
[2] Harbin Inst Technol, Fac Comp, Harbin, Peoples R China
[3] SenseTime Res, Shenzhen, Peoples R China
[4] Nanyang Technol Univ, Singapore, Singapore
[5] Chinese Univ Hong Kong, Hong Kong, Peoples R China
[6] Peng Cheng Lab, Shenzhen, Peoples R China
[7] SenseTime Res, Hong Kong, Peoples R China
Source
COMPUTER VISION - ECCV 2020, PT IX | 2020, Vol. 12354
Funding
National Natural Science Foundation of China;
Keywords
Point cloud completion; Gridding; Cubic feature sampling;
DOI
10.1007/978-3-030-58545-7_21
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Estimating the complete 3D point cloud from an incomplete one is a key problem in many vision and robotics applications. Mainstream methods (e.g., PCN and TopNet) use Multi-layer Perceptrons (MLPs) to directly process point clouds, which may cause the loss of details because the structural and contextual information of point clouds is not fully considered. To solve this problem, we introduce 3D grids as intermediate representations to regularize unordered point clouds and propose a novel Gridding Residual Network (GRNet) for point cloud completion. In particular, we devise two novel differentiable layers, named Gridding and Gridding Reverse, to convert between point clouds and 3D grids without losing structural information. We also present the differentiable Cubic Feature Sampling layer to extract features of neighboring points, which preserves context information. In addition, we design a new loss function, namely Gridding Loss, to calculate the L1 distance between the 3D grids of the predicted and ground truth point clouds, which is helpful to recover details. Experimental results indicate that the proposed GRNet performs favorably against state-of-the-art methods on the ShapeNet, Completion3D, and KITTI benchmarks.
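
The grid-based distance described in the abstract can be illustrated compactly. The following is a minimal NumPy sketch, not the authors' implementation (which is a differentiable layer inside the network): it scatters each point onto the eight vertices of its enclosing grid cell with trilinear-style weights, loosely mimicking the Gridding layer, and then takes the L1 distance between the predicted and ground-truth grids as a stand-in for the Gridding Loss. The function names gridding and gridding_loss, the resolution, and the weighting scheme are illustrative assumptions.

import numpy as np

def gridding(points, resolution=64):
    """Scatter a point cloud with coordinates in [-1, 1]^3 onto a dense
    resolution^3 grid. Each point spreads trilinear-style weights to the
    eight vertices of the cell containing it (a simplified, non-differentiable
    NumPy stand-in for the paper's Gridding layer)."""
    grid = np.zeros((resolution, resolution, resolution))
    coords = (points + 1.0) * 0.5 * (resolution - 1)  # map [-1, 1] to grid space
    lower = np.floor(coords).astype(int)
    frac = coords - lower
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                # Weight for this cell vertex: product of per-axis interpolation factors.
                w = ((frac[:, 0] if dx else 1 - frac[:, 0])
                     * (frac[:, 1] if dy else 1 - frac[:, 1])
                     * (frac[:, 2] if dz else 1 - frac[:, 2]))
                idx = np.clip(lower + np.array([dx, dy, dz]), 0, resolution - 1)
                np.add.at(grid, (idx[:, 0], idx[:, 1], idx[:, 2]), w)
    return grid

def gridding_loss(pred_points, gt_points, resolution=64):
    """L1 distance between the gridded predicted and ground-truth clouds."""
    return np.abs(gridding(pred_points, resolution)
                  - gridding(gt_points, resolution)).mean()

# Toy usage with random clouds in the unit cube.
pred = np.random.uniform(-1.0, 1.0, size=(2048, 3))
gt = np.random.uniform(-1.0, 1.0, size=(16384, 3))
print(gridding_loss(pred, gt))

In the paper both operations are differentiable so that the loss can be backpropagated to the predicted points; the sketch above only conveys the idea of comparing completions in a regular grid space rather than point-to-point.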
Pages: 365-381
Page count: 17
Cited References
52 in total
[1] Achlioptas P. Proceedings of Machine Learning Research, 2018, 80.
[2] Cadena C, Carlone L, Carrillo H, Latif Y, Scaramuzza D, Neira J, Reid I, Leonard J J. Past, Present, and Future of Simultaneous Localization and Mapping: Toward the Robust-Perception Age. IEEE Transactions on Robotics, 2016, 32(6): 1309-1332.
[3] Dai A, Qi C R, Niessner M. Shape Completion using 3D-Encoder-Predictor CNNs and Shape Synthesis. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017: 6545-6554.
[4] Fan H, Su H, Guibas L. A Point Set Generation Network for 3D Object Reconstruction from a Single Image. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017: 2463-2471.
[5] Geiger A, Lenz P, Stiller C, Urtasun R. Vision meets robotics: The KITTI dataset. International Journal of Robotics Research, 2013, 32(11): 1231-1237.
[6] Groueix T, Fisher M, Kim V G, Russell B C, Aubry M. A Papier-Mache Approach to Learning 3D Surface Generation. IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2018: 216-224.
[7] Han X, Li Z, Huang H, Kalogerakis E, Yu Y. High-Resolution Shape Completion Using Deep Neural Networks for Global Structure and Local Geometry Inference. IEEE International Conference on Computer Vision (ICCV), 2017: 85-93.
[8] Hassani K, Haley M. Unsupervised Multi-Task Feature Learning on Point Clouds. IEEE/CVF International Conference on Computer Vision (ICCV), 2019: 8159-8170.
[9] Hermosilla P. ACM Transactions on Graphics, 2018, 37. DOI: 10.1145/3272127.3275110.
[10] Hua B. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018.