Point Set Voting for Partial Point Cloud Analysis

Cited by: 21
Authors
Zhang, Junming [1 ]
Chen, Weijia [2 ]
Wang, Yuping [1 ]
Vasudevan, Ram [3 ]
Johnson-Roberson, Matthew [4 ]
Affiliations
[1] Univ Michigan, Dept Elect Engn & Comp Sci, Ann Arbor, MI 48109 USA
[2] Univ Michigan, Robot Program, Ann Arbor, MI 48109 USA
[3] Univ Michigan, Dept Mech Engn, Ann Arbor, MI 48109 USA
[4] Univ Michigan, Dept Naval Architecture & Marine Engn, Ann Arbor, MI 48109 USA
Keywords
Three-dimensional displays; Training; Task analysis; Solid modeling; Analytical models; Shape; Decoding; Deep learning methods; NEURAL-NETWORKS;
DOI
10.1109/LRA.2020.3048658
CLC Number
TP24 [Robotics];
Subject Classification Codes
080202 ; 1405 ;
Abstract
The continual improvement of 3D sensors has driven the development of algorithms to perform point cloud analysis. In fact, techniques for point cloud classification and segmentation have in recent years achieved incredible performance, driven in part by leveraging large synthetic datasets. Unfortunately, these same state-of-the-art approaches perform poorly when applied to incomplete point clouds. This limitation of existing algorithms is particularly concerning since point clouds generated by 3D sensors in the real world are usually incomplete due to perspective view or occlusion by other objects. This paper proposes a general model for partial point cloud analysis wherein the latent feature encoding a complete point cloud is inferred by applying a point set voting strategy. In particular, each local point set casts a vote that corresponds to a distribution in the latent space, and the optimal latent feature is the one with the highest probability. This approach ensures that any subsequent point cloud analysis is robust to partial observation while simultaneously guaranteeing that the proposed model is able to output multiple possible results. This paper illustrates that the proposed method achieves state-of-the-art performance on shape classification, part segmentation, and point cloud completion.
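The voting step described in the abstract can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: it assumes each local point set's vote is a diagonal Gaussian in the latent space and fuses the votes via a product of Gaussians, whose mode is the precision-weighted mean. The function name `aggregate_votes` and the array shapes are hypothetical.

```python
import numpy as np

def aggregate_votes(means, log_vars):
    """Fuse V latent-space votes, each an independent diagonal Gaussian.

    means:    (V, D) per-vote latent means
    log_vars: (V, D) per-vote log-variances (vote confidence)

    Returns the (D,) mode of the product of the V Gaussians, i.e. the
    single latent feature with the highest joint probability.
    """
    precisions = np.exp(-log_vars)              # 1 / sigma^2 for each vote
    weighted = (precisions * means).sum(axis=0) # precision-weighted sum
    total_precision = precisions.sum(axis=0)
    return weighted / total_precision           # precision-weighted mean

# Toy example: three votes in a 2-D latent space with equal confidence,
# in which case the fused feature reduces to the plain mean of the votes.
means = np.array([[1.0, 0.0], [0.9, 0.2], [1.1, -0.2]])
log_vars = np.zeros((3, 2))
z = aggregate_votes(means, log_vars)            # -> [1.0, 0.0]
```

Lowering a vote's log-variance raises its precision and pulls the fused latent feature toward that vote, which matches the intuition that more confident local observations should dominate the consensus.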
Pages: 596-603
Page count: 8
References
40 in total
  • [1] [Anonymous], 2014, ICLR
  • [2] Generalizing the Hough Transform to Detect Arbitrary Shapes
    Ballard, D. H.
    [J]. PATTERN RECOGNITION, 1981, 13 (02) : 111 - 122
  • [3] The 3D Hough Transform for Plane Detection in Point Clouds: A Review and a new Accumulator Design
    Borrmann, Dorit
    Elseberg, Jan
    Lingemann, Kai
    Nuechter, Andreas
    [J]. 3D RESEARCH, 2011, 2 (02): : 1 - 13
  • [4] Chang A. X., 2015, ARXIV
  • [5] Shape Completion using 3D-Encoder-Predictor CNNs and Shape Synthesis
    Dai, Angela
    Qi, Charles Ruizhongtai
    Niessner, Matthias
    [J]. 30TH IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2017), 2017, : 6545 - 6554
  • [6] ScanNet: Richly-annotated 3D Reconstructions of Indoor Scenes
    Dai, Angela
    Chang, Angel X.
    Savva, Manolis
    Halber, Maciej
    Funkhouser, Thomas
    Niessner, Matthias
    [J]. 30TH IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2017), 2017, : 2432 - 2443
  • [7] Use of the Hough Transformation to Detect Lines and Curves in Pictures
    Duda, R. O.
    Hart, P. E.
    [J]. COMMUNICATIONS OF THE ACM, 1972, 15 (01) : 11 - 15
  • [8] A Point Set Generation Network for 3D Object Reconstruction from a Single Image
    Fan, Haoqiang
    Su, Hao
    Guibas, Leonidas
    [J]. 30TH IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2017), 2017, : 2463 - 2471
  • [9] GVCNN: Group-View Convolutional Neural Networks for 3D Shape Recognition
    Feng, Yifan
    Zhang, Zizhao
    Zhao, Xibin
    Ji, Rongrong
    Gao, Yue
    [J]. 2018 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2018, : 264 - 272
  • [10] A Papier-Mache Approach to Learning 3D Surface Generation
    Groueix, Thibault
    Fisher, Matthew
    Kim, Vladimir G.
    Russell, Bryan C.
    Aubry, Mathieu
    [J]. 2018 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2018, : 216 - 224