A review of panoptic segmentation for mobile mapping point clouds

Times cited: 4
Authors
Xiang, Binbin [1 ]
Yue, Yuanwen [1 ]
Peters, Torben [1 ]
Schindler, Konrad [1 ]
Affiliations
[1] Swiss Fed Inst Technol, Photogrammetry & Remote Sensing, CH-8093 Zurich, Switzerland
Keywords
Mobile mapping point clouds; 3D panoptic segmentation; 3D semantic segmentation; 3D instance segmentation; 3D deep learning backbones; CLASSIFICATION; NETWORKS; DATASET;
DOI
10.1016/j.isprsjprs.2023.08.008
Chinese Library Classification (CLC) number
P9 [Physical Geography]
Discipline classification code
0705; 070501
Abstract
3D point cloud panoptic segmentation is the combined task of (i) assigning each point to a semantic class and (ii) separating the points in each class into object instances. Recently, there has been increased interest in such comprehensive 3D scene understanding, building on the rapid advances of semantic segmentation due to the advent of deep 3D neural networks. Yet, to date there is very little work on panoptic segmentation of outdoor mobile-mapping data, and no systematic comparisons. The present paper tries to close that gap. It reviews the building blocks needed to assemble a panoptic segmentation pipeline, together with the related literature. Moreover, a modular pipeline is set up to perform comprehensive, systematic experiments that assess the state of panoptic segmentation in the context of street mapping. As a byproduct, we also provide the first public dataset for that task, obtained by extending the NPM3D dataset with instance labels. That dataset and our source code are publicly available. We discuss which adaptations are needed to apply current panoptic segmentation methods to outdoor scenes and large objects. Our study finds that for mobile mapping data, KPConv performs best but is slower, while PointNet++ is fastest but performs significantly worse; sparse CNNs lie in between. Regardless of the backbone, instance segmentation by clustering embedding features is better than using shifted coordinates.
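The abstract's main methodological finding is that, whatever the backbone (KPConv, PointNet++, or a sparse CNN), grouping points into instances by clustering learned embedding features works better than clustering center-shifted coordinates. The following is a minimal sketch of that class-wise clustering step, not the authors' implementation: it assumes per-point semantic predictions and instance embeddings produced by some backbone, and uses scikit-learn's DBSCAN with illustrative thresholds; the function name, array layout, and parameters are assumptions for illustration.

```python
# Illustrative sketch only: class-wise instance clustering on learned embeddings.
# Array names, the choice of DBSCAN, and all thresholds are assumptions,
# not the method described in the paper.
import numpy as np
from sklearn.cluster import DBSCAN

def panoptic_from_embeddings(semantic_pred, embeddings, thing_classes,
                             eps=0.5, min_pts=10):
    """semantic_pred: (N,) predicted class id per point
    embeddings:    (N, D) per-point instance-embedding vectors from the backbone
    thing_classes: class ids that have countable object instances
    Returns an (N,) array of instance ids; 0 marks 'stuff' or unclustered points."""
    instance_ids = np.zeros(len(semantic_pred), dtype=np.int64)
    next_id = 1
    for cls in thing_classes:
        mask = semantic_pred == cls
        if mask.sum() < min_pts:
            continue  # too few points of this class to form an instance
        idx = np.flatnonzero(mask)
        # Cluster this class's points in embedding space; label -1 means noise.
        labels = DBSCAN(eps=eps, min_samples=min_pts).fit_predict(embeddings[mask])
        for lbl in np.unique(labels):
            if lbl == -1:
                continue
            instance_ids[idx[labels == lbl]] = next_id
            next_id += 1
    return instance_ids
```

Clustering within one semantic class at a time keeps the panoptic output consistent by construction, since every instance inherits the class of its member points; swapping the embedding vectors for center-shifted point coordinates would give the alternative strategy that the abstract reports as inferior.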
Pages: 373-391
Number of pages: 19