MeshNet-SP: A Semantic Urban 3D Mesh Segmentation Network with Sparse Prior

Cited by: 1
Authors
Zhang, Guangyun [1]
Zhang, Rongting [1]
Affiliations
[1] Nanjing Tech Univ, Sch Geomat Sci & Technol, Nanjing 211800, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
3D real scene; urban 3D mesh; semantic segmentation; sparse prior; low intrinsic dimension; convolutional neural network
DOI
10.3390/rs15225324
Chinese Library Classification (CLC)
X [Environmental Science, Safety Science]
Discipline Codes
08; 0830
Abstract
A textured urban 3D mesh is an important component of 3D real scene technology, and semantically segmenting an urban 3D mesh is a key task in photogrammetry and remote sensing. However, because a 3D mesh has an irregular structure and its texture information is highly redundant, obtaining accurate and robust semantic segmentation results for an urban 3D mesh is challenging. To address this issue, we propose MeshNet-SP, a semantic urban 3D mesh segmentation network (MeshNet) with a sparse prior (SP). MeshNet-SP consists of a differentiable sparse coding (DSC) subnetwork and a semantic feature extraction (SFE) subnetwork. The DSC subnetwork learns low-intrinsic-dimensional features from raw texture information, which improves the effectiveness and robustness of semantic urban 3D mesh segmentation. The SFE subnetwork produces high-level semantic features from the combination of the mesh's geometric features and the low-intrinsic-dimensional texture features. The proposed method is evaluated on the SUM dataset. Ablation experiments demonstrate that the low-intrinsic-dimensional features are the key to accurate and robust semantic segmentation, and comparisons show that the proposed method achieves competitive accuracy, with maximum gains of 34.5%, 35.4%, and 31.8% in mean recall (mR), mean F1 score (mF1), and mean intersection over union (mIoU), respectively.
Pages: 19
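
The DSC subnetwork described in the abstract makes sparse coding differentiable so that the sparse prior can be trained end to end with the rest of the network. A common way to achieve this is to unroll a few ISTA iterations with learned weights (LISTA-style). The PyTorch sketch below is a minimal illustration of that general idea only; the layer sizes, iteration count, per-face feature dimensions, and the plain concatenation-plus-MLP standing in for the SFE subnetwork are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class DifferentiableSparseCoding(nn.Module):
    """LISTA-style unrolled ISTA: z ~ argmin_z ||x - Dz||_2^2 + lambda * ||z||_1.

    Hypothetical stand-in for the paper's DSC subnetwork; all sizes are assumptions.
    """

    def __init__(self, in_dim: int, code_dim: int, n_iters: int = 3, lam: float = 0.1):
        super().__init__()
        self.encode = nn.Linear(in_dim, code_dim, bias=False)    # plays the role of D^T / L
        self.mutual = nn.Linear(code_dim, code_dim, bias=False)  # plays the role of I - D^T D / L
        self.theta = nn.Parameter(torch.full((code_dim,), lam))  # learned per-channel threshold
        self.n_iters = n_iters

    def _soft_threshold(self, z: torch.Tensor) -> torch.Tensor:
        # Proximal operator of the L1 norm; keeps codes sparse, differentiable almost everywhere.
        return torch.sign(z) * torch.relu(z.abs() - self.theta)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b = self.encode(x)                      # (n_faces, code_dim)
        z = self._soft_threshold(b)
        for _ in range(self.n_iters):           # unrolled ISTA iterations with shared weights
            z = self._soft_threshold(b + self.mutual(z))
        return z                                # sparse, low-intrinsic-dimensional texture code


if __name__ == "__main__":
    n_faces = 4096
    texture = torch.randn(n_faces, 48)          # hypothetical per-face texture descriptors
    geometry = torch.randn(n_faces, 13)         # hypothetical per-face geometric features

    dsc = DifferentiableSparseCoding(in_dim=48, code_dim=16)
    codes = dsc(texture)

    # A plain concatenation + MLP stands in for the SFE subnetwork here.
    fused = torch.cat([geometry, codes], dim=-1)               # (n_faces, 29)
    sfe = nn.Sequential(nn.Linear(29, 64), nn.ReLU(), nn.Linear(64, 6))
    logits = sfe(fused)                                        # per-face scores for six classes,
    print(logits.shape)                                        # as in SUM: torch.Size([4096, 6])
```

Unrolling keeps the L1 sparsity prior inside the computation graph, so the dictionary-like weights and thresholds are optimized jointly with the segmentation loss rather than fit by a separate sparse-coding solver.
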
Related Papers
50 records in total
  • [21] 3D Video Semantic Segmentation for Wildfire Smoke
    Zhu, Guodong
    Chen, Zhenxue
    Liu, Chengyun
    Rong, Xuewen
    He, Weikai
    MACHINE VISION AND APPLICATIONS, 2020, 31
  • [22] A Prior Level Fusion Approach for the Semantic Segmentation of 3D Point Clouds Using Deep Learning
    Ballouch, Zouhair
    Hajji, Rafika
    Poux, Florent
    Kharroubi, Abderrazzaq
    Billen, Roland
    REMOTE SENSING, 2022, 14 (14)
  • [23] Joint 2D and 3D Semantic Segmentation with Consistent Instance Semantic
    Wan, Yingcai
    Fang, Lijin
    IEICE TRANSACTIONS ON COMMUNICATIONS, 2024, E107A (08): 1309-1318
  • [24] Deep Hierarchical Learning for 3D Semantic Segmentation
    Li, Chongshou
    Liu, Yuheng
    Li, Xinke
    Zhang, Yuning
    Li, Tianrui
    Yuan, Junsong
    INTERNATIONAL JOURNAL OF COMPUTER VISION, 2025: 4420-4441
  • [25] A 54.7 fps 3D Point Cloud Semantic Segmentation Processor with Sparse Grouping based Dilated Graph Convolutional Network for Mobile Devices
    Kim, Sangjin
    Kim, Sangyeob
    Lee, Juhyoung
    Yoo, Hoi-Jun
    2020 IEEE INTERNATIONAL SYMPOSIUM ON CIRCUITS AND SYSTEMS (ISCAS), 2020
  • [26] Visual Localization Using Sparse Semantic 3D Map
    Shi, Tianxin
    Shen, Shuhan
    Gao, Xiang
    Zhu, Lingjie
    2019 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2019: 315-319
  • [27] EGNet: 3D Semantic Segmentation Through Point–Voxel–Mesh Data for Euclidean–Geodesic Feature Fusion
    Li, Qi
    Song, Yu
    Jin, Xiaoqian
    Wu, Yan
    Zhang, Hang
    Zhao, Di
    SENSORS, 2024, 24 (24)
  • [28] Semantic Segmentation of 3D Point Cloud Based on Contextual Attention CNN
    Yang, J.
    Dang, J.
    TONGXIN XUEBAO/JOURNAL ON COMMUNICATIONS, 2020, 41 (07): 195-203
  • [29] U-Shaped Network Based on Transformer for 3D Point Clouds Semantic Segmentation
    Zhang, Jiazhe
    Li, Xingwei
    Zhao, Xianfa
    Ge, Yizhi
    Zhang, Zheng
    2021 THE 5TH INTERNATIONAL CONFERENCE ON VIDEO AND IMAGE PROCESSING, ICVIP 2021, 2021: 170-176
  • [30] VPA-Net: A Visual Perception Assistance Network for 3D LiDAR Semantic Segmentation
    Lin, Fangfang
    Lin, Tianliang
    Yao, Yu
    Ren, Haoling
    Wu, Jiangdong
    Cai, Qipeng
    PATTERN RECOGNITION, 2025, 158