Improving Semantic Image Segmentation via Label Fusion in Semantically Textured Meshes

Cited: 0
Authors
Fervers, Florian [1 ]
Breuer, Timo [1 ]
Stachowiak, Gregor [1 ]
Bullinger, Sebastian [1 ]
Bodensteiner, Christoph [1 ]
Arens, Michael [1 ]
Affiliations
[1] Fraunhofer IOSB, D-76275 Ettlingen, Germany
Source
PROCEEDINGS OF THE 17TH INTERNATIONAL JOINT CONFERENCE ON COMPUTER VISION, IMAGING AND COMPUTER GRAPHICS THEORY AND APPLICATIONS (VISAPP), VOL 5, 2022
Keywords
Semantic Segmentation; Mesh Reconstruction; Label Fusion
DOI
10.5220/0010841800003124
CLC Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Models for semantic segmentation require large amounts of hand-labeled training data, which are costly and time-consuming to produce. To address this, we present a label fusion framework that improves the semantic pixel labels of video sequences in an unsupervised manner. We use a 3D mesh representation of the environment and fuse the predictions of different frames into a consistent representation using semantic mesh textures. Rendering the semantic mesh with the original intrinsic and extrinsic camera parameters yields a set of improved semantic segmentation images. Thanks to our optimized CUDA implementation, we can exploit the entire c-dimensional probability distribution of annotations over c classes in an uncertainty-aware manner. We evaluate our method on the ScanNet dataset, where we improve annotations produced by the state-of-the-art segmentation network ESANet from 52.05% to 58.25% pixel accuracy. We publish the source code of our framework online to foster future research in this area (https://github.com/fferflo/semantic-meshes). To the best of our knowledge, this is the first publicly available label fusion framework for semantic image segmentation based on meshes with semantic textures.
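The abstract compresses the pipeline into a few sentences; the following is a minimal NumPy sketch of the uncertainty-aware fusion step it describes: per-frame softmax outputs are accumulated per mesh texel over all c classes (here via a log-probability sum, i.e. a naive-Bayes product of the per-frame distributions) and rendered back by taking the per-texel argmax. The function names, the pixel-to-texel correspondence input, and the exact fusion rule are illustrative assumptions, not the API or published method of the semantic-meshes framework.

```python
# Minimal sketch of uncertainty-aware label fusion over mesh texels.
# Assumes pixel-to-texel correspondences are already available (in the
# actual framework these would come from rendering the mesh with the
# original intrinsic/extrinsic camera parameters). All names are
# hypothetical, not the API of fferflo/semantic-meshes.
import numpy as np

def fuse_labels(frames, num_texels, num_classes, eps=1e-12):
    """Fuse per-frame softmax predictions into per-texel class distributions.

    frames: iterable of (probs, texel_ids) pairs, where
        probs     -- (num_pixels, num_classes) softmax output of a 2D
                     segmentation network (e.g. ESANet) for one frame,
        texel_ids -- (num_pixels,) index of the mesh texel each pixel
                     projects onto, or -1 if the pixel hits no surface.
    Returns a (num_texels, num_classes) array of fused distributions.
    """
    # Accumulate log-probabilities: summing logs corresponds to a
    # naive-Bayes product of the per-frame class distributions.
    log_acc = np.zeros((num_texels, num_classes))
    for probs, texel_ids in frames:
        valid = texel_ids >= 0
        np.add.at(log_acc, texel_ids[valid], np.log(probs[valid] + eps))
    # Renormalize each texel's accumulated logs into a distribution;
    # texels observed by no frame end up with a uniform distribution.
    log_acc -= log_acc.max(axis=1, keepdims=True)  # numerical stability
    fused = np.exp(log_acc)
    fused /= fused.sum(axis=1, keepdims=True)
    return fused

def render_labels(fused, texel_ids):
    """Re-render fused labels into an image: each pixel takes the argmax
    class of the texel it projects onto; pixels hitting no surface get -1."""
    labels = np.full(texel_ids.shape, -1, dtype=np.int64)
    valid = texel_ids >= 0
    labels[valid] = fused[texel_ids[valid]].argmax(axis=1)
    return labels
```

Summing log-probabilities rather than hard per-frame argmax votes is what makes such a fusion uncertainty-aware: a frame whose prediction for a texel is nearly uniform contributes correspondingly little to the fused distribution.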
Pages: 509-516
Page count: 8
Related Papers
50 items in total
  • [21] Manual-Protocol Inspired Technique for Improving Automated MR Image Segmentation during Label Fusion
    Bhagwat, Nikhil
    Pipitone, Jon
    Winterburn, Julie L.
    Guo, Ting
    Duerden, Emma G.
    Voineskos, Aristotle N.
    Lepage, Martin
    Miller, Steven P.
    Pruessner, Jens C.
    Chakravarty, M. Mallar
    FRONTIERS IN NEUROSCIENCE, 2016, 10
  • [22] Context Label Learning: Improving Background Class Representations in Semantic Segmentation
    Li, Zeju
    Kamnitsas, Konstantinos
    Ouyang, Cheng
    Chen, Chen
    Glocker, Ben
    IEEE TRANSACTIONS ON MEDICAL IMAGING, 2023, 42 (06) : 1885 - 1896
  • [23] Semantic Image Segmentation with Feature Fusion Based on Laplacian Pyramid
    Chen, Yongsheng
    NEURAL PROCESSING LETTERS, 2022, 54 (05) : 4153 - 4170
  • [25] Multi-path Fusion Network For Semantic Image Segmentation
    Song, Hui
    Zhou, Yun
    Jiang, Zhuqing
    Guo, Xiaoqiang
    Yang, Zixuan
    2018 IEEE/CIC INTERNATIONAL CONFERENCE ON COMMUNICATIONS IN CHINA (ICCC), 2018, : 90 - 94
  • [26] Stair Fusion Network for Remote Sensing Image Semantic Segmentation
    Hua, Wenyi
    Liu, Jia
    Liu, Fang
    Zhang, Wenhua
    An, Jiaqi
    IGARSS 2023 - 2023 IEEE INTERNATIONAL GEOSCIENCE AND REMOTE SENSING SYMPOSIUM, 2023, : 5499 - 5502
  • [27] CNN and Transformer Fusion for Remote Sensing Image Semantic Segmentation
    Chen, Xin
    Li, Dongfen
    Liu, Mingzhe
    Jia, Jiaru
    REMOTE SENSING, 2023, 15 (18)
  • [28] Sum-Fusion and Cascaded Interpolation for Semantic Image Segmentation
    Wang, Yan
    Hu, Jiani
    Deng, Weihong
    PROCEEDINGS 2017 4TH IAPR ASIAN CONFERENCE ON PATTERN RECOGNITION (ACPR), 2017, : 712 - 717
  • [29] Efficient Depth Fusion Transformer for Aerial Image Semantic Segmentation
    Yan, Li
    Huang, Jianming
    Xie, Hong
    Wei, Pengcheng
    Gao, Zhao
    REMOTE SENSING, 2022, 14 (05)
  • [30] Semantic Image Segmentation with Improved Position Attention and Feature Fusion
    Zhu, Hegui
    Miao, Yan
    Zhang, Xiangde
    NEURAL PROCESSING LETTERS, 2020, 52 (01) : 329 - 351