Improving Semantic Image Segmentation via Label Fusion in Semantically Textured Meshes

Cited by: 0
Authors
Fervers, Florian [1 ]
Breuer, Timo [1 ]
Stachowiak, Gregor [1 ]
Bullinger, Sebastian [1 ]
Bodensteiner, Christoph [1 ]
Arens, Michael [1 ]
Affiliations
[1] Fraunhofer IOSB, D-76275 Ettlingen, Germany
Source
PROCEEDINGS OF THE 17TH INTERNATIONAL JOINT CONFERENCE ON COMPUTER VISION, IMAGING AND COMPUTER GRAPHICS THEORY AND APPLICATIONS (VISAPP), VOL 5 | 2022
Keywords
Semantic Segmentation; Mesh Reconstruction; Label Fusion;
DOI
10.5220/0010841800003124
Chinese Library Classification
TP18 (Theory of Artificial Intelligence);
Discipline Classification Codes
081104; 0812; 0835; 1405;
Abstract
Models for semantic segmentation require a large amount of hand-labeled training data, which is costly and time-consuming to produce. To address this, we present a label fusion framework that improves semantic pixel labels of video sequences in an unsupervised manner. We make use of a 3D mesh representation of the environment and fuse the predictions of different frames into a consistent representation using semantic mesh textures. Rendering the semantic mesh using the original intrinsic and extrinsic camera parameters yields a set of improved semantic segmentation images. Due to our optimized CUDA implementation, we are able to exploit the entire c-dimensional probability distribution of annotations over c classes in an uncertainty-aware manner. We evaluate our method on the ScanNet dataset, where we improve annotations produced by the state-of-the-art segmentation network ESANet from 52.05% to 58.25% pixel accuracy. We publish the source code of our framework online to foster future research in this area (https://github.com/fferflo/semantic-meshes). To the best of our knowledge, this is the first publicly available label fusion framework for semantic image segmentation based on meshes with semantic textures.
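The uncertainty-aware fusion described in the abstract can be illustrated with a minimal sketch: per-pixel class probability distributions from multiple frames are accumulated into the mesh texels they project onto, by summing log-probabilities under an independence assumption. This is not the paper's CUDA implementation (see the linked repository for that); the function name and array layout below are illustrative assumptions.

```python
import numpy as np

def fuse_labels(texel_logprobs, pixel_probs, texel_ids):
    """Accumulate per-pixel class distributions into per-texel
    log-probability sums (Bayesian fusion, assuming frames are
    independent observations of the same surface point).

    texel_logprobs: (T, C) running log-probability sum per texel
    pixel_probs:    (N, C) softmax outputs for N pixels of one frame
    texel_ids:      (N,)   texel index each pixel projects onto
    """
    eps = 1e-12  # avoid log(0) for classes with zero probability
    # Unbuffered indexed add: multiple pixels may hit the same texel.
    np.add.at(texel_logprobs, texel_ids, np.log(pixel_probs + eps))
    return texel_logprobs

# Two frames vote on the same texel with disagreeing predictions.
T, C = 2, 3
texels = np.zeros((T, C))
frame1 = np.array([[0.7, 0.2, 0.1]])  # confident in class 0
frame2 = np.array([[0.2, 0.6, 0.2]])  # weakly prefers class 1
for probs in (frame1, frame2):
    fuse_labels(texels, probs, np.array([0]))

# The confident vote dominates: 0.7 * 0.2 > 0.2 * 0.6 for texel 0.
fused = texels.argmax(axis=1)
```

Rendering the fused texture back through each frame's camera parameters would then replace every pixel's original label with the argmax class of its texel, which is how the per-frame annotations are improved.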
Pages: 509-516
Page count: 8