Object Segmentation by Mining Cross-Modal Semantics

Cited by: 6
Authors
Wu, Zongwei [1 ,2 ]
Wang, Jingjing [3 ]
Zhou, Zhuyun [4 ]
An, Zhaochong [5 ]
Jiang, Qiuping [6 ]
Demonceaux, Cedric [4 ]
Sun, Guolei [5 ]
Timofte, Radu [1 ,2 ]
Affiliations
[1] Univ Wurzburg, Comp Vis Lab, CAIDAS, Wurzburg, Germany
[2] Univ Wurzburg, IFI, Wurzburg, Germany
[3] Anhui Univ Sci & Technol, Huainan, Peoples R China
[4] Univ Burgundy, CNRS, ICB, Dijon, France
[5] Swiss Fed Inst Technol, CVL, Zurich, Switzerland
[6] Ningbo Univ, Ningbo, Peoples R China
Source
PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2023 | 2023
Keywords
RGB-X Object Segmentation; Cross-Modal Semantics; Robustness; Network
DOI
10.1145/3581783.3611970
CLC Number
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Multi-sensor clues have shown promise for object segmentation, but inherent noise in each sensor, as well as calibration error in practice, may degrade segmentation accuracy. In this paper, we propose a novel approach that mines Cross-Modal Semantics to guide the fusion and decoding of multimodal features, with the aim of controlling each modality's contribution based on relative entropy. We explore semantics among the multimodal inputs in two aspects: the modality-shared consistency and the modality-specific variation. Specifically, we propose a novel network, termed XMSNet, consisting of (1) all-round attentive fusion (AF), (2) a coarse-to-fine decoder (CFD), and (3) cross-layer self-supervision. On the one hand, the AF block explicitly dissociates the shared and specific representations and learns to weight each modality's contribution by adjusting the proportion, region, and pattern, depending on input quality. On the other hand, our CFD first decodes the shared features and then refines the output through specificity-aware querying. Further, we enforce semantic consistency across the decoding layers to enable interaction across network hierarchies, improving feature discriminability. Exhaustive comparison on eleven datasets with depth or thermal clues, and on two challenging tasks, namely salient and camouflaged object segmentation, validates our effectiveness in terms of both performance and robustness. The source code is publicly available at https://github.com/Zongwei97/XMSNet.
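The abstract's core idea of controlling each modality's contribution based on relative entropy can be illustrated with a minimal sketch. This is not the authors' implementation (XMSNet's weighting is learned within the AF block); it is a simplified, hypothetical stand-in that down-weights a noisier modality by treating its normalized feature map as a distribution and scoring it by entropy. The function names `modality_weights` and `fuse` are invented for illustration.

```python
import numpy as np

def modality_weights(feats, eps=1e-8):
    """Assign each modality a fusion weight inversely related to its entropy.

    feats: list of non-negative (H, W) feature maps, one per modality.
    A diffuse, noisy map yields high entropy and thus a low weight;
    a confident, peaked map yields low entropy and a high weight.
    """
    entropies = []
    for f in feats:
        p = f / (f.sum() + eps)                   # normalize to a distribution
        entropies.append(-(p * np.log(p + eps)).sum())
    e = np.array(entropies)
    w = np.exp(-e)                                # lower entropy -> larger weight
    return w / w.sum()                            # weights sum to 1

def fuse(feats):
    """Fuse modalities as an entropy-weighted sum of their feature maps."""
    w = modality_weights(feats)
    return sum(wi * f for wi, f in zip(w, feats))

# Example: a peaked "RGB" map vs. a flat "depth" map (simulated noise).
rgb = np.zeros((4, 4)); rgb[0, 0] = 10.0
depth = np.ones((4, 4))
print(modality_weights([rgb, depth]))             # RGB receives the larger weight
```

In the paper the weighting is additionally spatial (per proportion, region, and pattern) and learned end to end; this global scalar version only conveys the quality-aware intuition.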
Pages: 3455-3464 (10 pages)