Camouflaged object detection with counterfactual intervention

Cited: 8
Authors
Li, Xiaofei [1]
Li, Hongying [1]
Zhou, Hao [2]
Yu, Miaomiao [1]
Chen, Dong [3]
Li, Shuohao [1]
Zhang, Jun [1]
Affiliations
[1] Natl Univ Def Technol, Lab Big Data & Decis, 109 Deya Rd, Changsha 410003, Hunan, Peoples R China
[2] Naval Univ Engn, Dept Operat & Planning, 717 Jianshe Ave, Wuhan 430033, Hubei, Peoples R China
[3] Natl Univ Def Technol, Sci & Technol Informat Syst Engn Lab, 109 Deya Rd, Changsha 410003, Hunan, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Camouflaged object detection; Texture-aware; Context-aware; Counterfactual intervention; SEGMENTATION; NETWORK;
DOI
10.1016/j.neucom.2023.126530
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
Camouflaged object detection (COD) aims to identify camouflaged objects hidden in their surroundings, a valuable yet challenging task. The main challenge is that camouflaged object datasets contain ambiguous semantic biases that degrade COD results. To address this challenge, we design a counterfactual intervention network (CINet) to mitigate the influence of these ambiguous semantic biases and achieve accurate COD. Specifically, our CINet consists of three key modules: a texture-aware interaction module (TIM), a context-aware fusion module (CFM), and a counterfactual intervention module (CIM). The TIM extracts refined textures for accurate localization, the CFM fuses multi-scale contextual features to enhance detection performance, and the CIM learns more effective textures to make unbiased predictions. Unlike most existing COD methods, which capture contextual features directly through the final loss function, we develop a counterfactual intervention strategy to learn more effective contextual textures. Extensive experiments on four challenging benchmark datasets demonstrate that our CINet significantly outperforms 31 state-of-the-art methods.
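The abstract describes the counterfactual intervention strategy only at a high level. Below is a minimal, hypothetical PyTorch sketch of the general idea behind such strategies: a factual prediction from learned texture features is contrasted with a counterfactual prediction made after intervening on those features (replacing them with random noise), and the resulting "effect" is supervised so the network cannot rely on dataset biases alone. The class names (TextureBranch, CounterfactualHead) and all internals are illustrative assumptions, not the authors' TIM/CFM/CIM modules.

# Hypothetical sketch (PyTorch): counterfactual intervention on texture features.
import torch
import torch.nn as nn

class TextureBranch(nn.Module):
    # Stand-in for a texture-aware module; a single conv block for illustration.
    def __init__(self, ch):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1),
            nn.BatchNorm2d(ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.conv(x)

class CounterfactualHead(nn.Module):
    # Predicts a mask twice: once from the learned texture features (factual)
    # and once after intervening on the input with random noise (counterfactual).
    # Their difference estimates the causal effect of the learned textures.
    def __init__(self, ch):
        super().__init__()
        self.texture = TextureBranch(ch)
        self.pred = nn.Conv2d(ch, 1, 1)

    def forward(self, feat):
        factual = self.pred(self.texture(feat))      # prediction from real features
        noise = torch.randn_like(feat)               # intervention: features -> noise
        counterfactual = self.pred(self.texture(noise))
        effect = factual - counterfactual            # total effect of the textures
        return factual, effect

# Usage: supervising both outputs with the ground-truth mask pushes the texture
# branch to contribute information beyond what biased context already predicts.
feat = torch.randn(2, 64, 44, 44)
head = CounterfactualHead(64)
mask, effect = head(feat)
print(mask.shape, effect.shape)  # torch.Size([2, 1, 44, 44]) for both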
Pages: 13
Related Papers
50 items in total
  • [41] Boundary enhancement and refinement network for camouflaged object detection
    Xia, Chenxing
    Cao, Huizhen
    Gao, Xiuju
    Ge, Bin
    Li, Kuan-Ching
    Fang, Xianjin
    Zhang, Yan
    Liang, Xingzhu
    MACHINE VISION AND APPLICATIONS, 2024, 35 (05)
  • [42] Depth alignment interaction network for camouflaged object detection
    Bi, Hongbo
    Tong, Yuyu
    Zhang, Jiayuan
    Zhang, Cong
    Tong, Jinghui
    Jin, Wei
    MULTIMEDIA SYSTEMS, 2024, 30
  • [43] Feature Aggregation and Propagation Network for Camouflaged Object Detection
    Zhou, Tao
    Zhou, Yi
    Gong, Chen
    Yang, Jian
    Zhang, Yu
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2022, 31 : 7036 - 7047
  • [44] Camouflaged Object Detection Based on Ternary Cascade Perception
    Jiang, Xinhao
    Cai, Wei
    Ding, Yao
    Wang, Xin
    Yang, Zhiyong
    Di, Xingyu
    Gao, Weijie
    REMOTE SENSING, 2023, 15 (05)
  • [45] Ternary symmetric fusion network for camouflaged object detection
    Deng, Yangyang
    Ma, Jianxin
    Li, Yajun
    Zhang, Min
    Wang, Li
    APPLIED INTELLIGENCE, 2023, 53 : 25216 - 25231
  • [46] Camouflaged Object Detection with a Feature Lateral Connection Network
    Wang, Tao
    Wang, Jian
    Wang, Ruihao
    ELECTRONICS, 2023, 12 (12)
  • [47] OAFormer: Occlusion Aware Transformer for Camouflaged Object Detection
    Yang, Xin
    Zhu, Hengliang
    Mao, Guojun
    Xing, Shuli
    2023 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO, ICME, 2023, : 1421 - 1426
  • [48] Denoising Diffusion Implicit Model for Camouflaged Object Detection
    Cai, Wei
    Gao, Weijie
    Jiang, Xinhao
    Wang, Xin
    Di, Xingyu
    ELECTRONICS, 2024, 13 (18)
  • [49] MGL: Mutual Graph Learning for Camouflaged Object Detection
    Zhai, Qiang
    Li, Xin
    Yang, Fan
    Jiao, Zhicheng
    Luo, Ping
    Cheng, Hong
    Liu, Zicheng
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2023, 32 : 1897 - 1910
  • [50] ForegroundNet: Domain Adaptive Transformer for Camouflaged Object Detection
    Liu, Zhouyong
    Luo, Shun
    Sun, Shilei
    Li, Chunguo
    Huang, Yongming
    Yang, Luxi
    IEEE SENSORS JOURNAL, 2024, 24 (14) : 21972 - 21986