Camouflaged object detection with counterfactual intervention

Cited: 8
Authors
Li, Xiaofei [1 ]
Li, Hongying [1 ]
Zhou, Hao [2 ]
Yu, Miaomiao [1 ]
Chen, Dong [3 ]
Li, Shuohao [1 ]
Zhang, Jun [1 ]
Affiliations
[1] Natl Univ Def Technol, Lab Big Data & Decis, 109 Deya Rd, Changsha 410003, Hunan, Peoples R China
[2] Naval Univ Engn, Dept Operat & Planning, 717 Jianshe Ave, Wuhan 430033, Hubei, Peoples R China
[3] Natl Univ Def Technol, Sci & Technol Informat Syst Engn Lab, 109 Deya Rd, Changsha 410003, Hunan, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Camouflaged object detection; Texture-aware; Context-aware; Counterfactual intervention; SEGMENTATION; NETWORK;
DOI
10.1016/j.neucom.2023.126530
CLC Number
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Camouflaged object detection (COD) aims to identify camouflaged objects hidden in their surroundings, a valuable yet challenging task. The main challenge is that camouflaged object datasets contain ambiguous semantic biases that affect COD results. To address this challenge, we design a counterfactual intervention network (CINet) to mitigate the influence of these ambiguous semantic biases and achieve accurate COD. Specifically, our CINet consists of three key modules, i.e., a texture-aware interaction module (TIM), a context-aware fusion module (CFM), and a counterfactual intervention module (CIM). The TIM extracts refined textures for accurate localization, the CFM fuses multi-scale contextual features to enhance detection performance, and the CIM learns more effective textures and makes unbiased predictions. Unlike most existing COD methods, which capture contextual features directly through the final loss function, we develop a counterfactual intervention strategy to learn more effective contextual textures. Extensive experiments on four challenging benchmark datasets demonstrate that our CINet significantly outperforms 31 state-of-the-art methods.
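The abstract describes a three-module architecture (TIM, CFM, CIM) combined through a counterfactual intervention strategy. As a rough illustration of how such a pipeline might be wired together, below is a minimal PyTorch sketch; the module internals, channel sizes, and the spatial-shuffle intervention are illustrative assumptions, not the authors' implementation (see the DOI above for the actual method).

# Hypothetical sketch of a CINet-style pipeline. All internals are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TIM(nn.Module):
    """Texture-aware interaction module (hypothetical): residually refines
    texture detail in one backbone feature map for localization."""
    def __init__(self, ch):
        super().__init__()
        self.refine = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch), nn.ReLU(inplace=True),
        )
    def forward(self, x):
        return x + self.refine(x)  # residual texture refinement

class CFM(nn.Module):
    """Context-aware fusion module (hypothetical): projects multi-scale
    features to a common width and sums them at the finest resolution."""
    def __init__(self, chs, out_ch):
        super().__init__()
        self.proj = nn.ModuleList(nn.Conv2d(c, out_ch, 1) for c in chs)
    def forward(self, feats):
        target = feats[0].shape[-2:]  # finest spatial scale
        return sum(F.interpolate(p(f), size=target, mode='bilinear',
                                 align_corners=False)
                   for p, f in zip(self.proj, feats))

class CIM(nn.Module):
    """Counterfactual intervention module (hypothetical): compares the factual
    prediction against one made from 'intervened' features, so the prediction
    cannot rely purely on biased contextual layout."""
    def __init__(self, ch):
        super().__init__()
        self.head = nn.Conv2d(ch, 1, 1)
    def forward(self, fused):
        factual = self.head(fused)
        # One possible intervention: destroy contextual layout by shuffling
        # the fused features along the width dimension.
        idx = torch.randperm(fused.shape[-1], device=fused.device)
        counterfactual = self.head(fused[..., idx])
        # Debiased logits = factual minus counterfactual (TDE-style subtraction).
        return factual - counterfactual

class CINetSketch(nn.Module):
    def __init__(self, chs=(64, 128, 256)):
        super().__init__()
        self.tims = nn.ModuleList(TIM(c) for c in chs)
        self.cfm = CFM(chs, out_ch=64)
        self.cim = CIM(64)
    def forward(self, feats):  # feats: multi-scale backbone features
        feats = [tim(f) for tim, f in zip(self.tims, feats)]
        return self.cim(self.cfm(feats))

# Usage with dummy multi-scale features:
feats = [torch.randn(1, 64, 88, 88), torch.randn(1, 128, 44, 44),
         torch.randn(1, 256, 22, 22)]
print(CINetSketch()(feats).shape)  # torch.Size([1, 1, 88, 88])

The CIM above follows the common counterfactual-reasoning recipe of subtracting a counterfactual prediction from the factual one to cancel context-induced bias; the paper's actual intervention and training losses may differ.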
Pages: 13
Related Papers
50 in total
  • [21] Stealth sight: A multi perspective approach for camouflaged object detection
    Domnic, S.
    Jayanthan, K. S.
    IMAGE AND VISION COMPUTING, 2025, 157
  • [22] Frequency-Spatial Entanglement Learning for Camouflaged Object Detection
    Sun, Yanguang
    Xu, Chunyan
    Yang, Jian
    Xuan, Hanyu
    Luo, Lei
    COMPUTER VISION - ECCV 2024, PT VI, 2025, 15064 : 343 - 360
  • [23] Frequency-Guided Spatial Adaptation for Camouflaged Object Detection
    Zhang, Shizhou
    Kong, Dexuan
    Xing, Yinghui
    Lu, Yue
    Ran, Lingyan
    Liang, Guoqiang
    Wang, Hexu
    Zhang, Yanning
    IEEE TRANSACTIONS ON MULTIMEDIA, 2025, 27 : 72 - 83
  • [24] A survey on deep learning-based camouflaged object detection
    Zhong, Junmin
    Wang, Anzhi
    Ren, Chunhong
    Wu, Jintao
    MULTIMEDIA SYSTEMS, 2024, 30 (05)
  • [25] Semantic-aware representations for unsupervised Camouflaged Object Detection
    Lu, Zelin
    Zhao, Xing
    Xie, Liang
    Liang, Haoran
    Liang, Ronghua
    JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION, 2025, 107
  • [26] Contextual feature fusion and refinement network for camouflaged object detection
    Yang, Jinyu
    Shi, Yanjiao
    Jiang, Ying
    Lu, Zixuan
    Yi, Yugen
    INTERNATIONAL JOURNAL OF MACHINE LEARNING AND CYBERNETICS, 2025, 16 (03) : 1489 - 1505
  • [27] Frequency-aware Camouflaged Object Detection
    Lin, Jiaying
    Tan, Xin
    Xu, Ke
    Ma, Lizhuang
    Lau, Rynson W. H.
    ACM TRANSACTIONS ON MULTIMEDIA COMPUTING COMMUNICATIONS AND APPLICATIONS, 2023, 19 (02)
  • [28] Key Object Detection: Unifying Salient and Camouflaged Object Detection Into One Task
    Yin, Pengyu
    Fu, Keren
    Zhao, Qijun
    PATTERN RECOGNITION AND COMPUTER VISION, PRCV 2024, PT XII, 2025, 15042 : 536 - 550
  • [29] Decoupling and Integration Network for Camouflaged Object Detection
    Zhou, Xiaofei
    Wu, Zhicong
    Cong, Runmin
    IEEE TRANSACTIONS ON MULTIMEDIA, 2024, 26 : 7114 - 7129
  • [30] Search and recovery network for camouflaged object detection
    Liu, Guangrui
    Wu, Wei
    IMAGE AND VISION COMPUTING, 2024, 151