CubeNet: X-shape connection for camouflaged object detection

Cited by: 72
Authors
Zhuge, Mingchen [1 ,2 ]
Lu, Xiankai [1 ]
Guo, Yiyou [3 ]
Cai, Zhihua [2 ]
Chen, Shuhan [4 ]
Affiliations
[1] Shandong Univ, Sch Software, Jinan, Peoples R China
[2] China Univ Geosci, Sch Comp Sci, Wuhan, Peoples R China
[3] Tongji Univ, Coll Surveying & Geoinformat, Shanghai, Peoples R China
[4] Yangzhou Univ, Sch Informat Engn, Yangzhou, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Camouflaged object detection; Neural network; Edge guidance; Novel feature aggregation;
DOI
10.1016/j.patcog.2022.108644
Chinese Library Classification
TP18 [Artificial intelligence theory];
Discipline classification codes
081104; 0812; 0835; 1405;
Abstract
Camouflaged object detection (COD) aims to detect out-of-attention regions in an image. Existing binary segmentation solutions cannot tackle COD easily, because camouflaged objects often come with weak boundaries, low contrast, or patterns similar to the background; a more effective scheme is therefore needed. In this work, we propose a new COD framework called CubeNet, which introduces an X connection into the standard encoder-decoder architecture. Specifically, CubeNet consists of two square fusion decoders (SFDs) and a sub edge decoder (SED). The specially designed SFD takes full advantage of the low-level and high-level features extracted from the encoder-decoder blocks, providing more powerful representations at each stage. To explicitly model the weak boundaries of objects, we introduce a SED between the two SFDs. With this holistic design, the three decoder modules resolve the challenging ambiguity of camouflaged object detection. CubeNet significantly advances cutting-edge models on three challenging COD datasets (i.e., COD10K, CAMO, and CHAMELEON) and achieves real-time (50 fps) inference. (c) 2022 Elsevier Ltd. All rights reserved.
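The record contains no implementation details beyond the abstract, so the following is a minimal PyTorch sketch of the decoder arrangement it describes: two square fusion decoders (SFDs) with a sub edge decoder (SED) between them, where each SFD fuses low-level and high-level encoder features (the "X"-style cross connection). All module definitions, channel widths, the toy two-stage encoder, and the exact fusion wiring are illustrative assumptions, not the paper's actual implementation.

# A minimal, illustrative sketch of the decoder arrangement described in the
# abstract. Module names, channel sizes, the fusion operations, and the toy
# encoder are assumptions for illustration, not the paper's implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_bn_relu(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1, bias=False),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

class SquareFusionDecoder(nn.Module):
    """Fuses a low-level and a high-level feature map at each stage
    (an "X"-style cross connection between encoder and decoder paths)."""
    def __init__(self, ch):
        super().__init__()
        self.fuse = conv_bn_relu(2 * ch, ch)
        self.out = nn.Conv2d(ch, 1, 1)  # single-channel prediction map

    def forward(self, low, high):
        # Upsample the high-level feature to the low-level resolution,
        # then fuse the two streams (crossing low/high-level information).
        high = F.interpolate(high, size=low.shape[2:], mode="bilinear",
                             align_corners=False)
        x = self.fuse(torch.cat([low, high], dim=1))
        return x, self.out(x)

class SubEdgeDecoder(nn.Module):
    """Predicts an edge map that makes weak object boundaries explicit."""
    def __init__(self, ch):
        super().__init__()
        self.body = conv_bn_relu(ch, ch)
        self.edge = nn.Conv2d(ch, 1, 1)

    def forward(self, x):
        x = self.body(x)
        return x, self.edge(x)

class CubeNetSketch(nn.Module):
    def __init__(self, ch=64):
        super().__init__()
        # Toy two-stage encoder standing in for the real backbone.
        self.enc1 = conv_bn_relu(3, ch)
        self.enc2 = conv_bn_relu(ch, ch)
        self.sfd1 = SquareFusionDecoder(ch)
        self.sed = SubEdgeDecoder(ch)
        self.sfd2 = SquareFusionDecoder(ch)

    def forward(self, img):
        low = self.enc1(img)                    # low-level features
        high = self.enc2(F.max_pool2d(low, 2))  # high-level features
        f1, coarse = self.sfd1(low, high)       # first SFD: coarse mask
        f2, edge = self.sed(f1)                 # SED: boundary guidance
        _, fine = self.sfd2(f2, high)           # second SFD: refined mask
        return coarse, edge, fine

if __name__ == "__main__":
    model = CubeNetSketch()
    coarse, edge, fine = model(torch.randn(1, 3, 224, 224))
    print(coarse.shape, edge.shape, fine.shape)

The design choice the sketch mirrors is placing the SED between the two SFDs so that the second SFD refines its prediction from boundary-aware features; the coarse, edge, and fine outputs would each receive supervision in a typical multi-decoder setup.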
Pages: 10