Camouflaged Object Detection Based on Deep Learning with Attention-Guided Edge Detection and Multi-Scale Context Fusion

Cited by: 1
Authors
Wen, Yalin [1 ]
Ke, Wei [1 ,2 ]
Sheng, Hao [1 ,3 ,4 ]
Affiliations
[1] Macao Polytech Univ, Fac Appl Sci, Macau 999078, Peoples R China
[2] Macao Polytech Univ, Engn Res Ctr Appl Technol Machine Translat & Artif, Minist Educ, Macau 999078, Peoples R China
[3] Beihang Univ, Sch Comp Sci & Engn, State Key Lab Virtual Real Technol & Syst, Beijing 100191, Peoples R China
[4] Beihang Univ, Zhongfa Aviat Inst, Hangzhou 310000, Peoples R China
Source
APPLIED SCIENCES-BASEL | 2024, Vol. 14, Issue 06
Keywords
camouflaged object detection; EfficientNet; salient object detection; deep learning; NETWORK
DOI
10.3390/app14062494
Chinese Library Classification (CLC)
O6 [Chemistry]
Subject Classification Code
0703
Abstract
In nature, camouflaged objects exhibit colors and textures that closely resemble their backgrounds, creating visual illusions that conceal them from predators. The same similarity makes camouflaged object detection (COD) very challenging. COD methods based on deep neural networks are attracting increasing attention; they improve model performance and computational efficiency by extracting edge information and fusing multi-layer features. Our improvement targets the efficiency of the encoder-decoder process. We develop a variant model that combines the Swin Transformer (Swin-T) and EfficientNet-B7, integrating the strengths of both backbones, and employs an attention-guided tracking module to efficiently extract edge information and identify objects in camouflaged environments. We also incorporate dense skip connections to strengthen the aggregation of deep-level feature information. A boundary-aware attention module is added to the final layer of the initial shallow-feature recognition phase; it uses the Fourier transform to quickly relay edge-specific information from the shallow semantics to subsequent stages, thereby improving feature recognition and edge extraction. In the later deep-semantic-extraction phase, a dense skip joint attention module improves the decoder's performance and efficiency in capturing precise deep-level information, identifying the details and edge information of still-undetected camouflaged objects across channels and spatial locations. Unlike previous methods, we introduce an adaptive pixel strength loss function to handle the key captured information. The proposed method performs strongly on three current benchmark datasets (CHAMELEON, CAMO, COD10K): compared with 26 previously proposed methods under 4 evaluation metrics, it exhibits favorable competitiveness.
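Note: the record itself contains no code. Purely as illustration, the following is a minimal PyTorch sketch of how a boundary-aware attention block might use the Fourier transform to isolate high-frequency (edge-like) cues in a shallow feature map and re-weight the features before they are passed to later stages. The class name BoundaryAwareAttention, the cutoff_ratio hyperparameter, and the convolutional gating are assumptions made for this sketch, not the authors' implementation.

```python
# Illustrative sketch (not the authors' code): a boundary-aware attention block
# that uses a 2-D FFT to emphasise high-frequency (edge-like) components of a
# shallow feature map before passing them downstream.
import torch
import torch.nn as nn
import torch.fft


class BoundaryAwareAttention(nn.Module):
    """Hypothetical FFT-based edge attention over a shallow feature map."""

    def __init__(self, channels: int, cutoff_ratio: float = 0.25):
        super().__init__()
        self.cutoff_ratio = cutoff_ratio          # fraction of low frequencies to suppress (assumed)
        self.gate = nn.Sequential(                # turns edge evidence into an attention map
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        _, _, h, w = x.shape
        freq = torch.fft.fft2(x, norm="ortho")            # 2-D FFT per channel
        freq = torch.fft.fftshift(freq, dim=(-2, -1))     # move low frequencies to the centre

        # Build a high-pass mask: zero out a central (low-frequency) square.
        mask = torch.ones(h, w, device=x.device)
        ch, cw = int(h * self.cutoff_ratio), int(w * self.cutoff_ratio)
        mask[h // 2 - ch:h // 2 + ch, w // 2 - cw:w // 2 + cw] = 0.0

        freq = freq * mask                                # keep only high-frequency content
        freq = torch.fft.ifftshift(freq, dim=(-2, -1))
        edges = torch.fft.ifft2(freq, norm="ortho").real  # back to the spatial domain

        attn = self.gate(edges)                           # edge-guided attention map in [0, 1]
        return x * attn + x                               # re-weight features, keep residual path


if __name__ == "__main__":
    feats = torch.randn(2, 64, 88, 88)                    # toy shallow backbone features
    out = BoundaryAwareAttention(64)(feats)
    print(out.shape)                                      # torch.Size([2, 64, 88, 88])
```

A frequency-domain high-pass mask is only one cheap way to expose boundary cues; the module described in the abstract may combine the Fourier step with learned attention quite differently.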
Pages: 15
Related Papers
50 records in total
  • [31] Yang, Chao; Zhang, Ce; Jiang, Longyu; Zhang, Xinwen. Underwater image object detection based on multi-scale feature fusion. MACHINE VISION AND APPLICATIONS, 2024, 35 (06).
  • [32] Wu, Jiajia; Han, Guangliang; Wang, Haining; Yang, Hang; Li, Qingqing; Liu, Dongxu; Ye, Fangjian; Liu, Peixun. Progressive Guided Fusion Network With Multi-Modal and Multi-Scale Attention for RGB-D Salient Object Detection. IEEE ACCESS, 2021, 9: 150608-150622.
  • [33] Sun, Wei; Wang, Qianzhou; Tian, Yulong; Yang, Xiaobao; Kong, Xianguang; Dong, Yizhuo; Zhang, Yanning. Multi-level cross-knowledge fusion with edge guidance for camouflaged object detection. KNOWLEDGE-BASED SYSTEMS, 2025, 311.
  • [34] Ding, Cheng; Bai, Xueqiong; Lv, Yong; Liu, Yang; Niu, Chunhui; Liu, Xin. Camouflage Object Detection Based on Feature Fusion and Edge Detection. ACTA PHOTONICA SINICA, 2024, 53 (08).
  • [35] Wang, Anzhi; Ren, Chunhong; Zhao, Shuang; Mu, Shibiao. Attention guided multi-level feature aggregation network for camouflaged object detection. IMAGE AND VISION COMPUTING, 2024, 144.
  • [36] Gao, Gan; Wang, Yuanyuan; Zhou, Feng; Chen, Shuaiting; Ge, Xiaole; Wang, Rugang. BSEFNet: bidirectional self-attention edge fusion network salient object detection based on deep fusion of edge features. PEERJ COMPUTER SCIENCE, 2024, 10.
  • [37] Qiu, Tianchi; Li, Xiuhong; Liu, Kangwei; Li, Songlin; Chen, Fan; Zhou, Chenyu. Boundary Guided Feature Fusion Network for Camouflaged Object Detection. PATTERN RECOGNITION AND COMPUTER VISION, PRCV 2023, PT IX, 2024, 14433: 433-444.
  • [38] Hwang, Kyo-Seong; Ma, Jungmok. Military camouflaged object detection with deep learning using dataset development and combination. JOURNAL OF DEFENSE MODELING AND SIMULATION-APPLICATIONS METHODOLOGY TECHNOLOGY-JDMS, 2024.
  • [39] Li, Fang; Li, Xueyuan; Liu, Qi; Li, Zirui. Occlusion Handling and Multi-Scale Pedestrian Detection Based on Deep Learning: A Review. IEEE ACCESS, 2022, 10: 19937-19957.
  • [40] Ren, Junchao; Zhang, Qiao; Kang, Bingbing; Zhong, Yuxi; He, Min; Ge, Yanliang; Bi, Hongbo. Semantic-spatial guided context propagation network for camouflaged object detection. APPLIED INTELLIGENCE, 2025, 55 (05).