Aggregating transformers and CNNs for salient object detection in optical remote sensing images

Cited: 21
Authors
Bao, Liuxin [1 ]
Zhou, Xiaofei [1 ]
Zheng, Bolun [1 ]
Yin, Haibing [2 ,3 ]
Zhu, Zunjie [2 ,3 ]
Zhang, Jiyong [1 ]
Yan, Chenggang [1 ,2 ]
Affiliations
[1] Hangzhou Dianzi Univ, Sch Automat, Hangzhou 310018, Peoples R China
[2] Hangzhou Dianzi Univ, Lishui Inst, Lishui 323000, Peoples R China
[3] Hangzhou Dianzi Univ, Sch Commun Engn, Hangzhou 310018, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Transformer; CNNs; Feature fusion; Optical RSIs; Salient object detection; ENCODER-DECODER NETWORK; ATTENTION; FEATURES;
DOI
10.1016/j.neucom.2023.126560
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Salient object detection (SOD) in optical remote sensing images (RSIs) plays a significant role in many areas such as agriculture, environmental protection, and the military. However, owing to the differences in imaging mode and image complexity between RSIs and natural scene images (NSIs), it is difficult to achieve remarkable results by directly extending saliency methods designed for NSIs to RSIs. Besides, we note that the convolutional neural network (CNN) based U-Net cannot effectively acquire global long-range dependencies, while the Transformer does not adequately characterize the spatial local details of each patch. Therefore, to conduct salient object detection in RSIs, we propose a novel two-branch network that Aggregates Transformers and CNNs, namely ATC-Net, in which local spatial details and global semantic information are fused into a final high-quality saliency map. Specifically, our saliency model adopts an encoder-decoder architecture consisting of two parallel encoder branches and a decoder. Firstly, the two parallel encoder branches extract global and local features using a Transformer and a CNN, respectively. Then, the decoder employs a series of feature-enhanced fusion (FF) modules to aggregate multi-level global and local features through interactive guidance and to enhance the fused feature via an attention mechanism. Finally, the decoder deploys a read-out (RO) module to fuse the aggregated feature of the FF module with the low-level CNN feature, steering the feature to focus more on spatial local details. Extensive experiments are performed on two public optical RSI datasets, and the results show that our saliency model consistently outperforms 30 state-of-the-art methods.
Pages: 14
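
The following is a minimal PyTorch sketch of the two-branch idea described in the abstract: a Transformer encoder for global context, a CNN encoder for local detail, an FF-style fusion step, and a read-out to a saliency map. All module names, channel widths, the patch size, the SE-style channel attention, and the 1x1-conv read-out are illustrative assumptions for exposition, not the authors' exact ATC-Net design.

import torch
import torch.nn as nn

class PatchTransformerBranch(nn.Module):
    """Global branch: patch embedding + Transformer encoder (ViT-style assumption)."""
    def __init__(self, in_ch=3, dim=64, patch=8, depth=2):
        super().__init__()
        self.embed = nn.Conv2d(in_ch, dim, kernel_size=patch, stride=patch)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)

    def forward(self, x):
        f = self.embed(x)                        # B x dim x H/8 x W/8
        b, c, h, w = f.shape
        tokens = f.flatten(2).transpose(1, 2)    # B x (h*w) x dim
        tokens = self.encoder(tokens)            # global long-range dependencies
        return tokens.transpose(1, 2).reshape(b, c, h, w)

class CNNBranch(nn.Module):
    """Local branch: small CNN stem that keeps spatial detail (assumed two stages)."""
    def __init__(self, in_ch=3, dim=64):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(in_ch, dim, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(dim, dim, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.stem(x)                      # B x dim x H/4 x W/4

class FFModule(nn.Module):
    """Fusion step standing in for the paper's FF module: concatenate global and
    local features, then re-weight channels (SE-style attention assumption)."""
    def __init__(self, dim=64):
        super().__init__()
        self.fuse = nn.Conv2d(2 * dim, dim, 3, padding=1)
        self.att = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Conv2d(dim, dim, 1), nn.Sigmoid())

    def forward(self, g, l):
        # Upsample the coarse global feature to the local feature's resolution.
        g = nn.functional.interpolate(g, size=l.shape[2:], mode='bilinear',
                                      align_corners=False)
        f = self.fuse(torch.cat([g, l], dim=1))
        return f * self.att(f)                   # attention-enhanced fused feature

class ATCNetSketch(nn.Module):
    """End-to-end sketch: parallel encoders -> fusion -> read-out to a saliency map."""
    def __init__(self, dim=64):
        super().__init__()
        self.global_branch = PatchTransformerBranch(dim=dim)
        self.local_branch = CNNBranch(dim=dim)
        self.ff = FFModule(dim)
        self.readout = nn.Conv2d(dim, 1, 1)      # RO stand-in: 1x1 conv to logits

    def forward(self, x):
        sal = self.readout(self.ff(self.global_branch(x), self.local_branch(x)))
        return nn.functional.interpolate(sal, size=x.shape[2:], mode='bilinear',
                                         align_corners=False)

if __name__ == "__main__":
    net = ATCNetSketch()
    print(net(torch.randn(1, 3, 256, 256)).shape)  # torch.Size([1, 1, 256, 256])

The real model fuses features at multiple encoder levels and applies the read-out to low-level CNN features; this single-level sketch only illustrates how the two branches' outputs can be aligned and combined.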