Adaptive Spatial Tokenization Transformer for Salient Object Detection in Optical Remote Sensing Images

Cited by: 29
Authors
Gao, Lina [1 ]
Liu, Bing [1 ]
Fu, Ping [1 ]
Xu, Mingzhu [2 ]
Affiliations
[1] Harbin Inst Technol, Sch Elect & Informat Engn, Harbin 150001, Peoples R China
[2] Shandong Univ, Sch Software, Jinan 250101, Peoples R China
Source
IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING | 2023, Vol. 61
Funding
National Natural Science Foundation of China;
Keywords
Transformers; Adaptation models; Object detection; Tokenization; Optical imaging; Optical sensors; Feature extraction; Adaptive tokenization; optical remote sensing images (ORSIs); salient object detection (SOD); transformer; REGION DETECTION; TARGET DETECTION; NETWORK;
DOI
10.1109/TGRS.2023.3242987
Chinese Library Classification
P3 [Geophysics]; P59 [Geochemistry];
Discipline Codes
0708; 070902;
Abstract
Convolutional neural network (CNN)-based salient object detection (SOD) models have achieved promising performance on optical remote sensing images (ORSIs) in recent years. However, the local sliding-window operation of CNNs restricts many existing CNN-based ORSI SOD models, which still struggle to learn long-range relationships. To this end, a novel transformer framework is proposed for ORSI SOD, inspired by the powerful global dependency modeling of transformer networks. This is the first attempt to explore global and local details with a transformer architecture for SOD in ORSIs. Concretely, we design an adaptive spatial tokenization transformer encoder to extract global-local features; it adaptively sparsifies the tokens for each input image and achieves competitive performance on ORSI SOD tasks. Then, a dense token aggregation decoder (DTAD) is proposed to generate saliency results, comprising three cascaded decoders that integrate the global-local tokens and contextual dependencies. Extensive experiments indicate that the proposed model greatly surpasses 20 state-of-the-art (SOTA) SOD approaches on two standard ORSI SOD datasets under seven evaluation metrics. We also report comparison results on the latest challenging ORSI datasets to demonstrate the model's generalization capacity. In addition, we validate the contributions of the different modules through a series of ablation analyses, especially the proposed adaptive spatial tokenization module (ASTM), which can halve the computational budget.
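The token-sparsification idea behind the ASTM can be illustrated with a minimal sketch. This is not the paper's actual module: the function name, the per-token score input, and the keep ratio below are illustrative assumptions. The core mechanism is that importance scores select a subset of spatial tokens, so later attention layers operate on fewer tokens and the compute budget drops accordingly.

```python
def adaptive_token_sparsify(tokens, scores, keep_ratio=0.5):
    """Keep only the highest-scoring spatial tokens.

    tokens     -- list of token embeddings (each a list of floats)
    scores     -- per-token importance scores (e.g. from a small scoring head)
    keep_ratio -- fraction of tokens to retain (0.5 halves the token count)
    """
    n_keep = max(1, round(len(tokens) * keep_ratio))
    # Rank token indices by score, take the top n_keep, then re-sort the
    # surviving indices so the tokens stay in their original spatial order.
    keep_idx = sorted(sorted(range(len(tokens)),
                             key=lambda i: scores[i],
                             reverse=True)[:n_keep])
    return [tokens[i] for i in keep_idx], keep_idx
```

Because self-attention cost scales quadratically with the token count, retaining half the tokens cuts per-layer attention cost by roughly a factor of four under this sketch; an overall halving of the budget, as the abstract reports for the ASTM, is plausible when only part of the network runs on the sparsified tokens.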
Pages: 15