Boosting Salient Object Detection With Transformer-Based Asymmetric Bilateral U-Net

Cited by: 20
Authors
Qiu, Yu [1 ]
Liu, Yun [2 ]
Zhang, Le [3 ]
Lu, Haotian [1 ]
Xu, Jing [1 ]
Affiliations
[1] Nankai Univ, Coll Artificial Intelligence, Tianjin 300350, Peoples R China
[2] ASTAR, Inst Infocomm Res I2R, Singapore 138632, Singapore
[3] Univ Elect Sci & Technol China UESTC, Sch Informat & Commun Engn, Chengdu 611731, Peoples R China
Funding
China Postdoctoral Science Foundation
Keywords
Transformers; Decoding; Feature extraction; Refining; Object detection; Convolutional neural networks; Context modeling; Salient object detection; saliency detection; transformer; asymmetric bilateral U-Net; NETWORK; IMAGE;
DOI
10.1109/TCSVT.2023.3307693
CLC classification
TM [Electrical Engineering]; TN [Electronics and Communication Technology]
Discipline codes
0808; 0809
Abstract
Existing salient object detection (SOD) methods mainly rely on U-shaped convolutional neural networks (CNNs) with skip connections to combine global contexts and local spatial details, which are crucial for locating salient objects and refining object details, respectively. Despite great successes, the ability of CNNs to learn global contexts is limited. Recently, the vision transformer has achieved revolutionary progress in computer vision owing to its powerful modeling of global dependencies. However, directly applying the transformer to SOD is suboptimal because the transformer lacks the ability to learn local spatial representations. To this end, this paper explores the combination of transformers and CNNs to learn both global and local representations for SOD. We propose a transformer-based Asymmetric Bilateral U-Net (ABiU-Net). The asymmetric bilateral encoder has a transformer path and a lightweight CNN path, where the two paths communicate at each encoder stage to learn complementary global contexts and local spatial details, respectively. The asymmetric bilateral decoder also consists of two paths to process features from the transformer and CNN encoder paths, with communication at each decoder stage for decoding coarse salient object locations and fine-grained object details, respectively. Such communication between the two encoder/decoder paths enables ABiU-Net to learn complementary global and local representations, taking advantage of the natural merits of transformers and CNNs, respectively. Hence, ABiU-Net provides a new perspective for transformer-based SOD. Extensive experiments demonstrate that ABiU-Net performs favorably against previous state-of-the-art SOD methods.
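The cross-path communication described in the abstract can be illustrated with a highly schematic sketch. This is not the authors' implementation: `stage` is a hypothetical stand-in for a real transformer or CNN block, and the additive feature exchange is an assumed simplification of the paper's inter-path communication; the sketch only shows the structure of two encoder paths exchanging features at every stage.

```python
import numpy as np

rng = np.random.default_rng(0)

def stage(x, w):
    # Stand-in for one encoder stage (a real transformer or CNN block
    # would go here); linear projection followed by ReLU.
    return np.maximum(x @ w, 0.0)

def bilateral_encoder(x, num_stages=4, dim=8):
    """Hypothetical sketch of an asymmetric bilateral encoder: a
    transformer path and a lightweight CNN path that exchange
    features at every stage (additive fusion assumed here)."""
    t = c = x
    t_feats, c_feats = [], []
    for _ in range(num_stages):
        wt, wc = rng.standard_normal((2, dim, dim)) * 0.1
        t_new = stage(t, wt)  # global-context path (transformer stand-in)
        c_new = stage(c, wc)  # local-detail path (CNN stand-in)
        # Cross-path communication: each path receives the other's features.
        t, c = t_new + c_new, c_new + t_new
        t_feats.append(t)
        c_feats.append(c)
    return t_feats, c_feats

# Toy input: 5 tokens with 8 channels each.
x = rng.standard_normal((5, 8))
t_feats, c_feats = bilateral_encoder(x)
```

A decoder would mirror this structure, consuming the per-stage feature lists from both paths.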
Pages: 2332-2345
Page count: 14