DMRA: Depth-Induced Multi-Scale Recurrent Attention Network for RGB-D Saliency Detection

Cited by: 53
Authors
Ji, Wei [1 ,2 ]
Yan, Ge [2 ]
Li, Jingjing [1 ,2 ]
Piao, Yongri [3 ]
Yao, Shunyu [2 ]
Zhang, Miao [4 ]
Cheng, Li [1 ]
Lu, Huchuan [3 ]
Affiliations
[1] Univ Alberta, Dept Elect & Comp Engn, Edmonton, AB T5V 1A4, Canada
[2] Dalian Univ Technol, Sch Software Technol, Dalian 116024, Peoples R China
[3] Dalian Univ Technol, Sch Informat & Commun Engn, Fac Elect Informat & Elect Engn, Dalian 116024, Peoples R China
[4] Dalian Univ Technol, DUT RU Int Sch Informat & Software Engn, Key Lab Ubiquitous Network & Serv Software Liaoni, Dalian 116024, Peoples R China
Funding
National Natural Science Foundation of China; Natural Sciences and Engineering Research Council of Canada;
Keywords
Feature extraction; Saliency detection; Semantics; Random access memory; Cameras; Analytical models; Visualization; RGB-D saliency detection; salient object detection; convolutional neural networks; cross-modal fusion; OBJECT DETECTION; FUSION; SEGMENTATION;
DOI
10.1109/TIP.2022.3154931
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In this work, we propose a novel depth-induced multi-scale recurrent attention network for RGB-D saliency detection, named DMRA. It achieves strong performance, especially in complex scenarios. Our network makes four main contributions that are experimentally demonstrated to have significant practical merit. First, we design an effective depth refinement block using residual connections to fully extract and fuse cross-modal complementary cues from the RGB and depth streams. Second, depth cues with abundant spatial information are innovatively combined with multi-scale contextual features to accurately locate salient objects. Third, a novel recurrent attention module, inspired by the Internal Generative Mechanism of the human brain, is designed to generate more accurate saliency results by comprehensively learning the internal semantic relations of the fused features and progressively optimizing local details with memory-oriented scene understanding. Finally, a cascaded hierarchical feature fusion strategy is designed to promote efficient information interaction among multi-level contextual features and further improve the contextual representability of the model. In addition, we introduce a new real-life RGB-D saliency dataset containing a variety of complex scenarios, which has been widely used as a benchmark in recent RGB-D saliency detection research. Extensive experiments demonstrate that our method accurately identifies salient objects and achieves appealing performance against 18 state-of-the-art RGB-D saliency models on nine benchmark datasets.
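The abstract's first contribution, a depth refinement block that fuses RGB and depth features through residual connections, can be illustrated with a minimal PyTorch sketch. This is not the paper's exact DMRA block; the layer sizes, fusion operator (element-wise addition), and class name `DepthRefinementBlock` are illustrative assumptions.

```python
import torch
import torch.nn as nn


class DepthRefinementBlock(nn.Module):
    """Hypothetical sketch of residual cross-modal fusion (not the exact DMRA block).

    RGB and depth feature maps of identical shape are fused, refined by a
    small convolutional branch, and added back to the RGB stream so the
    depth cues act as a residual correction.
    """

    def __init__(self, channels: int):
        super().__init__()
        # Refine the element-wise fused features with two 3x3 convolutions.
        self.refine = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )

    def forward(self, rgb_feat: torch.Tensor, depth_feat: torch.Tensor) -> torch.Tensor:
        fused = rgb_feat + depth_feat   # cross-modal fusion of the two streams
        refined = self.refine(fused)    # learn complementary cues from the fusion
        return rgb_feat + refined       # residual connection back to the RGB stream


# Usage: fuse 64-channel RGB and depth feature maps of spatial size 32x32.
block = DepthRefinementBlock(64)
rgb = torch.randn(1, 64, 32, 32)
depth = torch.randn(1, 64, 32, 32)
out = block(rgb, depth)
print(out.shape)  # torch.Size([1, 64, 32, 32])
```

Because the refinement is residual, the block preserves the feature map's shape, so it can be dropped between any pair of same-resolution RGB and depth encoder stages.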
Pages: 2321-2336
Page count: 16