Weakly Supervised RGB-D Salient Object Detection With Prediction Consistency Training and Active Scribble Boosting

Cited by: 41
Authors
Xu, Yunqiu [1 ,2 ]
Yu, Xin [1 ]
Zhang, Jing [3 ]
Zhu, Linchao [1 ]
Wang, Dadong [4 ]
Affiliations
[1] Univ Technol Sydney, Australian Artificial Intelligence Inst AAII, ReLER Lab, Ultimo, NSW 2007, Australia
[2] Baidu Res, Beijing 100085, Peoples R China
[3] Australian Natl Univ, Sch Comp, Canberra, ACT 0200, Australia
[4] CSIRO, Data61, Quantitat Imaging Res Team, Marsfield, NSW 2122, Australia
Funding
Australian Research Council;
Keywords
Training; Image edge detection; Annotations; Feature extraction; Task analysis; Object detection; Data mining; RGB-D salient object detection; weakly supervised learning; NETWORK; FUSION; REPRESENTATION;
DOI
10.1109/TIP.2022.3151999
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Classification Code
081104; 0812; 0835; 1405;
Abstract
RGB-D salient object detection (SOD) has attracted increasing attention because it yields more robust results in complex scenes than RGB-only SOD. However, state-of-the-art RGB-D SOD approaches rely heavily on large amounts of pixel-wise annotated training data, and such dense labeling is labor-intensive and costly. To reduce the annotation burden, we investigate RGB-D SOD from a weakly supervised perspective; specifically, we use annotator-friendly scribble annotations as supervision signals for model training. Since scribble annotations are much sparser than ground-truth masks, critical object structure information might be neglected. To preserve such structure information, we explicitly exploit the complementary edge information from the two modalities (i.e., RGB and depth): we leverage dual-modal edge guidance and introduce a new network architecture with a dual-edge detection module and a modality-aware feature fusion module. To exploit the information contained in unlabeled pixels, we introduce a prediction consistency training scheme that compares the predictions of two networks optimized by different strategies. Moreover, we develop an active scribble boosting strategy that provides extra supervision signals at negligible annotation cost, leading to significant SOD performance improvements. Extensive experiments on seven benchmarks validate the superiority of the proposed method. Remarkably, with only scribble annotations, it achieves performance competitive with fully supervised state-of-the-art methods.
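The training scheme summarized in the abstract combines a partial loss on scribble-annotated pixels with a consistency term over all pixels between two networks. A minimal NumPy sketch of these two loss terms follows; the function names, the per-pixel mean-squared consistency measure, and the 0.5 weighting are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def scribble_loss(pred, scribble_mask, scribble_label):
    """Partial cross-entropy computed only on scribble-annotated pixels.
    pred: predicted saliency probabilities in (0, 1), shape (H, W)
    scribble_mask: 1 where a pixel carries a scribble annotation, else 0
    scribble_label: 1 for foreground scribbles, 0 for background scribbles
    """
    eps = 1e-7  # numerical safety for log
    ce = -(scribble_label * np.log(pred + eps)
           + (1 - scribble_label) * np.log(1 - pred + eps))
    # average over annotated pixels only; unlabeled pixels contribute nothing
    return (ce * scribble_mask).sum() / max(scribble_mask.sum(), 1)

def consistency_loss(pred_a, pred_b):
    """Mean squared difference between the two networks' predictions,
    computed over all pixels (labeled and unlabeled alike)."""
    return ((pred_a - pred_b) ** 2).mean()

# toy 2x2 "image": two networks produce slightly different saliency maps
pred_a = np.array([[0.9, 0.2], [0.8, 0.1]])
pred_b = np.array([[0.7, 0.3], [0.6, 0.2]])
mask = np.array([[1, 0], [0, 1]])    # only two pixels are scribbled
label = np.array([[1, 0], [0, 0]])   # one foreground, one background scribble

# total loss: supervised term plus a (hypothetically weighted) consistency term
total = scribble_loss(pred_a, mask, label) + 0.5 * consistency_loss(pred_a, pred_b)
```

The consistency term is what lets unlabeled pixels influence training: it penalizes disagreement between the two differently optimized networks everywhere, not just where scribbles exist.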
Pages: 2148-2161
Page count: 14