Salient Object Detection via Multiple Instance Joint Re-Learning

Cited by: 49
Authors
Ma, Guangxiao [1 ]
Chen, Chenglizhao [1 ,2 ]
Li, Shuai [3 ]
Peng, Chong [2 ]
Hao, Aimin [1 ,3 ]
Qin, Hong [4 ]
Affiliations
[1] Beihang Univ, Qingdao Res Inst, Qingdao 266100, Peoples R China
[2] Qingdao Univ, Qingdao 266071, Peoples R China
[3] Beihang Univ, Sch Comp Sci & Engn, Beijing 100191, Peoples R China
[4] SUNY Stony Brook, Stony Brook, NY 11794 USA
Funding
National Natural Science Foundation of China; US National Science Foundation;
Keywords
Feature extraction; Object detection; Saliency detection; Visualization; Deep learning; Correlation; Topology; Salient Object Detection; Inter-image Correspondence; Multiple Instance Learning; Joint Re-Learning; VIDEO; OPTIMIZATION; DRIVEN; FUSION; MODEL;
DOI
10.1109/TMM.2019.2929943
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812;
Abstract
In recent years, deep neural networks have been widely applied to visual saliency detection tasks, yielding remarkable improvements in detection performance. For salient object detection in a single image, the automatically computed convolutional features usually exhibit high discriminative power for separating salient foregrounds from their non-salient surroundings. Yet stubborn feature conflicts persist, which give rise to learning ambiguity and lead to numerous detection failures. To solve this problem, we propose to jointly re-learn the common consistency of inter-image saliency and then use it to boost detection performance. The core rationale is to exploit the easy-to-detect cases to re-boost the much harder ones. Compared with conventional methods, which restrict their problem domain to the scope of a single image, our method exploits information beyond that scope to facilitate salient object detection in the current image. To validate the new approach, we have conducted comprehensive quantitative comparisons against 13 state-of-the-art methods over 5 publicly available benchmarks; the results consistently indicate the advantage of our approach in terms of accuracy, reliability, and versatility.
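The central idea of the abstract, that saliency maps from confidently detected ("easy") images can serve as pseudo labels to re-learn a detector that is then re-applied to the ambiguous ("hard") images, can be pictured with the minimal Python sketch below. It illustrates the general strategy only and is not the authors' network or training procedure; the per-pixel logistic detector, the confidence score, and the EASY_THRESHOLD value are assumptions introduced here for illustration.

import numpy as np

# Hypothetical sketch: maps from "easy" images act as pseudo labels to
# re-learn a tiny detector, which is then re-applied to "hard" images.
# The detector, features, and threshold below are illustrative assumptions.

EASY_THRESHOLD = 0.3  # assumed confidence margin separating easy from hard cases


def confidence(saliency: np.ndarray) -> float:
    """Easy maps are near-binary; hard maps hover around 0.5."""
    return float(np.mean(np.abs(saliency - 0.5))) * 2.0


def sigmoid(z: np.ndarray) -> np.ndarray:
    return 1.0 / (1.0 + np.exp(-z))


def relearn(features, pseudo_labels, lr=0.5, steps=200) -> np.ndarray:
    """Fit a per-pixel logistic detector to the easy images' pseudo labels."""
    X = np.concatenate([f.reshape(-1, f.shape[-1]) for f in features])
    y = np.concatenate([(p.reshape(-1) > 0.5).astype(float) for p in pseudo_labels])
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        grad = X.T @ (sigmoid(X @ w) - y) / len(y)
        w -= lr * grad
    return w


def predict(feature_map: np.ndarray, w: np.ndarray) -> np.ndarray:
    h, wd, c = feature_map.shape
    return sigmoid(feature_map.reshape(-1, c) @ w).reshape(h, wd)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy "images": per-pixel 3-channel features plus an initial saliency map each.
    feats = [rng.random((16, 16, 3)) for _ in range(3)]
    init_maps = [
        (feats[0][..., 0] > 0.5).astype(float),  # easy: crisp, near-binary
        (feats[1][..., 0] > 0.5).astype(float),  # easy: crisp, near-binary
        np.full((16, 16), 0.5) + 0.05 * rng.standard_normal((16, 16)),  # hard: ambiguous
    ]

    easy_idx = [i for i, m in enumerate(init_maps) if confidence(m) >= EASY_THRESHOLD]
    hard_idx = [i for i, m in enumerate(init_maps) if confidence(m) < EASY_THRESHOLD]

    w = relearn([feats[i] for i in easy_idx], [init_maps[i] for i in easy_idx])
    refined = {i: predict(feats[i], w) for i in hard_idx}
    print("re-learned weights:", np.round(w, 3), "| refined hard maps:", len(refined))

In the paper itself, the re-learning operates on deep convolutional features under a multiple instance learning formulation rather than on hand-crafted per-pixel features as in this toy example.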
Pages: 324-336
Number of pages: 13