Deeply Supervised Salient Object Detection with Short Connections

Cited by: 582
Authors
Hou, Qibin [1 ]
Cheng, Ming-Ming [1 ]
Hu, Xiaowei [1 ]
Borji, Ali [2 ]
Tu, Zhuowen [3 ]
Torr, Philip H. S. [4 ]
Affiliations
[1] Nankai Univ, CCCE, Tianjin 300071, Peoples R China
[2] Univ Cent Florida, Ctr Res Comp Vis, Orlando, FL 32816 USA
[3] Univ Calif San Diego, La Jolla, CA 92093 USA
[4] Univ Oxford, Oxford OX1 2JD, England
Funding
Engineering and Physical Sciences Research Council (EPSRC), UK;
Keywords
Salient object detection; short connection; deeply supervised network; semantic segmentation; edge detection; IMAGE; ATTENTION; GRAPHICS; MODEL;
DOI
10.1109/TPAMI.2018.2815688
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Recent progress on salient object detection is substantial, benefiting mostly from the explosive development of Convolutional Neural Networks (CNNs). Semantic segmentation and salient object detection algorithms developed lately have been mostly based on Fully Convolutional Neural Networks (FCNs). There is still large room for improvement over the generic FCN models that do not explicitly deal with the scale-space problem. The Holistically-Nested Edge Detector (HED) provides a skip-layer structure with deep supervision for edge and boundary detection, but the performance gain of HED on saliency detection is not obvious. In this paper, we propose a new salient object detection method by introducing short connections to the skip-layer structures within the HED architecture. Our framework takes full advantage of multi-level and multi-scale features extracted from FCNs, providing more advanced representations at each layer, a property that is critically needed to perform segment detection. Our method produces state-of-the-art results on 5 widely tested salient object detection benchmarks, with advantages in terms of efficiency (0.08 seconds per image), effectiveness, and simplicity over the existing algorithms. Beyond that, we conduct an exhaustive analysis of the role of training data on performance. We provide a training set for future research and fair comparisons.
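The short-connection idea described in the abstract can be sketched as follows. This is a minimal, hypothetical illustration in plain Python, not the authors' implementation: in the real HED-style network each side output is a convolutional feature map over VGG features and the connection weights are learned, whereas here each side output is reduced to a single scalar activation and the weights are fixed by hand. The sketch only shows the data flow: each shallower side output receives the already-fused activations of all deeper side outputs before producing its own prediction, so shallow layers gain access to high-level semantic information.

```python
def fuse_with_short_connections(side_outputs, weights):
    """Fuse HED-style side outputs via short connections (toy scalar version).

    side_outputs: list of activations ordered shallow -> deep.
    weights: dict mapping (shallow_idx, deep_idx) -> connection weight
             (hypothetical fixed weights; learned in the actual model).
    Returns the fused activation for each side output.
    """
    fused = [None] * len(side_outputs)
    # Process from the deepest side output to the shallowest, so that
    # every deeper output is already fused when a shallower one uses it.
    for i in reversed(range(len(side_outputs))):
        acc = side_outputs[i]
        for j in range(i + 1, len(side_outputs)):
            # Short connection: add the fused deeper output, scaled.
            acc += weights[(i, j)] * fused[j]
        fused[i] = acc
    return fused


# Toy usage: three side outputs, shallow -> deep, fully short-connected.
side = [0.2, 0.5, 0.9]
weights = {(0, 1): 0.5, (0, 2): 0.5, (1, 2): 0.5}
print(fuse_with_short_connections(side, weights))
```

The deepest output passes through unchanged, while the shallowest accumulates contributions from every deeper level, mirroring how the paper routes coarse, semantically rich predictions back into fine-resolution side outputs.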
Pages: 815-828
Page count: 14
Related References (65 in total)