ELD-Net: An Efficient Deep Learning Architecture for Accurate Saliency Detection

Cited by: 43
Authors
Lee, Gayoung [1 ]
Tai, Yu-Wing [2 ]
Kim, Junmo [1 ]
Affiliations
[1] Korea Adv Inst Sci & Technol, Sch Elect Engn, Daejeon 305701, South Korea
[2] Youtu Lab Tencent SNG, Shenzhen, Peoples R China
Keywords
Salient region detection; feature extraction; superpixel; deep learning; convolutional neural network (CNN); object detection; networks
DOI
10.1109/TPAMI.2017.2737631
CLC number
TP18 [Artificial intelligence theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Recent advances in saliency detection have used deep learning to obtain high-level features for detecting salient regions in scenes. These advances have yielded results superior to earlier work that relied on hand-crafted low-level features. In this paper, we propose ELD-Net, a unified deep learning framework for accurate and efficient saliency detection. We show that hand-crafted features can provide complementary information that enhances saliency detection based only on high-level features. Our method therefore uses both low-level and high-level features. High-level features are extracted with GoogLeNet, while the low-level features measure the relative importance of a local region through its differences from the other regions in an image. The two feature maps are independently encoded by convolutional and ReLU layers. The encoded low-level and high-level features are then combined by concatenation and convolution. Finally, a linear fully connected layer evaluates the saliency of a queried region, and a full-resolution saliency map is obtained by querying the saliency of every local region of the image. Since the high-level features are encoded at low resolution and the encoded high-level features can be reused for every query region, ELD-Net is very fast. Our experiments show that our method outperforms state-of-the-art deep learning-based saliency detection methods.
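To make the described fusion concrete, below is a minimal PyTorch sketch of the two-stream head outlined in the abstract: hand-crafted low-level distance maps and high-level CNN features are each encoded by a convolution plus ReLU, combined by concatenation and convolution, and scored by a linear fully connected layer per query region. The class name ELDStyleFusionHead, the channel counts (29 low-level maps, 1024 high-level channels), and the 23x23 grid size are illustrative assumptions, not the authors' exact configuration.

import torch
import torch.nn as nn

class ELDStyleFusionHead(nn.Module):
    """Fuses encoded low-level distance maps with encoded high-level
    CNN features and scores one query region (sketch, not the authors' code)."""

    def __init__(self, low_ch=29, high_ch=1024, enc_ch=64, grid=23):
        super().__init__()
        # Encode hand-crafted low-level distance maps (conv + ReLU).
        self.low_enc = nn.Sequential(
            nn.Conv2d(low_ch, enc_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True))
        # Encode high-level features (conv + ReLU); per the abstract these are
        # computed once per image at low resolution and reused for every query.
        self.high_enc = nn.Sequential(
            nn.Conv2d(high_ch, enc_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True))
        # Combine by concatenation followed by convolution.
        self.fuse = nn.Sequential(
            nn.Conv2d(2 * enc_ch, enc_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True))
        # Linear fully connected layer that evaluates the saliency of a query region.
        self.fc = nn.Linear(enc_ch * grid * grid, 1)

    def forward(self, low_maps, high_feats):
        # low_maps:   (B, low_ch, grid, grid)  per-query hand-crafted distance maps
        # high_feats: (B, high_ch, grid, grid) shared high-level feature grid
        x = torch.cat([self.low_enc(low_maps), self.high_enc(high_feats)], dim=1)
        x = self.fuse(x)
        return self.fc(x.flatten(1))  # raw saliency score for the query region

# Example: score a single query region on a 23x23 feature grid.
head = ELDStyleFusionHead()
score = head(torch.randn(1, 29, 23, 23), torch.randn(1, 1024, 23, 23))

Because the high-level encoding depends only on the image, it can be cached and shared across all query regions, which is what makes the per-region querying fast in practice.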
Pages: 1599-1610
Page count: 12