A Benchmark Dataset and Saliency-Guided Stacked Autoencoders for Video-Based Salient Object Detection

Cited by: 112
Authors
Li, Jia [1,2]
Xia, Changqun [1]
Chen, Xiaowu [1]
Affiliations
[1] Beihang Univ, Sch Comp Sci & Engn, State Key Lab Virtual Real Technol & Syst, Beijing 100191, Peoples R China
[2] Beihang Univ, Int Res Inst Multidisciplinary Sci, Beijing 100191, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Salient object detection; video dataset; stacked autoencoders; model benchmarking; co-segmentation; detection model; optimization
DOI
10.1109/TIP.2017.2762594
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Image-based salient object detection (SOD) has been extensively studied over the past decades. However, video-based SOD is much less explored due to the lack of large-scale video datasets in which salient objects are unambiguously defined and annotated. Toward this end, this paper proposes a video-based SOD dataset that consists of 200 videos. In constructing the dataset, we manually annotate all objects and regions over 7,650 uniformly sampled keyframes and collect the eye-tracking data of 23 subjects who free-view all videos. From the user data, we find that salient objects in a video can be defined as the objects that consistently pop out throughout the video, and that objects with such attributes can be unambiguously annotated by combining manually annotated object/region masks with the eye-tracking data of multiple subjects. To the best of our knowledge, this is currently the largest dataset for video-based SOD. Based on this dataset, this paper proposes an unsupervised baseline approach for video-based SOD that uses saliency-guided stacked autoencoders. In the proposed approach, multiple spatiotemporal saliency cues are first extracted at the pixel, superpixel, and object levels. With these cues, stacked autoencoders are constructed in an unsupervised manner to automatically infer a saliency score for each pixel by progressively encoding the high-dimensional saliency cues gathered from the pixel and its spatiotemporal neighbors. In experiments, the proposed unsupervised approach is compared with 31 state-of-the-art models on the proposed dataset and outperforms 30 of them, including 19 classic image-based models (unsupervised or non-deep-learning), six image-based deep learning models, and five unsupervised video-based models. Moreover, the benchmarking results show that the proposed dataset is very challenging and has the potential to boost the development of video-based SOD.
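As a rough illustration of the stacked-autoencoder inference described in the abstract, the Python sketch below trains tied-weight autoencoders layer by layer on per-pixel cue vectors and reads the final one-unit code as the saliency score. The cue dimensionality, layer sizes, and training schedule are illustrative assumptions, not the authors' implementation; the actual cue extraction at the pixel, superpixel, and object levels is omitted here.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class Autoencoder:
    """A single tied-weight autoencoder layer trained to reconstruct its input."""

    def __init__(self, n_in, n_hidden):
        self.W = rng.normal(0.0, 0.1, (n_in, n_hidden))  # shared encode/decode weights
        self.b = np.zeros(n_hidden)                      # encoder bias
        self.c = np.zeros(n_in)                          # decoder bias

    def encode(self, X):
        return sigmoid(X @ self.W + self.b)

    def decode(self, H):
        return sigmoid(H @ self.W.T + self.c)

    def train(self, X, epochs=200, lr=0.5):
        for _ in range(epochs):
            H = self.encode(X)
            R = self.decode(H)
            # Gradients of the squared reconstruction error through the sigmoids.
            dR = (R - X) * R * (1.0 - R)
            dH = (dR @ self.W) * H * (1.0 - H)
            gW = X.T @ dH + dR.T @ H  # W appears in both encoder and decoder
            self.W -= lr * gW / len(X)
            self.b -= lr * dH.mean(axis=0)
            self.c -= lr * dR.mean(axis=0)

def stacked_saliency(cues, layer_sizes=(32, 8, 1)):
    """Greedy layer-wise training; the final one-unit code is the saliency score."""
    X = cues
    for n_hidden in layer_sizes:
        ae = Autoencoder(X.shape[1], n_hidden)
        ae.train(X)
        X = ae.encode(X)  # the learned code feeds the next autoencoder
    return X.ravel()      # one score in [0, 1] per pixel

# Hypothetical usage: each row holds saliency cues gathered from one pixel and
# its spatiotemporal neighbors; values in [0, 1], dimension chosen arbitrarily.
cues = rng.random((1000, 64))
scores = stacked_saliency(cues)
print(scores.shape, float(scores.min()), float(scores.max()))

The greedy layer-wise scheme mirrors the "progressive encoding" the abstract describes: each autoencoder compresses the previous layer's code, and the last layer's single sigmoid unit yields a per-pixel score in [0, 1].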
Pages: 349-364
Number of pages: 16