Learning Coupled Convolutional Networks Fusion for Video Saliency Prediction

Cited by: 18
Authors
Wu, Zhe [1 ,2 ]
Su, Li [1 ,2 ]
Huang, Qingming [1 ,2 ,3 ]
Affiliations
[1] UCAS, Sch Comp & Control Engn, Beijing 101408, Peoples R China
[2] UCAS, Key Lab Big Data Min & Knowledge Management, Beijing 101408, Peoples R China
[3] Chinese Acad Sci, Key Lab Intelligent Informat Proc, Inst Comp Technol, Beijing 100190, Peoples R China
Keywords
Video saliency; feature integration; fully convolutional network; SPATIOTEMPORAL SALIENCY; VISUAL SALIENCY; DETECTION MODEL; GAZE;
DOI
10.1109/TCSVT.2018.2870954
CLC Classification
TM [Electrical Engineering]; TN [Electronics and Communication Technology];
Subject Classification
0808; 0809;
Abstract
Visual saliency provides important information for understanding scenes in many computer vision tasks. Existing video saliency algorithms mainly focus on predicting spatial and temporal saliency maps. However, these maps are simply fused without considering the complex dynamic scenes in videos. To overcome this drawback, we propose a deep convolutional fusion framework for video saliency prediction. The proposed model, which is based on coupled fully convolutional networks (FCNs), effectively encodes the spatiotemporal information by integrating spatial and temporal features. We demonstrate that this information is helpful for accurately fusing the spatial and temporal saliency maps according to changes in video scenes. In particular, we progressively design three different deep fusion architectures to investigate how to better utilize the spatiotemporal information. Moreover, we propose a reasonable sampling strategy for selecting suitable training sets for the coupled FCNs. Through extensive experiments, we demonstrate that our model outperforms state-of-the-art algorithms on four public video saliency data sets.
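The record does not include the paper's actual architecture, but the core idea the abstract describes, adaptively blending a spatial and a temporal saliency map rather than combining them with a fixed rule, can be illustrated with a minimal numpy sketch. Here the per-pixel fusion weight is simply an input array standing in for the output of the paper's learned fusion network; the function name `fuse_saliency` and the toy maps are assumptions for illustration only.

```python
import numpy as np

def fuse_saliency(spatial, temporal, weight):
    """Adaptively blend spatial and temporal saliency maps.

    `weight` holds per-pixel values in [0, 1]. In the paper this role is
    played by a learned fusion network conditioned on spatiotemporal
    features; here it is just a given array.
    """
    assert spatial.shape == temporal.shape == weight.shape
    fused = weight * spatial + (1.0 - weight) * temporal
    # Rescale to [0, 1], the conventional range for saliency maps.
    lo, hi = fused.min(), fused.max()
    return (fused - lo) / (hi - lo) if hi > lo else np.zeros_like(fused)

# Toy 2x2 maps: a uniform 0.5 weight reduces to simple averaging; a
# learned weight would instead favor the spatial map in static regions
# and the temporal map where motion dominates.
spatial = np.array([[0.9, 0.1], [0.2, 0.8]])
temporal = np.array([[0.1, 0.9], [0.7, 0.3]])
weight = np.full((2, 2), 0.5)
fused = fuse_saliency(spatial, temporal, weight)
```

The interesting design question the paper addresses is how `weight` is produced: a fixed scalar ignores scene dynamics, whereas the coupled FCNs let the blend vary per location and per frame.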
Pages: 2960-2971
Page count: 12
Related Papers
51 items in total
  • [1] SLIC Superpixels Compared to State-of-the-Art Superpixel Methods
    Achanta, Radhakrishna
    Shaji, Appu
    Smith, Kevin
    Lucchi, Aurelien
    Fua, Pascal
    Suesstrunk, Sabine
    [J]. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2012, 34 (11) : 2274 - 2281
  • [2] [Anonymous], 2014, ADV NEURAL INFORM PR
  • [3] [Anonymous], 2015, GOOD PRACTICES VERY
  • [4] [Anonymous], 2009, P 17 ACM INT C MULT, DOI DOI 10.1145/1631272.1631370
  • [5] [Anonymous], 2016, RECURRENT MIXTURE DE
  • [6] [Anonymous], 2016, DEEPLAB SEMANTIC IMA
  • [7] [Anonymous], PREDICTING HUMAN EYE
  • [8] Bian P, 2009, LECT NOTES COMPUT SC, V5506, P251, DOI 10.1007/978-3-642-02490-0_31
  • [9] Quantitative Analysis of Human-Model Agreement in Visual Saliency Modeling: A Comparative Study
    Borji, Ali
    Sihite, Dicky N.
    Itti, Laurent
    [J]. IEEE TRANSACTIONS ON IMAGE PROCESSING, 2013, 22 (01) : 55 - 69
  • [10] State-of-the-Art in Visual Attention Modeling
    Borji, Ali
    Itti, Laurent
    [J]. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2013, 35 (01) : 185 - 207