Learning Coupled Convolutional Networks Fusion for Video Saliency Prediction

Cited by: 18
Authors
Wu, Zhe [1 ,2 ]
Su, Li [1 ,2 ]
Huang, Qingming [1 ,2 ,3 ]
Affiliations
[1] UCAS, Sch Comp & Control Engn, Beijing 101408, Peoples R China
[2] UCAS, Key Lab Big Data Min & Knowledge Management, Beijing 101408, Peoples R China
[3] Chinese Acad Sci, Key Lab Intelligent Informat Proc, Inst Comp Technol, Beijing 100190, Peoples R China
Keywords
Video saliency; feature integration; fully convolutional network; SPATIOTEMPORAL SALIENCY; VISUAL SALIENCY; DETECTION MODEL; GAZE;
DOI
10.1109/TCSVT.2018.2870954
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronic Technology, Communication Technology];
Discipline Classification Codes
0808 ; 0809 ;
Abstract
Visual saliency provides important information for understanding scenes in many computer vision tasks. Existing video saliency algorithms mainly focus on predicting spatial and temporal saliency maps separately. However, these maps are then fused by simple rules that do not account for the complex dynamic scenes in videos. To overcome this drawback, we propose a deep convolutional fusion framework for video saliency prediction. The proposed model, built on coupled fully convolutional networks (FCNs), effectively encodes spatiotemporal information by integrating spatial and temporal features. We demonstrate that this information helps to accurately fuse the spatial and temporal saliency maps according to changes in video scenes. In particular, we progressively design three deep fusion architectures to investigate how to better exploit the spatiotemporal information. Moreover, we propose a principled sampling strategy for selecting suitable training sets for the coupled FCNs. Extensive experiments demonstrate that our model outperforms state-of-the-art algorithms on four public video saliency data sets.
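The abstract describes fusing spatial and temporal saliency maps with a learned convolutional module conditioned on spatiotemporal features. As a rough illustration of that idea only (not the authors' published code; the module name, channel sizes, and softmax-weighted fusion rule are all assumptions for the sketch), a minimal PyTorch version might predict per-pixel fusion weights from the two maps plus shared features:

```python
# Hypothetical sketch of learned saliency-map fusion, NOT the paper's
# actual architecture. A small fully convolutional head takes the
# spatial map, the temporal map, and shared spatiotemporal features,
# and predicts per-pixel weights for combining the two maps.
import torch
import torch.nn as nn

class FusionHead(nn.Module):
    def __init__(self, feat_channels: int = 16):
        super().__init__()
        # Input channels: spatial map (1) + temporal map (1) + features.
        self.net = nn.Sequential(
            nn.Conv2d(2 + feat_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 2, kernel_size=1),  # logits for the two maps
        )

    def forward(self, s_spatial, s_temporal, feats):
        x = torch.cat([s_spatial, s_temporal, feats], dim=1)
        # Softmax over the two logits gives per-pixel weights summing to 1,
        # so the fused map is a convex combination of the input maps.
        w = torch.softmax(self.net(x), dim=1)
        return w[:, 0:1] * s_spatial + w[:, 1:2] * s_temporal

if __name__ == "__main__":
    head = FusionHead()
    s_sp = torch.rand(1, 1, 64, 64)    # spatial saliency map
    s_tm = torch.rand(1, 1, 64, 64)    # temporal saliency map
    feats = torch.rand(1, 16, 64, 64)  # spatiotemporal features
    print(head(s_sp, s_tm, feats).shape)  # torch.Size([1, 1, 64, 64])
```

Because the weights adapt per pixel to the feature input, such a head can favor the temporal map in regions of strong motion and the spatial map in static regions, which matches the paper's motivation for scene-adaptive fusion.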
Pages: 2960-2971
Number of pages: 12
References
51 references in total
[21]   Video Saliency Map Detection by Dominant Camera Motion Removal [J].
Huang, Chun-Rong ;
Chang, Yun-Jung ;
Yang, Zhi-Xiang ;
Lin, Yen-Yu .
IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2014, 24 (08) :1336-1349
[22]   SALICON: Reducing the Semantic Gap in Saliency Prediction by Adapting Deep Neural Networks [J].
Huang, Xun ;
Shen, Chengyao ;
Boix, Xavier ;
Zhao, Qi .
2015 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV), 2015, :262-270
[23]   A model of saliency-based visual attention for rapid scene analysis [J].
Itti, L ;
Koch, C ;
Niebur, E .
IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 1998, 20 (11) :1254-1259
[24]   Realistic avatar eye and head animation using a neurobiological model of visual attention [J].
Itti, L ;
Dhavale, N ;
Pighin, F .
APPLICATIONS AND SCIENCE OF NEURAL NETWORKS, FUZZY SYSTEMS, AND EVOLUTIONARY COMPUTATION VI, 2003, 5200 :64-78
[25]  
Itti L., 2006, Vision Research, P547, DOI 10.1016/j.visres.2008.09.007
[26]   End-to-End Saliency Mapping via Probability Distribution Prediction [J].
Jetley, Saumya ;
Murray, Naila ;
Vig, Eleonora .
2016 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2016, :5753-5761
[27]   Caffe: Convolutional Architecture for Fast Feature Embedding [J].
Jia, Yangqing ;
Shelhamer, Evan ;
Donahue, Jeff ;
Karayev, Sergey ;
Long, Jonathan ;
Girshick, Ross ;
Guadarrama, Sergio ;
Darrell, Trevor .
PROCEEDINGS OF THE 2014 ACM CONFERENCE ON MULTIMEDIA (MM'14), 2014, :675-678
[28]  
Jiang M, 2015, PROC CVPR IEEE, P1072, DOI 10.1109/CVPR.2015.7298710
[29]  
Jiang Y, 2009, 2009 3RD INTERNATIONAL CONFERENCE ON BIOINFORMATICS AND BIOMEDICAL ENGINEERING, VOLS 1-11, P2944
[30]  
Khatoonabadi SH, 2015, PROC CVPR IEEE, P5501, DOI 10.1109/CVPR.2015.7299189