Going from Image to Video Saliency: Augmenting Image Salience with Dynamic Attentional Push

Cited by: 42
Authors
Gorji, Siavash [1 ]
Clark, James J. [1 ]
Affiliations
[1] McGill Univ, Dept Elect & Comp Engn, Ctr Intelligent Machines, Montreal, PQ, Canada
Source
2018 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR) | 2018
Keywords
VISUAL-ATTENTION; DETECTION MODEL; SCENE; GAZE; EYES;
DOI
10.1109/CVPR.2018.00783
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
We present a novel method that incorporates recent advances in static saliency modeling to predict saliency in videos. Our model augments static saliency models with the Attentional Push effect of the photographer and the scene actors in a shared attention setting. We demonstrate that not only is it imperative to use static Attentional Push cues, but a noticeable performance improvement is achievable by learning the time-varying nature of Attentional Push. We propose a multi-stream Convolutional Long Short-Term Memory (ConvLSTM) network that augments state-of-the-art static saliency models with dynamic Attentional Push. Our network contains four pathways: a saliency pathway and three Attentional Push pathways. The multi-pathway structure is followed by an augmenting convnet that learns to combine the complementary, time-varying outputs of the ConvLSTMs by minimizing the relative entropy between the augmented saliency and viewers' fixation patterns on videos. We evaluate our model by comparing several augmented static saliency models against the state of the art in spatiotemporal saliency on the three largest dynamic eye-tracking datasets: HOLLYWOOD2, UCF-Sports and DIEM. Experimental results show that a solid performance gain is achievable with the proposed methodology.
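The training objective mentioned in the abstract, relative entropy (KL divergence) between the augmented saliency map and viewers' fixation patterns, can be sketched as follows. This is a minimal illustrative sketch in NumPy, not the paper's actual implementation; the function and variable names are hypothetical.

```python
import numpy as np

def kl_divergence(pred, fix, eps=1e-8):
    """Relative entropy D_KL(fix || pred) between two saliency maps.

    Both maps are normalized to probability distributions first.
    Illustrative only -- the paper minimizes this quantity end-to-end
    through its augmenting convnet, which is not reproduced here.
    """
    p = fix / (fix.sum() + eps)    # ground-truth fixation distribution
    q = pred / (pred.sum() + eps)  # predicted (augmented) saliency distribution
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

# Toy example: compare predictions against a fixation density map.
rng = np.random.default_rng(0)
fixation_map = rng.random((8, 8))
perfect_pred = fixation_map.copy()   # identical distribution
uniform_pred = np.ones((8, 8))       # uninformative baseline

print(kl_divergence(perfect_pred, fixation_map))  # near zero
print(kl_divergence(uniform_pred, fixation_map))  # positive: mismatch penalized
```

A perfect prediction yields a divergence near zero, while an uninformative uniform map is penalized, which is why minimizing this loss pushes the augmented saliency toward the observed fixation patterns.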
Pages: 7501-7511 (11 pages)