Two-Stream Collaborative Learning With Spatial-Temporal Attention for Video Classification

Cited by: 91
Authors
Peng, Yuxin [1 ]
Zhao, Yunzhen [1 ]
Zhang, Junchao [1 ]
Affiliation
[1] Peking Univ, Inst Comp Sci & Technol, Beijing 100871, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Video classification; static-motion collaborative learning; spatial-temporal attention; adaptively weighted learning; ACTION RECOGNITION; REPRESENTATION; HISTOGRAMS; FLOW;
DOI
10.1109/TCSVT.2018.2808685
Chinese Library Classification
TM [Electrical Engineering]; TN [Electronics and Communication Technology]
Subject Classification Codes
0808; 0809
Abstract
Video classification is highly important and has widespread applications, such as video search and intelligent surveillance. Video naturally contains both static and motion information, which can be represented by frames and optical flow, respectively. Recently, researchers have generally adopted deep networks to capture the static and motion information separately, which has two main limitations. First, the coexistence relationship between spatial and temporal attention is ignored, although they should be jointly modeled as the spatial and temporal evolutions of video to learn discriminative video features. Second, the strong complementarity between static and motion information is ignored, although they should be collaboratively learned to enhance each other. To address the above two limitations, this paper proposes the two-stream collaborative learning with spatial-temporal attention (TCLSTA) approach, which consists of two models. First, in the spatial-temporal attention model, the spatial-level attention emphasizes the salient regions in a frame, and the temporal-level attention exploits the discriminative frames in a video. They are mutually enhanced to jointly learn the discriminative static and motion features for better classification performance. Second, the static-motion collaborative model not only achieves mutual guidance between static and motion information to enhance the feature learning, but also adaptively learns the fusion weights of the static and motion streams, thus exploiting the strong complementarity between static and motion information to improve video classification. Experiments on four widely used data sets show that our TCLSTA approach achieves the best performance compared with more than 10 state-of-the-art methods.
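The abstract describes two mechanisms: attention at the spatial level (regions within a frame) and temporal level (frames within a video), and an adaptively weighted fusion of the static (RGB) and motion (optical-flow) stream predictions. The following is a minimal, hypothetical PyTorch sketch of those two ideas only; it is not the authors' TCLSTA architecture. The layer sizes, the attention modules shared across streams, and the softmax-normalised fusion weights are all illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionPooling(nn.Module):
    # Soft attention over a set of feature vectors (regions of a frame,
    # or frames of a video): score each vector, softmax, weighted sum.
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, x):                        # x: (batch, n, dim)
        w = F.softmax(self.score(x), dim=1)      # (batch, n, 1)
        return (w * x).sum(dim=1)                # (batch, dim)

class TwoStreamAttentionSketch(nn.Module):
    # Hypothetical two-stream classifier: spatial attention pools region
    # features into frame features, temporal attention pools frame features
    # into a video feature, and the two stream scores are fused with
    # learned, softmax-normalised weights.
    def __init__(self, dim=2048, num_classes=101):
        super().__init__()
        self.spatial_att = AttentionPooling(dim)   # shared by both streams for brevity
        self.temporal_att = AttentionPooling(dim)
        self.static_fc = nn.Linear(dim, num_classes)
        self.motion_fc = nn.Linear(dim, num_classes)
        self.fusion_logits = nn.Parameter(torch.zeros(2))  # adaptive fusion weights

    def encode(self, feats):                     # feats: (batch, frames, regions, dim)
        b, t, r, d = feats.shape
        frame_feats = self.spatial_att(feats.reshape(b * t, r, d)).reshape(b, t, d)
        return self.temporal_att(frame_feats)    # (batch, dim)

    def forward(self, static_feats, motion_feats):
        static_score = self.static_fc(self.encode(static_feats))
        motion_score = self.motion_fc(self.encode(motion_feats))
        w = F.softmax(self.fusion_logits, dim=0)
        return w[0] * static_score + w[1] * motion_score

# Example with random features standing in for CNN region descriptors.
model = TwoStreamAttentionSketch(dim=128, num_classes=10)
rgb = torch.randn(2, 8, 49, 128)     # (batch, frames, regions, dim)
flow = torch.randn(2, 8, 49, 128)
logits = model(rgb, flow)            # shape (2, 10)

In the paper, the spatial and temporal attention are mutually enhanced and the two streams guide each other during training; this sketch shows only the attention-weighted pooling and the learned fusion weights described in the abstract.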
Pages: 773-786
Number of pages: 14