SST: Single-Stream Temporal Action Proposals

Cited by: 281
Authors
Buch, Shyamal [1 ]
Escorcia, Victor [2 ]
Shen, Chuanqi [1 ]
Ghanem, Bernard [2 ]
Niebles, Juan Carlos [1 ]
Affiliations
[1] Stanford Univ, Stanford, CA 94305 USA
[2] KAUST, Thuwal, Saudi Arabia
Source
30TH IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2017) | 2017
DOI
10.1109/CVPR.2017.675
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Our paper presents a new approach for temporal detection of human actions in long, untrimmed video sequences. We introduce Single-Stream Temporal Action Proposals (SST), a new effective and efficient deep architecture for the generation of temporal action proposals. Our network can run continuously in a single stream over very long input video sequences, without the need to divide input into short overlapping clips or temporal windows for batch processing. We demonstrate empirically that our model outperforms the state-of-the-art on the task of temporal action proposal generation, while achieving some of the fastest processing speeds in the literature. Finally, we demonstrate that using SST proposals in conjunction with existing action classifiers results in improved state-of-the-art temporal action detection performance.
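The abstract's core claim is that proposal scoring can run in one continuous recurrent pass over the whole video, rather than over overlapping clips. Below is a minimal numpy sketch of that single-stream idea, under stated assumptions: the weights are random stand-ins for learned parameters, the dimensions are illustrative (the actual model consumes C3D clip features), and the GRU here is a generic one, not a claim about the paper's exact layer configuration. At each time step the recurrent state emits K confidence scores, one per anchor length, for proposals ending at that step.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (hypothetical, not the paper's): feature dim,
# hidden size, and number of anchor proposal lengths per time step.
D, H, K = 32, 16, 8

# Random weights stand in for learned GRU and output parameters.
Wz, Uz = rng.normal(0, 0.1, (H, D)), rng.normal(0, 0.1, (H, H))
Wr, Ur = rng.normal(0, 0.1, (H, D)), rng.normal(0, 0.1, (H, H))
Wh, Uh = rng.normal(0, 0.1, (H, D)), rng.normal(0, 0.1, (H, H))
Wo = rng.normal(0, 0.1, (K, H))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def single_stream_proposals(features):
    """One continuous GRU pass over the full feature stream.

    At step t, score k is the confidence that an action spanning the
    last (k+1) time steps ends at t -- no overlapping windows or
    batched clips are required.
    """
    h = np.zeros(H)
    scores = []
    for x in features:                    # single pass over the video
        z = sigmoid(Wz @ x + Uz @ h)      # update gate
        r = sigmoid(Wr @ x + Ur @ h)      # reset gate
        h_tilde = np.tanh(Wh @ x + Uh @ (r * h))
        h = (1 - z) * h + z * h_tilde
        scores.append(sigmoid(Wo @ h))    # K proposal confidences at t
    return np.array(scores)               # shape (T, K)

T = 100                                   # e.g., 100 time steps of features
confidences = single_stream_proposals(rng.normal(0, 1, (T, D)))
print(confidences.shape)                  # (100, 8)
```

Because the hidden state carries history forward, memory cost stays constant in the video length, which is what lets the model process very long untrimmed sequences in one stream.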
Pages: 6373-6382
Page count: 10