STRNet: Triple-stream Spatiotemporal Relation Network for Action Recognition

Cited by: 0
Authors
Zhi-Wei Xu
Xiao-Jun Wu
Josef Kittler
Affiliations
[1] Jiangnan University, School of Artificial Intelligence and Computer Science
[2] Jiangsu Provincial Engineering Laboratory of Pattern Recognition and Computational Intelligence
[3] University of Surrey, Centre for Vision, Speech and Signal Processing
Source
International Journal of Automation and Computing | 2021, Vol. 18
Keywords
Action recognition; spatiotemporal relation; multi-branch fusion; long-term representation; video classification
DOI
Not available
Abstract
Learning comprehensive spatiotemporal features is crucial for human action recognition. Existing methods tend to model spatiotemporal feature blocks in an integrate-separate-integrate form, e.g., the appearance-and-relation network (ARTNet) and the spatiotemporal and motion network (STM). However, as these blocks are stacked, the rear part of the network becomes difficult to interpret. To avoid this problem, we propose a novel architecture called the spatiotemporal relation network (STRNet), which learns explicit appearance, motion and, in particular, temporal relation information. Specifically, STRNet consists of three branches, which separate the features into 1) an appearance pathway, to obtain spatial semantics, 2) a motion pathway, to reinforce the spatiotemporal feature representation, and 3) a relation pathway, to capture the temporal relation details of successive frames and to explore long-term representation dependencies. Moreover, STRNet does not simply merge the multi-branch information; it applies a flexible and effective strategy to fuse the complementary information from the multiple pathways. We evaluate our network on four major action recognition benchmarks: Kinetics-400, UCF-101, HMDB-51, and Something-Something v1. STRNet achieves state-of-the-art results on UCF-101 and HMDB-51, and accuracy comparable to the state of the art on Something-Something v1 and Kinetics-400.
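The abstract only outlines the triple-stream design, so the following is a minimal PyTorch sketch of the general idea: three parallel pathways over a video clip whose outputs are fused by a learned weighted sum before classification. All module names, layer choices, the frame-difference motion cue, and the softmax-weighted fusion are illustrative assumptions for exposition, not the authors' implementation.

    # Minimal sketch of a triple-stream spatiotemporal network (assumed, not STRNet's actual layers).
    import torch
    import torch.nn as nn

    class Pathway(nn.Module):
        """Generic 3D-conv pathway over a clip of shape (B, C, T, H, W)."""
        def __init__(self, in_channels: int, out_channels: int):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv3d(in_channels, out_channels, kernel_size=3, padding=1),
                nn.BatchNorm3d(out_channels),
                nn.ReLU(inplace=True),
                nn.AdaptiveAvgPool3d(1),  # global spatiotemporal pooling
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.net(x).flatten(1)  # (B, out_channels)

    class TripleStreamNet(nn.Module):
        def __init__(self, num_classes: int, channels: int = 64):
            super().__init__()
            self.appearance = Pathway(3, channels)  # spatial semantics from RGB frames
            self.motion = Pathway(3, channels)      # fed frame differences as a motion cue
            self.relation = Pathway(3, channels)    # temporal relations of successive frames
            # Learned soft weights over the three streams (an assumed fusion strategy).
            self.fusion_weights = nn.Parameter(torch.ones(3))
            self.classifier = nn.Linear(channels, num_classes)

        def forward(self, clip: torch.Tensor) -> torch.Tensor:
            # Approximate motion by temporal differences of adjacent frames.
            diff = clip[:, :, 1:] - clip[:, :, :-1]
            feats = torch.stack([
                self.appearance(clip),
                self.motion(diff),
                self.relation(clip),
            ], dim=0)                                   # (3, B, channels)
            w = torch.softmax(self.fusion_weights, 0)   # normalized stream weights
            fused = (w[:, None, None] * feats).sum(0)   # weighted-sum fusion
            return self.classifier(fused)

    if __name__ == "__main__":
        model = TripleStreamNet(num_classes=400)    # e.g., Kinetics-400
        clip = torch.randn(2, 3, 16, 112, 112)      # (batch, RGB, frames, height, width)
        print(model(clip).shape)                    # torch.Size([2, 400])

The weighted sum is one simple way to realize "fusing complementary information from multiple pathways"; the paper's actual fusion strategy may differ.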
Pages: 718-730
Number of pages: 12