Improved two-stream model for human action recognition

Cited by: 0
Authors
Yuxuan Zhao
Ka Lok Man
Jeremy Smith
Kamran Siddique
Sheng-Uei Guan
Affiliations
[1] Department of Computer Science and Software Engineering, Xi’an Jiaotong-Liverpool University
[2] imec-DistriNet, KU Leuven
[3] Swinburne University of Technology
[4] Department of Electrical Engineering and Electronics, University of Liverpool
[5] Department of Information and Communication Technology, Xiamen University Malaysia
Source
EURASIP Journal on Image and Video Processing | Volume 2020
Keywords
Action recognition; Two-stream CNN model; Spatial stream; LSTM-based model
DOI
Not available
Abstract
This paper addresses the recognition of human actions in videos. Human action recognition can be seen as the automatic labeling of a video according to the actions occurring in it, and it has become one of the most challenging and attractive problems in the pattern recognition and video classification fields. The problem is difficult to solve with traditional video processing methods because of several challenges, such as background noise, the varying sizes of subjects across videos, and the speed of actions. Building on the progress of deep learning methods, several directions have been developed for recognizing human actions in video, such as the long short-term memory (LSTM)-based model, the two-stream convolutional neural network (CNN) model, and the convolutional 3D model. In this paper, we focus on the two-stream structure. The traditional two-stream CNN network addresses the problem that CNNs alone do not perform satisfactorily on temporal features: by training a temporal stream that takes optical flow as its input, a CNN gains the ability to extract temporal features. However, optical flow contains only limited temporal information because it records only the movements of pixels along the x-axis and the y-axis. We therefore design and implement a new two-stream model that uses an LSTM-based model in its spatial stream to extract both spatial and temporal features from RGB frames, in contrast to traditional approaches, which typically use the spatial stream to extract only spatial features. In addition, we implement a DenseNet in the temporal stream to improve recognition accuracy. The quantitative evaluation and experiments are conducted on UCF-101, a well-established public video dataset. For the temporal stream, we use the optical flow of UCF-101, with the optical-flow images provided by the Graz University of Technology. The experimental results show that the proposed method outperforms the traditional two-stream CNN method by at least 3% in accuracy. The proposed model also achieves higher recognition accuracies for both the spatial and the temporal streams individually, and it achieves the best recognition performance when compared with state-of-the-art methods.
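The abstract specifies the architecture only at a high level, so the following PyTorch sketch shows one plausible realization of the described design: an LSTM-based spatial stream over RGB frames and a DenseNet temporal stream over stacked optical-flow fields, fused by averaging class scores. The ResNet-18 frame encoder, hidden size, flow-stack length, and score-averaging fusion are illustrative assumptions, not details confirmed by the paper.

```python
# Minimal sketch of the described two-stream design (assumptions noted above).
import torch.nn as nn
import torchvision.models as models

NUM_CLASSES = 101  # UCF-101 has 101 action classes


class SpatialStream(nn.Module):
    """Per-frame CNN features followed by an LSTM, so the spatial stream
    also captures temporal order in the RGB frames (encoder is assumed)."""

    def __init__(self, hidden_size=512):
        super().__init__()
        backbone = models.resnet18(weights=None)  # assumed frame encoder
        backbone.fc = nn.Identity()               # expose 512-d features
        self.cnn = backbone
        self.lstm = nn.LSTM(512, hidden_size, batch_first=True)
        self.fc = nn.Linear(hidden_size, NUM_CLASSES)

    def forward(self, frames):                    # frames: (B, T, 3, H, W)
        b, t = frames.shape[:2]
        feats = self.cnn(frames.flatten(0, 1)).view(b, t, -1)
        _, (h, _) = self.lstm(feats)
        return self.fc(h[-1])                     # logits: (B, NUM_CLASSES)


class TemporalStream(nn.Module):
    """DenseNet over a stack of optical-flow fields (2 channels per field,
    one each for x- and y-displacement)."""

    def __init__(self, flow_len=10):              # flow_len is an assumption
        super().__init__()
        net = models.densenet121(weights=None)
        # Widen the first convolution to accept 2 * flow_len flow channels.
        net.features.conv0 = nn.Conv2d(2 * flow_len, 64, kernel_size=7,
                                       stride=2, padding=3, bias=False)
        net.classifier = nn.Linear(net.classifier.in_features, NUM_CLASSES)
        self.net = net

    def forward(self, flow):                      # flow: (B, 2*flow_len, H, W)
        return self.net(flow)


def fused_scores(spatial_logits, temporal_logits):
    """Late fusion by averaging per-stream softmax scores (a common choice;
    the paper's exact fusion rule is not given in the abstract)."""
    return (spatial_logits.softmax(-1) + temporal_logits.softmax(-1)) / 2
```

At inference time, a clip would supply T RGB frames to the spatial stream and a stack of flow_len horizontal/vertical flow fields to the temporal stream; per the abstract, each redesigned stream improves accuracy individually, and the fused model outperforms the traditional two-stream baseline by at least 3%.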