Conflux LSTMs Network: A Novel Approach for Multi-View Action Recognition

Cited by: 50
Authors
Ullah, Amin [1 ]
Muhammad, Khan [2 ]
Hussain, Tanveer [1 ]
Baik, Sung Wook [1 ]
Affiliations
[1] Sejong Univ, Seoul, South Korea
[2] Sejong Univ, Dept Software, Seoul, South Korea
Funding
National Research Foundation of Singapore
Keywords
Artificial intelligence; Deep learning; Action recognition; Multi-view video analytics; Sequence learning; LSTM; CNN; Multi-view action recognition; Neural networks; Surveillance; Features
DOI
10.1016/j.neucom.2019.12.151
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Multi-view action recognition (MVAR) exploits complementary cues from multiple camera views for effective action recognition, yet it remains under-explored. The MVAR domain poses several challenges, such as divergent viewpoints, occluded regions, and differing appearance scales across views, that demand better solutions for real-world applications. In this paper, we present a conflux long short-term memory (LSTMs) network to recognize actions from multi-view cameras. The proposed framework has four major steps: 1) frame-level feature extraction, 2) propagation of the features through the conflux LSTMs network to learn view self-reliant patterns, 3) view inter-reliant pattern learning and correlation computation, and 4) action classification. First, we extract deep features from a sequence of frames for each view using a pre-trained VGG19 CNN model. Second, we forward the extracted features to the conflux LSTMs network to learn the view self-reliant patterns. Third, we compute inter-view correlations via the pairwise dot product of the LSTM outputs corresponding to different views to learn the view inter-reliant patterns. Finally, we use flatten layers followed by a SoftMax classifier for action recognition. Experimental results on benchmark datasets report improvements of 3% and 2% over the state of the art on the Northwestern-UCLA and MCAD datasets, respectively. (c) 2021 The Authors. Published by Elsevier B.V. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
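The abstract describes computing inter-view correlations as a pairwise dot product over per-view LSTM outputs, but gives no implementation details. A minimal NumPy sketch of that step, with function name, shapes, and the per-time-step formulation all assumed for illustration, might look like:

```python
import numpy as np

def pairwise_view_correlation(view_features):
    """Pairwise dot-product correlations between per-view feature
    sequences (an illustrative sketch, not the authors' code).

    view_features: list of arrays, each of shape (T, D), e.g. the
    hidden states of one view's LSTM branch over T time steps.
    Returns a dict mapping view-index pairs (i, j), i < j, to a
    (T, T) correlation map between the two views.
    """
    correlations = {}
    for i in range(len(view_features)):
        for j in range(i + 1, len(view_features)):
            # Dot product of every time step of view i with every
            # time step of view j yields a T x T correlation map.
            correlations[(i, j)] = view_features[i] @ view_features[j].T
    return correlations

# Two hypothetical camera views, 8 time steps, 16 hidden units each.
rng = np.random.default_rng(0)
views = [rng.standard_normal((8, 16)) for _ in range(2)]
corr = pairwise_view_correlation(views)
print(corr[(0, 1)].shape)  # (8, 8)
```

In a full model, such correlation maps would be flattened and passed to the classification layers; the exact fusion used in the paper is not specified in this record.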
Pages: 321-329 (9 pages)