Human action recognition based on multi-scale feature maps from depth video sequences

Cited by: 7
Authors
Li, Chang [1 ]
Huang, Qian [1 ]
Li, Xing [1 ]
Wu, Qianhan [1 ]
Affiliations
[1] Hohai Univ, Sch Comp & Informat, Nanjing, Peoples R China
Keywords
Action recognition; Laplacian pyramid; Multi-scale motion representation; Extreme learning machine; Motion; Scale; Classification; Information; Texture
DOI
10.1007/s11042-021-11193-4
Chinese Library Classification (CLC)
TP [Automation technology, computer technology]
Subject Classification Code
0812
Abstract
Human action recognition is an active research area in computer vision. Although great progress has been made, previous methods mostly recognize actions from depth video sequences at only one scale, thus neglecting the multi-scale spatial changes that provide additional information in practical applications. In this paper, we present a novel framework with a multi-scale mechanism to improve the scale diversity of motion features. We propose a multi-scale feature map called Laplacian pyramid depth motion images (LP-DMI). First, we employ depth motion images (DMI) as templates to generate a multi-scale static representation of actions. Then, we calculate LP-DMI to enhance the multi-scale dynamic information of motions and reduce redundant static information in human bodies. We further extract a multi-granularity descriptor called LP-DMI-HOG to provide more discriminative features. Finally, we utilize an extreme learning machine (ELM) for action classification. The proposed method yields recognition accuracies of 93.41%, 85.12%, and 91.94% on the public MSRAction3D, UTD-MHAD, and DHA datasets, respectively. Extensive experiments show that our method outperforms state-of-the-art benchmarks.
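The abstract describes a four-stage pipeline: DMI templates, a Laplacian pyramid yielding LP-DMI, per-level HOG descriptors concatenated into LP-DMI-HOG, and ELM classification. The Python sketch below illustrates one plausible reading of that pipeline; the DMI construction, the three-level pyramid depth, the HOG parameters, and the SimpleELM class are illustrative assumptions, not the authors' exact implementation.

# A minimal sketch of the pipeline outlined in the abstract, assuming depth frames
# arrive as a (T, H, W) NumPy array. All parameter choices below are assumptions.
import numpy as np
import cv2
from skimage.feature import hog

def depth_motion_image(frames):
    """Accumulate absolute frame-to-frame depth differences into one 2D template (assumed DMI variant)."""
    diffs = np.abs(np.diff(frames.astype(np.float32), axis=0))
    dmi = diffs.sum(axis=0)
    return cv2.normalize(dmi, None, 0, 255, cv2.NORM_MINMAX)

def laplacian_pyramid(img, levels=3):
    """Build a Laplacian pyramid: band-pass detail images plus the coarsest low-pass residual."""
    pyramid, current = [], img.astype(np.float32)
    for _ in range(levels):
        down = cv2.pyrDown(current)
        up = cv2.pyrUp(down, dstsize=(current.shape[1], current.shape[0]))
        pyramid.append(current - up)   # detail retained at this scale
        current = down
    pyramid.append(current)            # coarsest residual
    return pyramid

def lp_dmi_hog(frames, levels=3):
    """Concatenate HOG descriptors computed on every pyramid level (multi-granularity feature)."""
    dmi = depth_motion_image(frames)
    feats = [hog(lvl, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
             for lvl in laplacian_pyramid(dmi, levels)]
    return np.concatenate(feats)

class SimpleELM:
    """Toy extreme learning machine: random hidden layer, closed-form least-squares output weights."""
    def __init__(self, n_hidden=1000, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        # Assumes integer class labels 0..C-1.
        n_classes = int(y.max()) + 1
        self.W = self.rng.standard_normal((X.shape[1], self.n_hidden))
        self.b = self.rng.standard_normal(self.n_hidden)
        H = np.tanh(X @ self.W + self.b)
        T = np.eye(n_classes)[y]               # one-hot targets
        self.beta = np.linalg.pinv(H) @ T      # pseudo-inverse solution for output weights
        return self

    def predict(self, X):
        return np.argmax(np.tanh(X @ self.W + self.b) @ self.beta, axis=1)

In use, LP-DMI-HOG vectors from labelled depth clips would be stacked row-wise and passed to SimpleELM.fit; the closed-form pseudo-inverse step is what makes ELM training fast compared with iteratively trained classifiers.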
Pages: 32111-32130
Number of pages: 20
Related references
63 in total
[1]   Human Action Recognition Using Convolutional Neural Network and Depth Sensor Data [J].
Ahmad, Zeeshan ;
Illanko, Kandasamy ;
Khan, Naimul ;
Androutsos, Dimitri .
2019 INTERNATIONAL CONFERENCE ON INFORMATION TECHNOLOGY AND COMPUTER COMMUNICATIONS (ITCC 2019), 2019, :1-5
[2]  
Alpatov AV, 2018, MEDD C EMBED COMPUT, P579
[3]   Human action recognition using bag of global and local Zernike moment features [J].
Aly, Saleh ;
Sayed, Asmaa .
MULTIMEDIA TOOLS AND APPLICATIONS, 2019, 78 (17) :24923-24953
[4]   The recognition of human movement using temporal templates [J].
Bobick, AF ;
Davis, JW .
IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2001, 23 (03) :257-267
[5]   Human action recognition using MHI and SHI based GLAC features and Collaborative Representation Classifier [J].
Bulbul, Mohammad Farhad ;
Islam, Saiful ;
Ali, Hazrat .
JOURNAL OF INTELLIGENT & FUZZY SYSTEMS, 2019, 36 (04) :3385-3401
[6]   THE LAPLACIAN PYRAMID AS A COMPACT IMAGE CODE [J].
BURT, PJ ;
ADELSON, EH .
IEEE TRANSACTIONS ON COMMUNICATIONS, 1983, 31 (04) :532-540
[7]   Gradient Local Auto-Correlations and Extreme Learning Machine for Depth-Based Activity Recognition [J].
Chen, Chen ;
Hou, Zhenjie ;
Zhang, Baochang ;
Jiang, Junjun ;
Yang, Yun .
ADVANCES IN VISUAL COMPUTING, PT I (ISVC 2015), 2015, 9474 :613-623
[8]  
Chen C, 2015, IEEE IMAGE PROC, P168, DOI 10.1109/ICIP.2015.7350781
[9]   Action Recognition from Depth Sequences Using Depth Motion Maps-based Local Binary Patterns [J].
Chen, Chen ;
Jafari, Roozbeh ;
Kehtarnavaz, Nasser .
2015 IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV), 2015, :1092-1099
[10]   Skeleton-Based Action Recognition with Shift Graph Convolutional Network [J].
Cheng, Ke ;
Zhang, Yifan ;
He, Xiangyu ;
Chen, Weihan ;
Cheng, Jian ;
Lu, Hanqing .
2020 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2020, :180-189