Radar-Based Human Gait Recognition Using Dual-Channel Deep Convolutional Neural Network

Cited: 66
Authors
Bai, Xueru [1 ]
Hui, Ye [1 ]
Wang, Li [2 ]
Zhou, Feng [2 ]
Affiliations
[1] Xidian Univ, Natl Lab Radar Signal Proc, Xian 710071, Peoples R China
[2] Xidian Univ, Minist Educ, Key Lab Elect Informat Countermeasure & Simulat T, Xian 710071, Peoples R China
Source
IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING | 2019, Vol. 57, Issue 12
Funding
National Natural Science Foundation of China
Keywords
Bones; Feature extraction; Radar; Gait recognition; Torso; Legged locomotion; Training; Deep convolutional neural networks (DCNNs); dual-channel; human gait recognition; motion capture (MOCAP) data set; short-time Fourier transform (STFT); HUMAN ACTIVITY CLASSIFICATION; DOPPLER;
DOI
10.1109/TGRS.2019.2929096
Chinese Library Classification
P3 [Geophysics]; P59 [Geochemistry]
Discipline codes
0708; 070902
Abstract
This paper addresses radar-based human gait recognition with a dual-channel deep convolutional neural network (DC-DCNN). To enrich the limited radar data set of human gaits and to provide a benchmark for classifier training, evaluation, and comparison, it proposes an effective method for generating radar echoes from the publicly accessible infrared motion capture (MOCAP) data set. Exploiting the different nonstationary characteristics of the micro-Doppler (m-D) signatures of the torso and the limbs, it enhances their distinguishable joint time-frequency (JTF) features by applying short-time Fourier transforms (STFTs) with varying sliding-window lengths, and then designs the DC-DCNN structure to achieve refined human gait recognition through separate feature extraction and fusion. Experiments show that, compared with the traditional single-channel deep convolutional neural network (SC-DCNN), the proposed method achieves higher recognition accuracy in refined human gait classification without incurring additional radar resources, and it can be readily extended to refined recognition of other human activities.
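The abstract's dual-window STFT idea can be sketched as follows. This is not the authors' implementation: the synthetic echo, sampling rate, and window lengths are all illustrative assumptions; the point is only that a long analysis window favors frequency resolution (slow torso m-D) while a short window favors time resolution (fast limb m-D), yielding two spectrogram channels for separate DCNN branches.

```python
import numpy as np
from scipy.signal import stft

fs = 1000  # Hz, assumed sampling rate (illustrative)
t = np.arange(0, 2.0, 1 / fs)
# Synthetic stand-in for a radar gait echo: a slow torso modulation plus a
# faster limb micro-Doppler component (not the paper's MOCAP-derived echoes).
echo = (np.cos(2 * np.pi * (50 * t + 10 * np.sin(2 * np.pi * 1 * t)))
        + 0.5 * np.cos(2 * np.pi * (150 * t + 40 * np.sin(2 * np.pi * 2 * t))))

def jtf_channels(x, fs, long_win=256, short_win=64):
    """Two STFT magnitude spectrograms with different window lengths.

    long_win  -> finer frequency resolution (torso m-D)
    short_win -> finer time resolution (limb m-D)
    Window lengths are illustrative choices, not values from the paper.
    """
    _, _, Z_long = stft(x, fs=fs, nperseg=long_win, noverlap=long_win // 2)
    _, _, Z_short = stft(x, fs=fs, nperseg=short_win, noverlap=short_win // 2)
    return np.abs(Z_long), np.abs(Z_short)

ch_long, ch_short = jtf_channels(echo, fs)
# Each channel would feed one branch of the dual-channel network before fusion.
print(ch_long.shape, ch_short.shape)
```

In a dual-channel setup the two spectrograms are deliberately kept at different resolutions, so each branch extracts features from its own channel before the feature maps are fused for classification.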
Pages: 9767-9778 (12 pages)