Deep Learning Models for Real-time Human Activity Recognition with Smartphones

Cited by: 384
Authors
Wan, Shaohua [1 ,2 ]
Qi, Lianyong [3 ]
Xu, Xiaolong [4 ]
Tong, Chao [5 ]
Gu, Zonghua [6 ,7 ]
Affiliations
[1] Zhongnan Univ Econ & Law, Sch Informat & Safety Engn, Wuhan 430073, Peoples R China
[2] Nanjing Univ, State Key Lab Novel Software Technol, Nanjing 210023, Jiangsu, Peoples R China
[3] Qufu Normal Univ, Sch Informat Sci & Engn, Rizhao 276826, Peoples R China
[4] Nanjing Univ Informat Sci & Technol, Sch Comp & Software, Nanjing 210044, Jiangsu, Peoples R China
[5] Beihang Univ, Sch Comp Sci & Engn, Beijing 100191, Peoples R China
[6] Umea Univ, Dept Appl Phys & Elect, S-90187 Umea, Sweden
[7] Zhejiang Univ, Coll Comp Sci, Hangzhou 310027, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Deep learning; Human activity recognition; Smartphone; Feature extraction; Caption generation; Internet; Sensors; Edge
DOI
10.1007/s11036-019-01445-x
Chinese Library Classification
TP3 [computing technology, computer technology];
Discipline Code
0812;
Abstract
With the widespread adoption of mobile edge computing (MEC), MEC is serving as a bridge that narrows the gap between medical staff and patients, and it is also moving toward supervising individual health in an automatic and intelligent manner. One of the main MEC technologies in healthcare monitoring systems is human activity recognition (HAR). Built-in multifunctional sensors make smartphones a ubiquitous platform for acquiring and analyzing data, making it possible for smartphones to perform HAR. Recognizing human activity with a smartphone's built-in accelerometer has been studied extensively, but in practice, traditional methods fail to identify complicated activities in real time from multimodal, high-dimensional sensor data. This paper designs a smartphone inertial accelerometer-based architecture for HAR. While participants perform typical daily activities, the smartphone collects the sensory data sequence, extracts high-efficiency features from the original data, and obtains the user's physical behavior data through multiple three-axis accelerometers. The data are preprocessed by denoising, normalization and segmentation to extract valuable feature vectors. In addition, a real-time human activity classification method based on a convolutional neural network (CNN) is proposed, which uses a CNN for local feature extraction. Finally, CNN, LSTM, BLSTM, MLP and SVM models are evaluated on the UCI and Pamap2 datasets. We explore how to train deep learning methods and demonstrate that the proposed method outperforms the others on these two large public datasets.
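The abstract's preprocessing step (segmentation and normalization of the accelerometer stream before classification) can be illustrated with a minimal sketch. This is not the authors' code: the window length (128 samples) and 50% overlap are illustrative assumptions commonly used with the UCI HAR dataset, and the sketch covers only one accelerometer axis.

```python
# Hedged sketch (not the paper's implementation): sliding-window
# segmentation with 50% overlap, followed by per-window z-score
# normalization of a single accelerometer axis.

def segment(signal, window=128, overlap=0.5):
    """Split a 1-D sample sequence into fixed-length overlapping windows."""
    step = int(window * (1 - overlap))
    return [signal[i:i + window]
            for i in range(0, len(signal) - window + 1, step)]

def zscore(values):
    """Normalize one window to zero mean and unit variance."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / n
    std = var ** 0.5 or 1.0  # guard against a constant window
    return [(v - mean) / std for v in values]

stream = [float(i % 10) for i in range(512)]  # stand-in accelerometer axis
windows = [zscore(w) for w in segment(stream)]
print(len(windows), len(windows[0]))  # → 7 128
```

Each normalized window would then be fed to the CNN (or an LSTM/BLSTM/MLP/SVM baseline) as one classification instance; overlapping windows trade extra computation for lower latency between predictions.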
Pages: 743-755
Page count: 13
References
44 records in total
[1]  
Anguita D, 2012, IWAAL, P216
[2]  
Anguita D, 2013, ESANN, V3
[3]   Activity recognition from user-annotated acceleration data [J].
Bao, L ;
Intille, SS .
PERVASIVE COMPUTING, PROCEEDINGS, 2004, 3001 :1-17
[4]   GCHAR: An efficient Group-based Context-aware human activity recognition on smartphone [J].
Cao, Liang ;
Wang, Yufeng ;
Zhang, Bo ;
Jin, Qun ;
Vasilakos, Athanasios V. .
JOURNAL OF PARALLEL AND DISTRIBUTED COMPUTING, 2018, 118 :67-80
[5]   Deploying Data-intensive Applications with Multiple Services Components on Edge [J].
Chen, Yishan ;
Deng, Shuiguang ;
Ma, Hongtao ;
Yin, Jianwei .
MOBILE NETWORKS & APPLICATIONS, 2020, 25 (02) :426-441
[6]   Performance Analysis of Smartphone-Sensor Behavior for Human Activity Recognition [J].
Chen, Yufei ;
Shen, Chao .
IEEE ACCESS, 2017, 5 :3095-3110
[7]   Robust Human Activity Recognition Using Smartphone Sensors via CT-PCA and Online SVM [J].
Chen, Zhenghua ;
Zhu, Qingchang ;
Soh, Yeng Chai ;
Zhang, Le .
IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS, 2017, 13 (06) :3070-3080
[8]  
Chen Zhenghua, 2018, IEEE T IND INFORM
[9]  
Cheng WY, 2018, AAAI CONF ARTIF INTE, P2151
[10]   Stimulus-driven and concept-driven analysis for image caption generation [J].
Ding, Songtao ;
Qu, Shiru ;
Xi, Yuling ;
Wan, Shaohua .
NEUROCOMPUTING, 2020, 398 :520-530