Comparing Human Activity Recognition Models Based on Complexity and Resource Usage

Cited: 15
Authors
Angerbauer, Simon [1]
Palmanshofer, Alexander [1]
Selinger, Stephan [1]
Kurz, Marc [1]
Affiliations
[1] Univ Appl Sci Upper Austria, Dept Mobil & Energy, A-4232 Hagenberg, Austria
Source
APPLIED SCIENCES-BASEL | 2021, Vol. 11, Issue 18
Keywords
human activity recognition; machine learning; deep learning; CNN; RNN; model complexity; accelerometer data
DOI
10.3390/app11188473
Chinese Library Classification
O6 [Chemistry]
Discipline Code
0703
Abstract
Human Activity Recognition (HAR) is a field with many contrasting application domains, from medical applications to ambient assisted living and sports. With ever-changing use cases and devices also comes a need for newer and better HAR approaches. Machine learning has long been one of the predominant techniques for recognizing activities from extracted features. With the advent of deep learning techniques that push state-of-the-art results in many domains, such as natural language processing and computer vision, researchers have also started to build deep neural nets for HAR. With this increase in complexity comes the necessity to compare the newer approaches to the previous state-of-the-art algorithms; not everything that is new is also better. Therefore, this paper compares typical machine learning models, such as a Random Forest (RF) or a Support Vector Machine (SVM), to two commonly used deep neural net architectures, Convolutional Neural Nets (CNNs) and Recurrent Neural Nets (RNNs), not only with regard to performance but also with regard to model complexity. We measure complexity as the memory consumption, the mean prediction time and the number of trainable parameters of a model. To achieve comparable results, all models are tested on the same publicly available dataset, the UCI HAR Smartphone dataset. With this combination of prediction performance and model complexity, we look for the models that achieve the best possible performance/complexity tradeoff and are therefore the most favourable for use in an application. According to our findings, the best model for a strictly memory-limited use case is the Random Forest, with an F1-score of 88.34%, a memory consumption of only 0.1 MB and a mean prediction time of 0.22 ms. The overall best model in terms of complexity and performance is the SVM with a linear kernel, with an F1-score of 95.62%, a memory consumption of 2 MB and a mean prediction time of 0.47 ms. The two deep neural nets are on par in terms of performance, but their increased complexity makes them less favourable to use.
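Two of the complexity metrics described in the abstract (memory consumption and mean prediction time) can be sketched as a small, model-agnostic measurement harness. The sketch below is illustrative only, not the authors' code: the `CentroidClassifier` is a hypothetical stand-in for the actual RF/SVM/CNN/RNN models, the synthetic data stands in for the UCI HAR dataset (561 features, 6 activity classes), and serialized size is used as a simple proxy for a model's memory footprint.

```python
import pickle
import time
import numpy as np

def measure_complexity(model, X, n_repeats=50):
    """Return (serialized model size in MB, mean prediction time per sample in ms)."""
    size_mb = len(pickle.dumps(model)) / (1024 * 1024)
    start = time.perf_counter()
    for _ in range(n_repeats):
        model.predict(X)
    elapsed_ms = (time.perf_counter() - start) * 1000 / (n_repeats * len(X))
    return size_mb, elapsed_ms

class CentroidClassifier:
    """Toy stand-in classifier: predicts the class with the nearest mean vector."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.centroids_ = np.stack([X[y == c].mean(axis=0) for c in self.classes_])
        return self
    def predict(self, X):
        # Squared Euclidean distance of every sample to every class centroid.
        d = ((X[:, None, :] - self.centroids_[None, :, :]) ** 2).sum(axis=2)
        return self.classes_[d.argmin(axis=1)]

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 561))   # 561 features, as in the UCI HAR dataset
y = rng.integers(0, 6, size=200)  # 6 activity classes
clf = CentroidClassifier().fit(X, y)
mem_mb, ms_per_sample = measure_complexity(clf, X)
print(f"memory: {mem_mb:.3f} MB, mean prediction time: {ms_per_sample:.4f} ms")
```

The same harness can be applied to any object exposing `predict`, so trained RF, SVM and neural-net models could be compared under identical conditions; for neural nets, the trainable-parameter count mentioned in the abstract would be read from the framework's model summary instead.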
Pages: 29