An efficient 3D convolutional neural network with informative 3D volumes for human activity recognition using wearable sensors

Cited by: 1
Author
Zebhi, Saeedeh [1 ]
Affiliation
[1] Yazd Univ, Elect Engn Dept, Yazd, Iran
Funding
UK Research and Innovation;
Keywords
Continuous wavelet transform; 3D-CNNs; Action recognition; Short-time Fourier transform;
DOI
10.1007/s11042-023-17400-8
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
The Short-Time Fourier Transform (STFT) and the Continuous Wavelet Transform (CWT) are two popular transforms for obtaining time-frequency representations. Using them, one-dimensional signals acquired from different axes or sensors are mapped to time-frequency representations. These representations can be stacked into 3D volumes that contain the time-frequency information of the signals. Recently, the success of 3D convolutional neural networks (3D-CNNs) in video classification has motivated combining them with such 3D volumes. Based on this idea, a novel method composed of two basic methods is proposed in this paper. In the basic methods, the magnitudes of the STFT and the CWT are used to construct the 3D volumes, and a 3D-CNN developed for this purpose is applied for classification. The proposed method fuses the two streams of these 3D volumes. It attains accuracies of 96.61%, 97.77%, 99.65% and 98.32% on the UCI HAR, MOTIONSENSE, MHEALTH and WISDM datasets, respectively. The achieved results demonstrate the superiority of the proposed method over state-of-the-art approaches.
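As a rough illustration of the 3D-volume construction described in the abstract, the sketch below stacks per-channel STFT magnitudes from one sensor window into a single volume. It is not the authors' code; the sampling rate, window length, overlap, and the stft_volume helper are illustrative assumptions, and only SciPy's scipy.signal.stft is used.

import numpy as np
from scipy.signal import stft

def stft_volume(signals, fs=50.0, nperseg=64, noverlap=32):
    """Stack per-channel STFT magnitudes into a 3D volume.

    signals: array of shape (num_channels, num_samples), e.g. the three
             accelerometer axes of one activity window.
    Returns an array of shape (num_channels, num_freq_bins, num_frames).
    """
    volume = []
    for channel in signals:
        _, _, Zxx = stft(channel, fs=fs, nperseg=nperseg, noverlap=noverlap)
        volume.append(np.abs(Zxx))   # keep only the magnitude spectrogram
    return np.stack(volume, axis=0)  # (channels, freq, time)

if __name__ == "__main__":
    # Synthetic 3-axis window of 128 samples at 50 Hz (as in UCI HAR windows).
    rng = np.random.default_rng(0)
    window = rng.standard_normal((3, 128))
    vol = stft_volume(window)
    print(vol.shape)  # e.g. (3, 33, 5), one input volume for a 3D-CNN

An analogous volume can be built from CWT magnitudes (e.g. with PyWavelets), and the two resulting streams are what the proposed method fuses.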
Pages: 42233-42256
Number of pages: 24