A Deep Learning-Based Semantic Segmentation Model Using MCNN and Attention Layer for Human Activity Recognition

Cited by: 8
Authors
Lee, Sang-hyub [1 ]
Lee, Deok-Won [1 ]
Kim, Mun Sang [1 ]
Affiliations
[1] Gwangju Inst Sci & Technol, Sch Integrated Technol, Gwangju 61005, South Korea
Keywords
human activity recognition; transitional activities; deep learning; accelerometer sensor; attention layer; semantic segmentation;
DOI
10.3390/s23042278
Chinese Library Classification
O65 [Analytical Chemistry];
Subject Classification Codes
070302 ; 081704 ;
Abstract
With the development of wearable devices such as smartwatches, many studies have been conducted on the recognition of various human activities. Various types of data are used, e.g., acceleration data collected with an inertial measurement unit sensor. Most scholars segment the entire time-series data with a fixed window size before performing recognition. However, this approach limits performance because the execution time of a human activity is usually unknown. Therefore, many attempts have been made to solve this problem by sliding the classification window along the time axis. In this study, we propose a method that classifies every frame, rather than a window-based recognition method. For implementation, features extracted by multiple convolutional neural networks with different kernel sizes are fused. In addition, similar to the convolutional block attention module, an attention layer is applied at both the channel and spatial levels to improve recognition performance. Evaluation experiments were performed to verify the performance of the proposed model and to prove the effectiveness of the proposed method for human activity recognition. For comparison, we applied models built from various basic deep learning modules, as well as models that classify all frames for recognizing a specific wave in electrocardiography data. The proposed model reported the best F1-score (over 0.9) for all target activities compared with other deep learning-based recognition models. Furthermore, to verify the improvement of the proposed CEF method, it was compared with three types of SW method; the proposed method reported an F1-score 0.154 higher than SW. In the case of the designed model, the F1-score was higher by as much as 0.184.
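The abstract describes a pipeline of parallel convolutions with different kernel sizes whose features are fused and then re-weighted by CBAM-style channel and spatial attention, producing one prediction-ready feature vector per frame. The following is a minimal NumPy sketch of that idea only; the random filters, layer sizes, and sigmoid gating are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

def conv1d_feats(x, kernel_sizes=(3, 5, 7), n_filters=4, seed=0):
    """Parallel 1D convolutions with different kernel sizes
    (the multi-kernel CNN idea), concatenated along the channel axis.
    Filters are random here purely for illustration. x: (T,) signal."""
    rng = np.random.default_rng(seed)
    feats = []
    for k in kernel_sizes:
        kernels = rng.standard_normal((n_filters, k))
        # 'same' padding keeps one output per input frame,
        # which frame-wise (per-frame) classification requires.
        feats.append(np.stack(
            [np.convolve(x, w, mode="same") for w in kernels]))
    return np.concatenate(feats, axis=0)  # (C, T)

def channel_attention(f):
    """CBAM-style channel attention: pool over time, gate each channel."""
    pooled = f.mean(axis=1)                  # (C,)
    gate = 1.0 / (1.0 + np.exp(-pooled))     # sigmoid weights
    return f * gate[:, None]

def spatial_attention(f):
    """CBAM-style spatial (here: temporal) attention: pool over channels,
    gate each frame."""
    pooled = f.mean(axis=0)                  # (T,)
    gate = 1.0 / (1.0 + np.exp(-pooled))
    return f * gate[None, :]

x = np.sin(np.linspace(0, 8 * np.pi, 128))   # toy accelerometer trace
f = spatial_attention(channel_attention(conv1d_feats(x)))
print(f.shape)  # (12, 128): 3 kernel sizes x 4 filters, one vector per frame
```

A per-frame classifier head applied along the last axis would then yield a label for every time step, which is what distinguishes this formulation from fixed- or sliding-window recognition.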
Pages: 19