Human activity recognition from sensor data using spatial attention-aided CNN with genetic algorithm

Cited by: 40
Authors
Sarkar, Apu [1 ]
Hossain, S. K. Sabbir [1 ]
Sarkar, Ram [1 ]
Affiliations
[1] Jadavpur Univ, Dept Comp Sci & Engn, Kolkata, India
Keywords
Human activity recognition; Continuous wavelet transform; Deep learning; Spatial attention; Genetic Algorithm; Feature selection; Filter method; RECURRENT NEURAL-NETWORK; FEATURE-SELECTION; BEHAVIOR RECOGNITION; MUTUAL INFORMATION; ROBUST; CLASSIFICATION; SEGMENTATION;
DOI
10.1007/s00521-022-07911-0
CLC number
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Capturing the time and frequency relationships of time series signals poses an inherent challenge for automatic human activity recognition (HAR) from wearable sensor data. Extracting spatiotemporal context from the feature space of a sensor reading sequence is difficult for current recurrent, convolutional, and hybrid activity recognition models, and the large feature maps these models generate also degrade overall classification accuracy. To this end, in this work we propose a hybrid architecture for wearable-sensor-based HAR. We first use the Continuous Wavelet Transform to encode the time series of sensor data as multi-channel images. Then, we use a Spatial Attention-aided Convolutional Neural Network (CNN) to extract higher-dimensional features. To find the features most essential for recognizing human activities, we develop a novel feature selection (FS) method. To score feature fitness for the FS, we first employ three filter-based methods: Mutual Information (MI), Relief-F, and minimum redundancy maximum relevance (mRMR). The best feature subset is then chosen by discarding lower-ranked features using a modified version of the Genetic Algorithm (GA). Finally, a K-Nearest Neighbors (KNN) classifier categorizes the human activities. We conduct comprehensive experiments on five well-known, publicly accessible HAR datasets: UCI-HAR, WISDM, MHEALTH, PAMAP2, and HHAR. Our model significantly outperforms state-of-the-art models in classification performance, and the GA-based FS technique improves overall recognition accuracy while using fewer features. The source code of the paper is publicly available.
Pages: 5165-5191
Page count: 27
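The abstract mentions a "Spatial Attention-aided CNN" without detailing the module. A common formulation of spatial attention (the CBAM-style variant of Woo et al.) pools feature maps across channels, convolves the pooled maps, and reweights spatial positions; the sketch below, in PyTorch, shows that formulation as one plausible reading, not the paper's exact module.

```python
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    # CBAM-style spatial attention: pool across channels with mean and max,
    # fuse with a 7x7 convolution, squash to [0, 1], reweight each position.
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):                      # x: (B, C, H, W)
        avg = x.mean(dim=1, keepdim=True)      # (B, 1, H, W)
        mx, _ = x.max(dim=1, keepdim=True)     # (B, 1, H, W)
        attn = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * attn                        # same shape, reweighted

x = torch.randn(8, 32, 16, 64)                 # a batch of CNN feature maps
out = SpatialAttention()(x)                    # spatially reweighted maps
```

The rest of the pipeline (CWT encoding, filter-based ranking, GA feature selection, KNN) can be sketched end to end. This is a minimal illustration under stated assumptions, not the authors' implementation: synthetic windows stand in for real sensor data, flattened CWT coefficients stand in for the CNN features, only MI is used for ranking (Relief-F and mRMR would be analogous), and the GA is a plain binary GA rather than the paper's modified variant. It requires numpy, PyWavelets, and scikit-learn.

```python
import numpy as np
import pywt
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

# Synthetic stand-in: 200 one-channel windows whose dominant frequency
# depends on the activity label (4 classes), plus noise.
t = np.linspace(0, 1, 64)
y = rng.integers(0, 4, size=200)
X_raw = (np.sin(2 * np.pi * (2 + 3 * y)[:, None] * t)
         + 0.5 * rng.standard_normal((200, 64)))

# Step 1: encode each 1-D window as a 2-D time-frequency image via CWT.
def cwt_image(window, scales=np.arange(1, 17), wavelet="morl"):
    coeffs, _ = pywt.cwt(window, scales, wavelet)  # (n_scales, n_samples)
    return coeffs

X = np.stack([cwt_image(w) for w in X_raw]).reshape(len(y), -1)

# Step 2: filter-based ranking; keep the 64 best-ranked features.
mi = mutual_info_classif(X, y, random_state=0)
X = X[:, np.argsort(mi)[::-1][:64]]

# Step 3: GA over binary feature masks, 3-fold KNN accuracy as fitness.
knn = KNeighborsClassifier(n_neighbors=5)

def fitness(mask):
    if not mask.any():
        return 0.0
    acc = cross_val_score(knn, X[:, mask], y, cv=3).mean()
    return acc - 0.001 * mask.sum()               # prefer smaller subsets

pop = rng.random((20, X.shape[1])) < 0.5          # random initial masks
for _ in range(10):
    pop = pop[np.argsort([fitness(m) for m in pop])[::-1]]  # rank by fitness
    children = []
    while len(children) < len(pop) // 2:
        a, b = pop[rng.integers(0, 5, size=2)]    # parents from the elite
        cut = rng.integers(1, X.shape[1])
        child = np.concatenate([a[:cut], b[cut:]])  # one-point crossover
        children.append(child ^ (rng.random(X.shape[1]) < 0.02))  # mutation
    pop = np.vstack([pop[: len(pop) - len(children)], children])

best = max(pop, key=fitness)
print(f"{int(best.sum())} features selected, "
      f"CV accuracy {cross_val_score(knn, X[:, best], y, cv=3).mean():.3f}")
```

The small size penalty in the fitness function mirrors the abstract's observation that accuracy improves while using fewer features: subsets of equal accuracy are broken in favor of the smaller one.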