Lightweight human activity recognition method based on the MobileHARC model

Cited by: 1
Authors
Gong, Xingyu [1 ]
Zhang, Xinyang [1 ]
Li, Na [1 ]
Affiliations
[1] Xian Univ Sci & Technol, Dept Comp Sci & Technol, Xian, Shaanxi, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Human activity recognition; sensors; lightweight model; transformer; network
DOI
10.1080/21642583.2024.2328549
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812 ;
Abstract
In recent years, human activity recognition (HAR) based on wearable devices has been widely applied in health care and other fields. Most current HAR models are based on the Convolutional Neural Network (CNN), Long Short-Term Memory (LSTM), or their combination, and models based on the Transformer and its variants have recently been proposed. However, these models have sequential network structures and cannot attend to local and global features simultaneously, which reduces recognition performance. In addition, the substantial computational resources required by Transformers make them unsuitable for resource-constrained devices. The primary distinction of our proposed model from other hybrid models that combine a CNN and a Transformer is that it adopts a completely new parallel network architecture and focuses primarily on lightweight design. Specifically, we propose the Mobile Human Activity Recognition Conformer (MobileHARC), which adopts a parallel structure with a lightweight Transformer and a CNN as the backbone networks. Furthermore, we propose the Inverted Residual Lightweight Convolution Block and a Multiscale Lightweight Multi-Head Self-Attention mechanism. We systematically evaluate the proposed model on four public datasets. Experimental results show that MobileHARC achieves superior recognition performance while using fewer floating-point operations (FLOPs) and parameters than current models.
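The paper's implementation details are not included in this record, but the lightweight-convolution idea behind blocks such as the Inverted Residual Lightweight Convolution Block (depthwise separable convolutions, as popularized by the Xception work cited by the paper) can be sketched. The parameter-count formulas below are standard for these layer types; the channel and kernel sizes are illustrative assumptions, not values from the paper.

```python
def standard_conv_params(c_in: int, c_out: int, k: int) -> int:
    # Standard convolution: one k x k filter per (input, output) channel pair.
    return c_in * c_out * k * k

def depthwise_separable_params(c_in: int, c_out: int, k: int) -> int:
    # Depthwise stage: one k x k filter per input channel (no channel mixing),
    # followed by a pointwise (1 x 1) stage that mixes channels.
    return c_in * k * k + c_in * c_out

# Illustrative sizes (assumed, not from the paper): 64 -> 128 channels, 3 x 3 kernel.
std = standard_conv_params(64, 128, 3)        # 73728 parameters
sep = depthwise_separable_params(64, 128, 3)  # 8768 parameters
print(std, sep, round(std / sep, 1))          # prints: 73728 8768 8.4
```

Factoring the convolution this way shrinks both parameters and FLOPs by roughly a factor of k*k at wide layers, which is the kind of saving a model aimed at resource-constrained wearables relies on.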
Pages: 15
相关论文
共 59 条
  • [1] Attend and Discriminate: Beyond the State-of-the-Art for Human Activity Recognition UsingWearable Sensors
    Abedin, Alireza
    Ehsanpour, Mahsa
    Shi, Qinfeng
    Rezatofighi, Hamid
    Ranasinghe, Damith C.
    [J]. PROCEEDINGS OF THE ACM ON INTERACTIVE MOBILE WEARABLE AND UBIQUITOUS TECHNOLOGIES-IMWUT, 2021, 5 (01):
  • [2] A Robust Deep Learning Approach for Position-Independent Smartphone-Based Human Activity Recognition
    Almaslukh, Bandar
    Artoli, Abdel Monim
    Al-Muhtadi, Jalal
    [J]. SENSORS, 2018, 18 (11)
  • [3] Betancourt C, 2020, IEEE SYS MAN CYBERN, P1194, DOI [10.1109/smc42975.2020.9283381, 10.1109/SMC42975.2020.9283381]
  • [4] DGRU based human activity recognition using channel state information
    Bokhari, Syed Mohsin
    Sohaib, Sarmad
    Khan, Ahsan Raza
    Shafi, Muhammad
    Khan, Atta Ur Rehman
    [J]. MEASUREMENT, 2021, 167
  • [5] The Opportunity challenge: A benchmark database for on-body sensor-based activity recognition
    Chavarriaga, Ricardo
    Sagha, Hesam
    Calatroni, Alberto
    Digumarti, Sundara Tejaswi
    Troester, Gerhard
    Millan, Jose del R.
    Roggen, Daniel
    [J]. PATTERN RECOGNITION LETTERS, 2013, 34 (15) : 2033 - 2042
  • [6] Deep Learning for Sensor-based Human Activity Recognition: Overview, Challenges, and Opportunities
    Chen, Kaixuan
    Zhang, Dalin
    Yao, Lina
    Guo, Bin
    Yu, Zhiwen
    Liu, Yunhao
    [J]. ACM COMPUTING SURVEYS, 2021, 54 (04)
  • [7] Chen W., 2017, IEEE INT C B HLTH NE
  • [8] Cho KYHY, 2014, Arxiv, DOI arXiv:1406.1078
  • [9] Xception: Deep Learning with Depthwise Separable Convolutions
    Chollet, Francois
    [J]. 30TH IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2017), 2017, : 1800 - 1807
  • [10] da Silva FG, 2013, 2013 5TH IEEE INTERNATIONAL WORKSHOP ON ADVANCES IN SENSORS AND INTERFACES (IWASI), P20, DOI 10.1109/IWASI.2013.6576063