Human behavior recognition based on sparse transformer with channel attention mechanism

Cited by: 1
Authors
Cao, Keyan [1 ]
Wang, Mingrui [1 ]
Affiliations
[1] Shenyang Jianzhu Univ, Sch Comp Sci & Engn, Shenyang, Liaoning, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
human activity recognition; wearable biosensors; sparse transformer; attention; time series;
DOI
10.3389/fphys.2023.1239453
CLC Classification Number
Q4 [Physiology];
Subject Classification Code
071003;
Abstract
Human activity recognition (HAR) has recently become a popular research field in wearable sensor technology. By analyzing human behavior data, disease risks and potential health issues can be detected, and patients' rehabilitation progress can be evaluated. Following the Transformer's excellent performance in natural language processing and vision tasks, researchers have begun to explore its application to time series. Through its self-attention mechanism, the Transformer models long-term dependencies within a sequence and captures contextual information over extended periods. In this paper, we propose a hybrid model that combines a channel attention mechanism with the Transformer to improve the feature representation ability of sensor-based HAR. Extensive experiments were conducted on three public HAR datasets, and the results show that our network achieved accuracies of 98.10%, 97.21%, and 98.82% on the HARTH, PAMAP2, and UCI-HAR datasets, respectively; this overall performance is on par with the most advanced methods.
Pages: 10
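To make the architecture described in the abstract concrete, the following PyTorch sketch pairs a squeeze-and-excitation style channel attention block with a Transformer encoder applied to windowed sensor data. This is an illustration only, not the authors' implementation: the layer sizes, the standard dense-attention encoder standing in for the paper's sparse Transformer, the pooling and classifier head, and the example shapes (9 channels, 128-step windows, 6 classes, roughly matching UCI-HAR) are all assumptions.

```python
# Illustrative sketch only (not the authors' code): channel attention + Transformer
# encoder for sensor-based HAR. A dense nn.TransformerEncoder is used here as a
# stand-in for the paper's sparse Transformer; all sizes are assumed for the example.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style re-weighting of sensor channels."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                     # x: (batch, time, channels)
        weights = self.fc(x.mean(dim=1))      # squeeze over time -> (batch, channels)
        return x * weights.unsqueeze(1)       # scale each sensor channel

class HARTransformer(nn.Module):
    """Channel attention, then a Transformer encoder, then a linear classifier."""
    def __init__(self, channels: int, num_classes: int, d_model: int = 64):
        super().__init__()
        self.channel_attn = ChannelAttention(channels)
        self.embed = nn.Linear(channels, d_model)
        encoder_layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=2)
        self.head = nn.Linear(d_model, num_classes)

    def forward(self, x):                     # x: (batch, time, channels)
        h = self.encoder(self.embed(self.channel_attn(x)))
        return self.head(h.mean(dim=1))       # average-pool over time, then classify

# Example: 8 windows of 128 time steps from 9 sensor channels, 6 activity classes
model = HARTransformer(channels=9, num_classes=6)
logits = model(torch.randn(8, 128, 9))        # -> shape (8, 6)
```

A sparse-attention variant would replace the dense encoder layer with an attention module that restricts each query to a subset of keys; the channel attention block and classification head would remain unchanged.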